[Structured Streaming] multiple queries in one application

2020-04-29 Thread lec ssmi
Hi:
   I run many queries in one Spark Structured Streaming application. I
found that when one query fails, the other queries continue to run, but no
error information is surfaced. So the number of running queries keeps
shrinking, and we can't find the reason. Is there any good solution for
this situation?

Best
Lec Ssmi
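
A minimal way to surface these silent failures (a sketch, assuming all the
queries run in the same SparkSession): register a StreamingQueryListener,
whose termination event carries the failing query's exception message:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

val spark = SparkSession.builder.getOrCreate()

spark.streams.addListener(new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit =
    println(s"query started: ${event.name} (${event.id})")
  override def onQueryProgress(event: QueryProgressEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit =
    // exception is Some(message) when the query died with an error
    println(s"query ${event.id} terminated, exception = ${event.exception}")
})

Alternatively, spark.streams.awaitAnyTermination() in the driver rethrows
the first query failure instead of letting the application run on with
fewer and fewer queries.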


Spark job stuck at s3a-file-system metrics system started

2020-04-29 Thread Aniruddha P Tekade
Hello,

I am trying to run a Spark job that writes data to a bucket behind a
custom S3 endpoint, but I am stuck at this line of output and the job is
not moving forward at all -

20/04/29 16:03:59 INFO SharedState: Setting
hive.metastore.warehouse.dir ('null') to the value of
spark.sql.warehouse.dir
('file:/Users/abc/IdeaProjects/qct-air-detection/spark-warehouse/').
20/04/29 16:03:59 INFO SharedState: Warehouse path is
'file:/Users/abc/IdeaProjects/qct-air-detection/spark-warehouse/'.
20/04/29 16:04:01 WARN MetricsConfig: Cannot locate configuration:
tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
20/04/29 16:04:02 INFO MetricsSystemImpl: Scheduled Metric snapshot
period at 10 second(s).
20/04/29 16:04:02 INFO MetricsSystemImpl: s3a-file-system metrics system started

After a long wait it shows this -

org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on
test-bucket: com.amazonaws.SdkClientException: Unable to execute HTTP
request: Connect to s3-region0.mycloud.com:443
[s3-region0.mycloud.com/10.10.3.72] failed: Connection refused
(Connection refused): Unable to execute HTTP request: Connect to
s3-region0.mycloud.com:443 [s3-region0.mycloud.com/10.10.3.72] failed:
Connection refused (Connection refused)

However, I am able to access this bucket with the aws cli from the same
machine. I don't understand why it says it is unable to execute the HTTP
request.

I am using -

spark   3.0.0-preview2
hadoop-aws  3.2.0
aws-java-sdk-bundle 1.11.375

My spark code has following properties set for hadoop configuration -

spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl",
"org.apache.hadoop.fs.s3a.S3AFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", ENDPOINT);
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", ACCESS_KEY);
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", SECRET_KEY);
spark.sparkContext.hadoopConfiguration.set("fs.s3a.path.style.access", "true")

Can someone help me understand what is wrong here? Is there anything else
I need to configure? The custom s3-endpoint and its keys are valid and
working from an aws cli profile. What is wrong with the Scala code here?

val dataStreamWriter: DataStreamWriter[Row] =
PM25quality.select(dayofmonth(current_date()) as "day",
month(current_date()) as "month", year(current_date()) as "year")
  .writeStream
  .format("parquet")
  .option("checkpointLocation", "/Users/abc/Desktop/qct-checkpoint/")
  .outputMode("append")
  .trigger(Trigger.ProcessingTime("15 seconds"))
  .partitionBy("year", "month", "day")
  .option("path", "s3a://test-bucket")

val streamingQuery: StreamingQuery = dataStreamWriter.start()

Aniruddha
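
A few S3A client settings worth comparing against what the aws cli does (a
hedged sketch: the property names are standard hadoop-aws options, but the
port and TLS values are placeholders). "Connection refused" on port 443
often just means the client is dialing the wrong port or TLS mode for a
custom endpoint, which the cli may be getting right via --endpoint-url:

val hc = spark.sparkContext.hadoopConfiguration
// hypothetical port; match whatever the cli's --endpoint-url uses
hc.set("fs.s3a.endpoint", "s3-region0.mycloud.com:9020")
// only if the custom endpoint serves plain HTTP rather than HTTPS
hc.set("fs.s3a.connection.ssl.enabled", "false")
// fail fast while debugging instead of retrying for a long time
hc.set("fs.s3a.attempts.maximum", "3")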


Trump and modi butcher of Gujarat as Allies. Modi was banned to enter by US courts

2020-04-29 Thread James Mitchel


What is a VPN ? freedom from natzi owen censorship

2020-04-29 Thread James Mitchel


https://www.lausanne.org/content/lga/2019-05/the-rise-of-hindu-fundamentalism?gclid=Cj0KCQjwy6T1BRDXARIsAIqCTXpmVG-8QJwiOSTVH8fkhRXj3QXUufApRXbPJUTpLlZ4f4wWgFNlPVkaAndGEALw_wcB

2020-04-29 Thread James Mitchel

https://globalnews.ca/news/6823170/canadian-politicians-targeted-indian-intelligence/

Natzi Owen of Apache.org and two hindutwa against me. Characters who stole
last remaining dignity from Apache tribe. Allies

Abusing me
A price. Worth paying.
I will chose different technology to put bread on the table.




Re: Filtering on multiple columns in spark

2020-04-29 Thread Edgardo Szrajber
Maybe create a column with the "lit" function for the variables you are
comparing against.
Bentzi

Sent from Yahoo Mail on Android

On Wed, Apr 29, 2020 at 18:40, Mich Talebzadeh wrote:
The below line works

 

val c =
newDF.withColumn("target_mobile_no", col("target_mobile_no").cast(StringType)).
  filter("length(target_mobile_no) != 10 OR
substring(target_mobile_no,1,1) != '7'")

 

 

But not the following when the values are passed as parameters

 

val rejectedDF =
newDF.withColumn("target_mobile_no", col("target_mobile_no").cast(StringType)).
  filter("length(target_mobile_no) != broadcastStagingConfig.mobileNoLength OR
substring(target_mobile_no,1,1) != broadcastStagingConfig.ukMobileNoStart")

 

I think it cannot interpret them




Dr Mich Talebzadeh

 

LinkedIn  
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 

http://talebzadehmich.wordpress.com




Disclaimer: Use it at your own risk. Any and all responsibility for any loss,
damage or destruction of data or any other property which may arise from relying
on this email's technical content is explicitly disclaimed. The author will in
no case be liable for any monetary damages arising from such loss, damage or
destruction.

 


On Wed, 29 Apr 2020 at 13:25, Mich Talebzadeh  wrote:

OK how do you pass variables for 10 and '7' 

 val rejectedDF = newDF.withColumn("target_mobile_no", 
col("target_mobile_no").cast(StringType)).

   filter("length(target_mobile_no) != 10 OR 
substring(target_mobile_no,1,1) != '7'")

in the above in Scala. Neither the $ value below nor lit() is working!

   val rejectedDF =
newDF.withColumn("target_mobile_no", col("target_mobile_no").cast(StringType)).
  filter("length(target_mobile_no) !=
${broadcastStagingConfig.mobileNoLength} OR substring(target_mobile_no,1,1)
!= ${broadcastStagingConfig.ukMobileNoStart}")




Thanks











Dr Mich Talebzadeh

 

LinkedIn  
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 

http://talebzadehmich.wordpress.com




Disclaimer: Use it at your own risk. Any and all responsibility for any loss,
damage or destruction of data or any other property which may arise from relying
on this email's technical content is explicitly disclaimed. The author will in
no case be liable for any monetary damages arising from such loss, damage or
destruction.

 


On Wed, 29 Apr 2020 at 10:15, Mich Talebzadeh  wrote:

Hi Zhang,
Yes the SQL way worked fine
  val rejectedDF =
newDF.withColumn("target_mobile_no", col("target_mobile_no").cast(StringType)).
  filter("length(target_mobile_no) != 10 OR
substring(target_mobile_no,1,1) != '7'")

Many thanks,

Dr Mich Talebzadeh

 

LinkedIn  
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 

http://talebzadehmich.wordpress.com




Disclaimer: Use it at your own risk. Any and all responsibility for any loss,
damage or destruction of data or any other property which may arise from relying
on this email's technical content is explicitly disclaimed. The author will in
no case be liable for any monetary damages arising from such loss, damage or
destruction.

 


On Wed, 29 Apr 2020 at 09:51, ZHANG Wei  wrote:

AFAICT, maybe Spark SQL built-in functions[1] can help as below:

scala> df.show()
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+


scala> df.filter("length(name) == 4 or substring(name, 1, 1) == 'J'").show()
+---+------+
|age|  name|
+---+------+
| 30|  Andy|
| 19|Justin|
+---+------+


-- 
Cheers,
-z
[1] https://spark.apache.org/docs/latest/api/sql/index.html

On Wed, 29 Apr 2020 08:45:26 +0100
Mich Talebzadeh  wrote:

> Hi,
> 
> 
> 
> Trying to filter a dataframe with multiple conditions using OR "||" as below
> 
> 
> 
>   val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
> 
>                    filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
> 
> 
> 
> This throws this error
> 
> 
> 
> res12: org.apache.spark.sql.DataFrame = []
> 
> :49: error: value || is not a member of Int
> 
>                           filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
> 
> 
> 
> Try another way
> 
> 
> 
> val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
> 
>                    filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
> 
>   rejectedDF.createOrReplaceTempView("tmp")
> 
> 
> 
> Tried few options but I am still getting this error
> 
> 
> 
> :49: error: value !=== is not a member of
> org.apache.spark.sql.Column
> 
>                           filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
> 
>                                                       

Re: Filtering on multiple columns in spark

2020-04-29 Thread Mich Talebzadeh
The below line works



val c = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

 filter("length(target_mobile_no) != 10 OR
substring(target_mobile_no,1,1) != '7'")





But not the following when the values are passed as parameters



val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

  filter(*"*length(target_mobile_no) !=
broadcastStagingConfig.mobileNoLength OR substring(target_mobile_no,1,1) !=
broadcastStagingConfig.ukMobileNoStart*"*)



I think it cannot interpret them
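
A minimal sketch of the likely fix (the values below are hypothetical
stand-ins for the broadcastStagingConfig fields): the ${...} placeholders
are only expanded if the filter string uses Scala's s-interpolator;
otherwise Spark receives them literally and cannot parse the expression:

val mobileNoLength = 10    // stand-in for broadcastStagingConfig.mobileNoLength
val ukMobileNoStart = "7"  // stand-in for broadcastStagingConfig.ukMobileNoStart

val rejectedDF = newDF
  .withColumn("target_mobile_no", col("target_mobile_no").cast(StringType))
  .filter(s"length(target_mobile_no) != $mobileNoLength OR " +
          s"substring(target_mobile_no,1,1) != '$ukMobileNoStart'")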


Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 29 Apr 2020 at 13:25, Mich Talebzadeh 
wrote:

> OK how do you pass variables for 10 and '7'
>
>  val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
>
>filter("length(target_mobile_no) != 10 OR
> substring(target_mobile_no,1,1) != '7'")
>
> in above in Scala. Neither $ value below or lit() are working!
>
>val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
>
>  filter("length(target_mobile_no) !=
> ${broadcastStagingConfig.mobileNoLength} OR substring(target_mobile_no,1,1)
> != ${broadcastStagingConfig.ukMobileNoStart}")
>
>
> Thanks
>
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 29 Apr 2020 at 10:15, Mich Talebzadeh 
> wrote:
>
>> Hi Zhang,
>>
>> Yes the SQL way worked fine
>>
>>   val rejectedDF = newDF.withColumn("target_mobile_no",
>> col("target_mobile_no").cast(StringType)).
>>
>>filter("length(target_mobile_no) != 10 OR
>> substring(target_mobile_no,1,1) != '7'")
>>
>> Many thanks,
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * 
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> *
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Wed, 29 Apr 2020 at 09:51, ZHANG Wei  wrote:
>>
>>> AFAICT, maybe Spark SQL built-in functions[1] can help as below:
>>>
>>> scala> df.show()
>>> +----+-------+
>>> | age|   name|
>>> +----+-------+
>>> |null|Michael|
>>> |  30|   Andy|
>>> |  19| Justin|
>>> +----+-------+
>>>
>>>
>>> scala> df.filter("length(name) == 4 or substring(name, 1, 1) ==
>>> 'J'").show()
>>> +---+------+
>>> |age|  name|
>>> +---+------+
>>> | 30|  Andy|
>>> | 19|Justin|
>>> +---+------+
>>>
>>>
>>> --
>>> Cheers,
>>> -z
>>> [1] https://spark.apache.org/docs/latest/api/sql/index.html
>>>
>>> On Wed, 29 Apr 2020 08:45:26 +0100
>>> Mich Talebzadeh  wrote:
>>>
>>> > Hi,
>>> >
>>> >
>>> >
>>> > Trying to filter a dataframe with multiple conditions using OR "||" as
>>> below
>>> >
>>> >
>>> >
>>> >   val rejectedDF = newDF.withColumn("target_mobile_no",
>>> > col("target_mobile_no").cast(StringType)).
>>> >
>>> >filter(length(col("target_mobile_no")) !== 10 ||
>>> > substring(col("target_mobile_no"),1,1) !== "7")
>>> >
>>> >
>>> >
>>> > This throws this error
>>> >
>>> >
>>> >
>>> > res12: org.apache.spark.sql.DataFrame = []
>>> >
>>> > :49: error: value || is not a member of Int
>>> >
>>> >   filter(length(col("target_mobile_no")) !==
>>> 10 ||
>>> > substring(col("target_mobile_no"),1,1) !== "7")
>>> >
>>> >
>>> >
>>> > Try another way
>>> >
>>> >
>>> >
>>> > val rejectedDF = newDF.withColumn("target_mobile_no",
>>> > col("target_mobile_no").cast(StringType)).
>>> >
>>> >

Re: Lightbend Scala professional training & certification

2020-04-29 Thread Som Lima
I think I am going to focus on Spring Boot and Apache Camel.

I'll do Apache Spark in the background.

So see you.

I am going to unsubscribe here.




On Wed, 29 Apr 2020, 13:58 Som Lima,  wrote:

> The end value is important  for me.
>
> I think certification in commercial framework is most valuable for me.
>
> What is free is the access to the framework. That is invaluable.
>
> In the past Access to commercial frameworks was not possible .
> One could only get it if a company sent you.
>
> This involved one week intense training courses. costing in the order of
> USD $3000. That is still the case. With Spring for example you can train
> freely and sit the certification for  minimal fee ($200) in the event a
> company doesn't send you on a one week USD $3000 course spring course.
>
> ACCESS : Being able to download and install a commercial framework  is
> priceless.
>
> I think  frameworks like Spark and accompanying mathematical  concepts
> cannot be learned in one week but highly achievable  for proficient
> commercial development in two months. Resources for learning relevant maths
> are abundant. Actually relevant maths are encapsulated in  APIs.  That is
> Bad news for masters and phD mathematicians.
>
> During one such one week course my colleague , an american outspoken type
> said .  he was right.  As it happened the framework fell short due to
> concepts I knew from commercial software development experience.
>
>
> I would say certification in Java is only worth 1/10  to that of
> certification in a java framework like J2ee , spring , Spark.
> Even certification as a DBA is more valuable than Language Certification
> because simply you can show you can operate equipment like a database.
>
> With language certification you can build an IDE. Same as  people who buy
> a lathe the first thing they do is use it to build another Lathe.
>
> On Wed, 29 Apr 2020, 13:09 Mich Talebzadeh, 
> wrote:
>
>> I don't think that will be free!
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * 
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> *
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Wed, 29 Apr 2020 at 13:01, Som Lima  wrote:
>>
>>> Is there a databricks or other professional certification for Apache
>>> Spark  ?
>>>
>>>
>>> On Wed, 29 Apr 2020, 11:29 Mich Talebzadeh, 
>>> wrote:
>>>
 Hi,

 Has anyone had experience of taking training courses  with Lightbend
 training on Scala

 I believe they are offering free Scala courses and certifications.

 Thanks,

 Dr Mich Talebzadeh



 LinkedIn * 
 https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
 *



 http://talebzadehmich.wordpress.com


 *Disclaimer:* Use it at your own risk. Any and all responsibility for
 any loss, damage or destruction of data or any other property which may
 arise from relying on this email's technical content is explicitly
 disclaimed. The author will in no case be liable for any monetary damages
 arising from such loss, damage or destruction.



>>>


Re: On spam messages

2020-04-29 Thread Deepak Sharma
Much appreciated Sean.
Thanks.


On Wed, 29 Apr 2020 at 6:48 PM, Sean Owen  wrote:

> I am subscribed to this list to watch for a certain person's new
> accounts, which are posting obviously off-topic and inappropriate
> messages. It goes without saying that this is unacceptable and a CoC
> violation, and anyone posting that will be immediately removed and
> blocked.
>
> In the meantime, please don't prolong and expand these threads by
> engaging the very off-topic discussion on the list. You can email me
> privately to ensure I've caught any such messages.
>
> Yes, the original account was removed for this behavior and the new
> one will be too immediately.
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
> --
Thanks
Deepak
www.bigdatabig.com
www.keosha.net


On spam messages

2020-04-29 Thread Sean Owen
I am subscribed to this list to watch for a certain person's new
accounts, which are posting obviously off-topic and inappropriate
messages. It goes without saying that this is unacceptable and a CoC
violation, and anyone posting that will be immediately removed and
blocked.

In the meantime, please don't prolong and expand these threads by
engaging the very off-topic discussion on the list. You can email me
privately to ensure I've caught any such messages.

Yes, the original account was removed for this behavior and the new
one will be too immediately.

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Lightbend Scala professional training & certification

2020-04-29 Thread Som Lima
The end value is important for me.

I think certification in a commercial framework is most valuable for me.

What is free is the access to the framework. That is invaluable.

In the past, access to commercial frameworks was not possible.
One could only get it if a company sent you.

This involved one-week intensive training courses costing on the order of
USD $3000. That is still the case. With Spring, for example, you can train
freely and sit the certification for a minimal fee ($200) in the event a
company doesn't send you on a one-week USD $3000 Spring course.

ACCESS: Being able to download and install a commercial framework is
priceless.

I think frameworks like Spark and the accompanying mathematical concepts
cannot be learned in one week, but proficiency for commercial development
is highly achievable in two months. Resources for learning the relevant
maths are abundant. Actually, the relevant maths are encapsulated in APIs.
That is bad news for masters and PhD mathematicians.

During one such one-week course an outspoken American colleague of mine
spoke up, and he was right. As it happened, the framework fell short due to
concepts I knew from commercial software development experience.


I would say certification in Java is only worth 1/10 of certification in a
Java framework like J2EE, Spring, or Spark.
Even certification as a DBA is more valuable than language certification,
simply because you can show you can operate equipment like a database.

With language certification you can build an IDE. The same as people who
buy a lathe: the first thing they do is use it to build another lathe.

On Wed, 29 Apr 2020, 13:09 Mich Talebzadeh, 
wrote:

> I don't think that will be free!
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 29 Apr 2020 at 13:01, Som Lima  wrote:
>
>> Is there a databricks or other professional certification for Apache
>> Spark  ?
>>
>>
>> On Wed, 29 Apr 2020, 11:29 Mich Talebzadeh, 
>> wrote:
>>
>>> Hi,
>>>
>>> Has anyone had experience of taking training courses  with Lightbend
>>> training on Scala
>>>
>>> I believe they are offering free Scala courses and certifications.
>>>
>>> Thanks,
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn * 
>>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> *
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>


Re: Filtering on multiple columns in spark

2020-04-29 Thread Mich Talebzadeh
OK how do you pass variables for 10 and '7'

 val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

   filter("length(target_mobile_no) != 10 OR
substring(target_mobile_no,1,1) != '7'")

in the above in Scala. Neither the $ value below nor lit() is working!

   val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

 filter("length(target_mobile_no) !=
${broadcastStagingConfig.mobileNoLength} OR substring(target_mobile_no,1,1)
!= ${broadcastStagingConfig.ukMobileNoStart}")


Thanks





Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 29 Apr 2020 at 10:15, Mich Talebzadeh 
wrote:

> Hi Zhang,
>
> Yes the SQL way worked fine
>
>   val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
>
>filter("length(target_mobile_no) != 10 OR
> substring(target_mobile_no,1,1) != '7'")
>
> Many thanks,
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 29 Apr 2020 at 09:51, ZHANG Wei  wrote:
>
>> AFAICT, maybe Spark SQL built-in functions[1] can help as below:
>>
>> scala> df.show()
>> +----+-------+
>> | age|   name|
>> +----+-------+
>> |null|Michael|
>> |  30|   Andy|
>> |  19| Justin|
>> +----+-------+
>>
>>
>> scala> df.filter("length(name) == 4 or substring(name, 1, 1) ==
>> 'J'").show()
>> +---+------+
>> |age|  name|
>> +---+------+
>> | 30|  Andy|
>> | 19|Justin|
>> +---+------+
>>
>>
>> --
>> Cheers,
>> -z
>> [1] https://spark.apache.org/docs/latest/api/sql/index.html
>>
>> On Wed, 29 Apr 2020 08:45:26 +0100
>> Mich Talebzadeh  wrote:
>>
>> > Hi,
>> >
>> >
>> >
>> > Trying to filter a dataframe with multiple conditions using OR "||" as
>> below
>> >
>> >
>> >
>> >   val rejectedDF = newDF.withColumn("target_mobile_no",
>> > col("target_mobile_no").cast(StringType)).
>> >
>> >filter(length(col("target_mobile_no")) !== 10 ||
>> > substring(col("target_mobile_no"),1,1) !== "7")
>> >
>> >
>> >
>> > This throws this error
>> >
>> >
>> >
>> > res12: org.apache.spark.sql.DataFrame = []
>> >
>> > :49: error: value || is not a member of Int
>> >
>> >   filter(length(col("target_mobile_no")) !== 10
>> ||
>> > substring(col("target_mobile_no"),1,1) !== "7")
>> >
>> >
>> >
>> > Try another way
>> >
>> >
>> >
>> > val rejectedDF = newDF.withColumn("target_mobile_no",
>> > col("target_mobile_no").cast(StringType)).
>> >
>> >filter(length(col("target_mobile_no")) !=== 10 ||
>> > substring(col("target_mobile_no"),1,1) !=== "7")
>> >
>> >   rejectedDF.createOrReplaceTempView("tmp")
>> >
>> >
>> >
>> > Tried few options but I am still getting this error
>> >
>> >
>> >
>> > :49: error: value !=== is not a member of
>> > org.apache.spark.sql.Column
>> >
>> >   filter(length(col("target_mobile_no")) !===
>> 10 ||
>> > substring(col("target_mobile_no"),1,1) !=== "7")
>> >
>> >  ^
>> >
>> > :49: error: value || is not a member of Int
>> >
>> >   filter(length(col("target_mobile_no")) !===
>> 10 ||
>> > substring(col("target_mobile_no"),1,1) !=== "7")
>> >
>> >
>> >
>> > I can create a dataframe for each filter but that does not look
>> efficient
>> > to me?
>> >
>> >
>> >
>> > Thanks
>> >
>> >
>> >
>> > Dr Mich Talebzadeh
>> >
>> >
>> >
>> > LinkedIn *
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> > <
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> >*
>> >
>> >
>> >
>> > http://talebzadehmich.wordpress.com
>> >
>> >
>> > *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any
>> > loss, damage or destruction of data or any other property which may
>> arise
>> > from relying on this email's technical content 

Re: Lightbend Scala professional training & certification

2020-04-29 Thread Mich Talebzadeh
I don't think that will be free!

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 29 Apr 2020 at 13:01, Som Lima  wrote:

> Is there a databricks or other professional certification for Apache
> Spark  ?
>
>
> On Wed, 29 Apr 2020, 11:29 Mich Talebzadeh, 
> wrote:
>
>> Hi,
>>
>> Has anyone had experience of taking training courses  with Lightbend
>> training on Scala
>>
>> I believe they are offering free Scala courses and certifications.
>>
>> Thanks,
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * 
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> *
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>


Re: Lightbend Scala professional training & certification

2020-04-29 Thread Som Lima
Is there a databricks or other professional certification for Apache Spark
?


On Wed, 29 Apr 2020, 11:29 Mich Talebzadeh, 
wrote:

> Hi,
>
> Has anyone had experience of taking training courses  with Lightbend
> training on Scala
>
> I believe they are offering free Scala courses and certifications.
>
> Thanks,
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>


Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread akshay naidu
Today, the entire Indian nation is mourning the demise of Irfan Khan, a
true Indian Muslim. And this idiot Zahid Amin, or whoever created this
bot (not sure if it's a bot or something), is spreading rumors about India.
Rights given to Muslims in India are much more open than in any other
Muslim-majority country. Well, it's not this guy's fault entirely. He's just
an idiot who's been brainwashed as a kid by the kind of people who run camps
against humanity. These idiots are made to believe that by doing such
nonsense stuff, 72 *hoor* will welcome them after death.. bullsh*t.
And it's because of your kind that the other honest and real Muslims suffer,
not just in India but in every part of the world.

MODERATOR, PLEASE SPAM THIS ACCOUNT.

On Wed, Apr 29, 2020 at 12:38 PM Zahid Amin  wrote:

> EVIL PROSPERS ONLY WHEN GOOD MEN DO NOTHING.
>
> I have done some good today. I unveiled Evil.
>
> FACT:
>  10 million Kasmiris Muslim and Chinese on Lockdown since August 2019.
>
> FACT : Citizen Amendment Bill
> Cast out non Hindu from India Beginning with Muslims.
>
> FACT:  OFFICIAL USA Report Religious Freedom :Recognition of those two
> Facts. India most Dangerous country for ethnic minorities.
>
> FACT  Pakistan created in 1947 for Muslims and ethnic minorities to live
> separate.
>
> FACT:  you Indians in IT industry are all brahmin etc. The purists the
> Hindutwa.
>
>
> FACT: My tribe fought in the 1857 Indian Rebellion and I am my next five
> generations without home.
>
>
>
>
>
>
>
>
>
>
>
> Sent: Wednesday, April 29, 2020 at 8:32 AM
> > From: "Deepak Sharma" 
> > To: "Gaurav Agarwal" 
> > Cc: "Zahid Amin" , "user" 
> > Subject: Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA
> Religious Freedom Report out TODAY
> >
> > I am unsubscribing until these hatemongers like Zahid Amin are removed or
> > blocked .
> > FYI Zahid Amin , Indian govt rejected the false report already .
> >
> >
> >
> > On Wed, 29 Apr 2020 at 11:58 AM, Gaurav Agarwal 
> > wrote:
> >
> > > Spark moderator supress this user please. Unnecessary Spam or apache
> spark
> > > account is hacked ?
> > >
> > > On Wed, Apr 29, 2020, 11:56 AM Zahid Amin  wrote:
> > >
> > >> How can it be rumours   ?
> > >> Of course you want  to suppress me.
> > >> Suppress USA official Report out TODAY .
> > >>
> > >> > Sent: Wednesday, April 29, 2020 at 8:17 AM
> > >> > From: "Deepak Sharma" 
> > >> > To: "Zahid Amin" 
> > >> > Cc: user@spark.apache.org
> > >> > Subject: Re: India Most Dangerous : USA Religious Freedom Report
> > >> >
> > >> > Can someone block this email ?
> > >> > He is spreading rumours and spamming.
> > >> >
> > >> > On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin 
> > >> wrote:
> > >> >
> > >> > > USA report states that India is now the most dangerous country for
> > >> Ethnic
> > >> > > Minorities.
> > >> > >
> > >> > > Remember Martin Luther King.
> > >> > >
> > >> > >
> > >> > >
> > >>
> https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
> > >> > >
> > >> > > It began with Kasmir and still in locked down Since August 2019.
> > >> > >
> > >> > > The Hindutwa  want to eradicate all minorities .
> > >> > > The Apache foundation is infested with these Hindutwa purists and
> > >> their
> > >> > > sympathisers.
> > >> > > Making Sure all Muslims are kept away from IT industry. Using you
> to
> > >> help
> > >> > > them.
> > >> > >
> > >> > > Those people in IT you deal with are purists yet you are not
> welcome
> > >> India.
> > >> > >
> > >> > > The recognition of  Hindutwa led to the creation of Pakistan in
> 1947.
> > >> > >
> > >> > > Evil propers when good men do nothing.
> > >> > > The genocide is not coming . It is Here.
> > >> > > I ask you please think and act.
> > >> > > Protect the Muslims from Indian Continent.
> > >> > >
> > >> > >
> -
> > >> > > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> > >> > >
> > >> > > --
> > >> > Thanks
> > >> > Deepak
> > >> > www.bigdatabig.com
> > >> > www.keosha.net
> > >> >
> > >>
> > >> -
> > >> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> > >>
> > >> --
> > Thanks
> > Deepak
> > www.bigdatabig.com
> > www.keosha.net
> >
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Lightbend Scala professional training & certification

2020-04-29 Thread Mich Talebzadeh
Hi,

Has anyone had experience of taking training courses  with Lightbend
training on Scala

I believe they are offering free Scala courses and certifications.

Thanks,

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.


Spark 3 Release

2020-04-29 Thread Michael Edwards
Hi,

We’re using Spark 2.4.5 in our project with Java 1.8.

We wanted to upgrade to Java 11 but it seems we need to upgrade to Spark 3
to make this possible.

I’ve tried it with v3.0.0-preview2 but wanted to ask if there was a release
due in the near future.

Regards,

Mike

-- 
Michael Edwards
Tel: 07790 540417
Email: michael.edwa...@coreplex.co.uk


Re: [Structured Streaming] NullPointerException in long running query

2020-04-29 Thread ZHANG Wei
Is there any chance we could also print the least recent failure in the
stage, like the following most recent failure, before the driver stacktrace?

> >>   Caused by: org.apache.spark.SparkException: Job aborted due to stage
> >> failure: Task 10 in stage 1.0 failed 4 times, most recent failure: Lost
> >> task 10.3 in stage 1.0 (TID 81, spark6, executor 1):
> >> java.lang.NullPointerException
> >> Driver stacktrace:

-- 
Cheers,
-z

On Tue, 28 Apr 2020 23:48:17 -0700
"Shixiong(Ryan) Zhu"  wrote:

> The stack trace is omitted by JVM when an exception is thrown too
> many times. This usually happens when you have multiple Spark tasks on the
> same executor JVM throwing the same exception. See
> https://stackoverflow.com/a/3010106
> 
> Best Regards,
> Ryan
> 
> 
> On Tue, Apr 28, 2020 at 10:45 PM lec ssmi  wrote:
> 
> > It should be a problem of my data quality. It's curious why the
> > driver-side exception stack has no specific exception information.
> >
> > Edgardo Szrajber  于2020年4月28日周二 下午3:32写道:
> >
> >> The exception occured while aborting the stage. It might be interesting
> >> to try to understand the reason for the abortion.
> >> Maybe timeout? How long the query run?
> >> Bentzi
> >>
> >> Sent from Yahoo Mail on Android
> >> 
> >>
> >> On Tue, Apr 28, 2020 at 9:25, Jungtaek Lim
> >>  wrote:
> >> The root cause of exception is occurred in executor side "Lost task 10.3
> >> in stage 1.0 (TID 81, spark6, executor 1)" so you may need to check there.
> >>
> >> On Tue, Apr 28, 2020 at 2:52 PM lec ssmi  wrote:
> >>
> >> Hi:
> >>   One of my long-running queries occasionally encountered the following
> >> exception:
> >>
> >>
> >>   Caused by: org.apache.spark.SparkException: Job aborted due to stage
> >> failure: Task 10 in stage 1.0 failed 4 times, most recent failure: Lost
> >> task 10.3 in stage 1.0 (TID 81, spark6, executor 1):
> >> java.lang.NullPointerException
> >> Driver stacktrace:
> >> at org.apache.spark.scheduler.DAGScheduler.org
> >> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
> >> at
> >> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
> >> at scala.Option.foreach(Option.scala:257)
> >> at
> >> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
> >> at
> >> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
> >> at
> >> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
> >> at
> >> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
> >> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> >> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
> >> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
> >> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
> >> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
> >> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
> >> at
> >> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:929)
> >> at
> >> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:927)
> >> at
> >> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> >> at
> >> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> >> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
> >> at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:927)
> >> at
> >> org.apache.spark.sql.execution.streaming.ForeachSink.addBatch(ForeachSink.scala:49)
> >> at
> >> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:475)
> >> at
> >> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
> >> at
> >> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:473)
> >> at
> >> 

Re: Filtering on multiple columns in spark

2020-04-29 Thread Mich Talebzadeh
Hi Zhang,

Yes the SQL way worked fine

  val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

   filter("length(target_mobile_no) != 10 OR
substring(target_mobile_no,1,1) != '7'")

Many thanks,

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 29 Apr 2020 at 09:51, ZHANG Wei  wrote:

> AFAICT, maybe Spark SQL built-in functions[1] can help as below:
>
> scala> df.show()
> +----+-------+
> | age|   name|
> +----+-------+
> |null|Michael|
> |  30|   Andy|
> |  19| Justin|
> +----+-------+
>
>
> scala> df.filter("length(name) == 4 or substring(name, 1, 1) ==
> 'J'").show()
> +---+------+
> |age|  name|
> +---+------+
> | 30|  Andy|
> | 19|Justin|
> +---+------+
>
>
> --
> Cheers,
> -z
> [1] https://spark.apache.org/docs/latest/api/sql/index.html
>
> On Wed, 29 Apr 2020 08:45:26 +0100
> Mich Talebzadeh  wrote:
>
> > Hi,
> >
> >
> >
> > Trying to filter a dataframe with multiple conditions using OR "||" as
> below
> >
> >
> >
> >   val rejectedDF = newDF.withColumn("target_mobile_no",
> > col("target_mobile_no").cast(StringType)).
> >
> >filter(length(col("target_mobile_no")) !== 10 ||
> > substring(col("target_mobile_no"),1,1) !== "7")
> >
> >
> >
> > This throws this error
> >
> >
> >
> > res12: org.apache.spark.sql.DataFrame = []
> >
> > :49: error: value || is not a member of Int
> >
> >   filter(length(col("target_mobile_no")) !== 10
> ||
> > substring(col("target_mobile_no"),1,1) !== "7")
> >
> >
> >
> > Try another way
> >
> >
> >
> > val rejectedDF = newDF.withColumn("target_mobile_no",
> > col("target_mobile_no").cast(StringType)).
> >
> >filter(length(col("target_mobile_no")) !=== 10 ||
> > substring(col("target_mobile_no"),1,1) !=== "7")
> >
> >   rejectedDF.createOrReplaceTempView("tmp")
> >
> >
> >
> > Tried few options but I am still getting this error
> >
> >
> >
> > :49: error: value !=== is not a member of
> > org.apache.spark.sql.Column
> >
> >   filter(length(col("target_mobile_no")) !=== 10
> ||
> > substring(col("target_mobile_no"),1,1) !=== "7")
> >
> >  ^
> >
> > :49: error: value || is not a member of Int
> >
> >   filter(length(col("target_mobile_no")) !=== 10
> ||
> > substring(col("target_mobile_no"),1,1) !=== "7")
> >
> >
> >
> > I can create a dataframe for each filter but that does not look efficient
> > to me?
> >
> >
> >
> > Thanks
> >
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
>


Re: Filtering on multiple columns in spark

2020-04-29 Thread ZHANG Wei
AFAICT, maybe Spark SQL built-in functions[1] can help as below:

scala> df.show()
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+


scala> df.filter("length(name) == 4 or substring(name, 1, 1) == 'J'").show()
+---+------+
|age|  name|
+---+------+
| 30|  Andy|
| 19|Justin|
+---+------+


-- 
Cheers,
-z
[1] https://spark.apache.org/docs/latest/api/sql/index.html

On Wed, 29 Apr 2020 08:45:26 +0100
Mich Talebzadeh  wrote:

> Hi,
> 
> 
> 
> Trying to filter a dataframe with multiple conditions using OR "||" as below
> 
> 
> 
>   val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
> 
>filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
> 
> 
> 
> This throws this error
> 
> 
> 
> res12: org.apache.spark.sql.DataFrame = []
> 
> :49: error: value || is not a member of Int
> 
>   filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
> 
> 
> 
> Try another way
> 
> 
> 
> val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
> 
>filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
> 
>   rejectedDF.createOrReplaceTempView("tmp")
> 
> 
> 
> Tried few options but I am still getting this error
> 
> 
> 
> :49: error: value !=== is not a member of
> org.apache.spark.sql.Column
> 
>   filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
> 
>  ^
> 
> :49: error: value || is not a member of Int
> 
>   filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
> 
> 
> 
> I can create a dataframe for each filter but that does not look efficient
> to me?
> 
> 
> 
> Thanks
> 
> 
> 
> Dr Mich Talebzadeh
> 
> 
> 
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
> 
> 
> 
> http://talebzadehmich.wordpress.com
> 
> 
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Converting a date to milliseconds with time zone in Scala Eclipse IDE

2020-04-29 Thread Som Lima
Also you may be surprised to learn that I started programming in Scala
just yesterday. I was really pleased I had a challenge to solve rather than
copying example programmes, which can be boring.

Judging from the answers received, I think some may find this information
useful.

I used a Scala-specific IDE I got from http://scala-ide.org.

If you do use it, there is a bug I worked around to make it work.
I added -vm to the eclipse.ini file. The BUG is:
then on the NEXT line you put the path to jdk8.
Other jdk versions can also cause other errors.

Eclipse.ini

-startup
plugins/org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.500.v20170531-1133
-Xmx256m
-Xms200m

-XX:MaxPermSize=384m
-vm
/path/to/java/jdk8u242-b08/bin


On Tue, 28 Apr 2020, 22:22 Mich Talebzadeh, 
wrote:

> Hi,
>
> Thank you all,
>
> I am just thinking of passing that date   06/04/2020 12:03:43  and
> getting the correct format from the module. In effect
>
> This date format yyyy-MM-dd'T'HH:mm:ss.SZ as pattern
>
> in other words rather than new Date()  pass "06/04/2020 12:03:43" as string
>
> Regards,
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 28 Apr 2020 at 21:31, Som Lima  wrote:
>
>> import java.time._
>> import java.util.Date
>> import java.text.SimpleDateFormat
>> import java.util.Locale
>> import java.util.SimpleTimeZone
>>
>> object CalendarDemo extends App {
>>
>> println("Calendar Demo")
>>  val pattern = "E dd M yyyy HH:mm:ss.SSSZ";
>> val simpleDateFormat = (new SimpleDateFormat(pattern, new
>> Locale("en", "UK")));
>> val date = simpleDateFormat.format(new Date());
>> System.out.println(date);
>>
>> val pattern2 = "dd yyyy MM HH:mm:ss.SZ";
>> val simpleDateFormat2 = (new SimpleDateFormat(pattern2, new
>> Locale("en", "UK")));
>> val date2 = simpleDateFormat2.format(new Date());
>> System.out.println(date2);
>>
>> /* *
>> Pattern Syntax
>>
>> You can use the following symbols in your formatting pattern:
>> G Era designator (before christ, after christ)
>> y Year (e.g. 12 or 2012). Use either yy or yyyy.
>> M Month in year. Number of M's determine length of format (e.g. MM, MMM
>> or M)
>> d Day in month. Number of d's determine length of format (e.g. d or dd)
>> h Hour of day, 1-12 (AM / PM) (normally hh)
>> H Hour of day, 0-23 (normally HH)
>> m Minute in hour, 0-59 (normally mm)
>> s Second in minute, 0-59 (normally ss)
>> S Millisecond in second, 0-999 (normally SSS)
>> E Day in week (e.g Monday, Tuesday etc.)
>> D Day in year (1-366)
>> F Day of week in month (e.g. 1st Thursday of December)
>> w Week in year (1-53)
>> W Week in month (0-5)
>> a AM / PM marker
>> k Hour in day (1-24, unlike HH's 0-23)
>> K Hour in day, AM / PM (0-11)
>> z Time Zone
>> ' Escape for text delimiter
>> ' Single quote
>> **/
>>
>> }
>>
>>
>> On Tue, 28 Apr 2020, 19:18 Edgardo Szrajber, 
>> wrote:
>>
>>> Hi
>>> please check combining unix_timestamp and from_unixtime,
>>> Something like:
>>> from_unixtime(unix_timestamp("06-04-2020 12:03:43"),"yyyy-MM-dd'T'HH:mm:ss
>>> Z")
>>>
>>> please note that I just wrote without any validation.
>>>
>>> In any case, you might want to check the documentation of both functions
>>> to check all valid formats. Also note that this functions are universal
>>> (not only in Spark, Hive) so you have a huge amount of documentation
>>> available.
>>>
>>> Bentzi
>>>
>>>
>>> On Tuesday, April 28, 2020, 08:32:18 PM GMT+3, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
>>>
>>> Unfortunately that did not work.
>>>
>>> any other suggestions?
>>>
>>> thanks
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn * 
>>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> *
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 28 Apr 2020 at 17:41, Mich Talebzadeh 
>>> wrote:
>>>
>>> Thanks Neeraj, I'll check it out. !
>>>
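
A minimal sketch of the string-parsing direction asked about above
(assuming "06/04/2020 12:03:43" is day/month/year; the patterns are
standard java.text.SimpleDateFormat syntax, as in the demo program above):

import java.text.SimpleDateFormat
import java.util.Locale

val in  = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss", new Locale("en", "UK"))
val out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ", new Locale("en", "UK"))

val parsed = in.parse("06/04/2020 12:03:43")  // java.util.Date
println(parsed.getTime)      // epoch milliseconds
println(out.format(parsed))  // e.g. 2020-04-06T12:03:43.000+0100

Note that the epoch value and the formatted offset depend on the JVM's
default time zone unless one is set explicitly on the formats.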

Re: Filtering on multiple columns in spark

2020-04-29 Thread Som Lima
From your email, the obvious issue seems to be that
10 is an Int because it is not surrounded in quotes "";
10 should be "10".

Although I can't imagine a telephone number with only 10, because that is
what you are trying to program.


In Scala, you can check if two operands are equal (==) or not (!=), and it
returns true if the condition is met, false if not (else). By itself, ! is
called the logical NOT operator.

On Wed, 29 Apr 2020, 08:45 Mich Talebzadeh, 
wrote:

> Hi,
>
>
>
> Trying to filter a dataframe with multiple conditions using OR "||" as
> below
>
>
>
>   val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
>
>filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
>
>
>
> This throws this error
>
>
>
> res12: org.apache.spark.sql.DataFrame = []
>
> :49: error: value || is not a member of Int
>
>   filter(length(col("target_mobile_no")) !== 10 ||
> substring(col("target_mobile_no"),1,1) !== "7")
>
>
>
> Try another way
>
>
>
> val rejectedDF = newDF.withColumn("target_mobile_no",
> col("target_mobile_no").cast(StringType)).
>
>filter(length(col("target_mobile_no")) !=== 10 ||
> substring(col("target_mobile_no"),1,1) !=== "7")
>
>   rejectedDF.createOrReplaceTempView("tmp")
>
>
>
> Tried few options but I am still getting this error
>
>
>
> :49: error: value !=== is not a member of
> org.apache.spark.sql.Column
>
>   filter(length(col("target_mobile_no")) !=== 10
> || substring(col("target_mobile_no"),1,1) !=== "7")
>
>  ^
>
> :49: error: value || is not a member of Int
>
>   filter(length(col("target_mobile_no")) !=== 10
> || substring(col("target_mobile_no"),1,1) !=== "7")
>
>
>
> I can create a dataframe for each filter but that does not look efficient
> to me?
>
>
>
> Thanks
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>


unsubscribe

2020-04-29 Thread Zeming Yu
unsubscribe

Get Outlook for Android



Filtering on multiple columns in spark

2020-04-29 Thread Mich Talebzadeh
Hi,



Trying to filter a dataframe with multiple conditions using OR "||" as below



  val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

   filter(length(col("target_mobile_no")) !== 10 ||
substring(col("target_mobile_no"),1,1) !== "7")



This throws this error



res12: org.apache.spark.sql.DataFrame = []

:49: error: value || is not a member of Int

  filter(length(col("target_mobile_no")) !== 10 ||
substring(col("target_mobile_no"),1,1) !== "7")



Try another way



val rejectedDF = newDF.withColumn("target_mobile_no",
col("target_mobile_no").cast(StringType)).

   filter(length(col("target_mobile_no")) !=== 10 ||
substring(col("target_mobile_no"),1,1) !=== "7")

  rejectedDF.createOrReplaceTempView("tmp")



Tried few options but I am still getting this error



:49: error: value !=== is not a member of
org.apache.spark.sql.Column

  filter(length(col("target_mobile_no")) !=== 10 ||
substring(col("target_mobile_no"),1,1) !=== "7")

 ^

:49: error: value || is not a member of Int

  filter(length(col("target_mobile_no")) !=== 10 ||
substring(col("target_mobile_no"),1,1) !=== "7")



I can create a dataframe for each filter, but that does not look efficient
to me.



Thanks



Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.
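
A minimal sketch of the Column-based form that compiles (assuming Spark
2.x or later, where =!= is Spark's not-equal operator on Column; !== is the
older, deprecated spelling, and !=== does not exist, hence the errors
above):

import org.apache.spark.sql.functions.{col, length, substring}
import org.apache.spark.sql.types.StringType

val rejectedDF = newDF
  .withColumn("target_mobile_no", col("target_mobile_no").cast(StringType))
  .filter(length(col("target_mobile_no")) =!= 10 ||
          substring(col("target_mobile_no"), 1, 1) =!= "7")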


Unsubscribe

2020-04-29 Thread Yotto Koga
unsubscribe


Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread Zahid Amin
EVIL PROSPERS ONLY WHEN GOOD MEN DO NOTHING.

I have done some good today. I unveiled Evil.

FACT:
 10 million Kasmiris Muslim and Chinese on Lockdown since August 2019.

FACT : Citizen Amendment Bill
Cast out non Hindu from India Beginning with Muslims.

FACT:  OFFICIAL USA Report Religious Freedom :Recognition of those two Facts. 
India most Dangerous country for ethnic minorities.

FACT  Pakistan created in 1947 for Muslims and ethnic minorities to live 
separate.

FACT:  you Indians in IT industry are all brahmin etc. The purists the Hindutwa.


FACT: My tribe fought in the 1857 Indian Rebellion and I am my next five 
generations without home.











Sent: Wednesday, April 29, 2020 at 8:32 AM
> From: "Deepak Sharma" 
> To: "Gaurav Agarwal" 
> Cc: "Zahid Amin" , "user" 
> Subject: Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious 
> Freedom Report out TODAY
>
> I am unsubscribing until these hatemongers like Zahid Amin are removed or
> blocked .
> FYI Zahid Amin , Indian govt rejected the false report already .
>
>
>
> On Wed, 29 Apr 2020 at 11:58 AM, Gaurav Agarwal 
> wrote:
>
> > Spark moderator supress this user please. Unnecessary Spam or apache spark
> > account is hacked ?
> >
> > On Wed, Apr 29, 2020, 11:56 AM Zahid Amin  wrote:
> >
> >> How can it be rumours   ?
> >> Of course you want  to suppress me.
> >> Suppress USA official Report out TODAY .
> >>
> >> > Sent: Wednesday, April 29, 2020 at 8:17 AM
> >> > From: "Deepak Sharma" 
> >> > To: "Zahid Amin" 
> >> > Cc: user@spark.apache.org
> >> > Subject: Re: India Most Dangerous : USA Religious Freedom Report
> >> >
> >> > Can someone block this email ?
> >> > He is spreading rumours and spamming.
> >> >
> >> > On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin 
> >> wrote:
> >> >
> >> > > USA report states that India is now the most dangerous country for
> >> > > ethnic minorities.
> >> > >
> >> > > Remember Martin Luther King.
> >> > >
> >> > > https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
> >> > >
> >> > > It began with Kashmir, still in lockdown since August 2019.
> >> > >
> >> > > The Hindutva want to eradicate all minorities.
> >> > > The Apache foundation is infested with these Hindutva purists and
> >> > > their sympathisers.
> >> > > Making sure all Muslims are kept away from the IT industry. Using
> >> > > you to help them.
> >> > >
> >> > > Those people in IT you deal with are purists, yet you are not
> >> > > welcome in India.
> >> > >
> >> > > The recognition of Hindutva led to the creation of Pakistan in 1947.
> >> > >
> >> > > Evil prospers when good men do nothing.
> >> > > The genocide is not coming. It is here.
> >> > > I ask you, please think and act.
> >> > > Protect the Muslims of the Indian Subcontinent.
> >> > >
> >> > > -
> >> > > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> >> > >
> >> > > --
> >> > Thanks
> >> > Deepak
> >> > www.bigdatabig.com
> >> > www.keosha.net
> >> >
> >>
> >> -
> >> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> >>
> >> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: [Structured Streaming] NullPointerException in long running query

2020-04-29 Thread Shixiong(Ryan) Zhu
The stack trace is omitted by the JVM when the same exception is thrown too
many times (HotSpot's OmitStackTraceInFastThrow optimization, which is on by
default). This usually happens when you have multiple Spark tasks on the
same executor JVM throwing the same exception. See
https://stackoverflow.com/a/3010106
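
If the full trace is needed for debugging, a sketch of one option (assuming
the application can be restarted, and accepting some overhead on hot
exception paths) is to disable that optimization via the extra JVM options:

spark-submit \
  --conf "spark.driver.extraJavaOptions=-XX:-OmitStackTraceInFastThrow" \
  --conf "spark.executor.extraJavaOptions=-XX:-OmitStackTraceInFastThrow" \
  ...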

Best Regards,
Ryan


On Tue, Apr 28, 2020 at 10:45 PM lec ssmi  wrote:

> It is probably a problem with my data quality. It's curious that the
> driver-side exception stack has no specific exception information.
>
> On Tue, Apr 28, 2020 at 3:32 PM, Edgardo Szrajber wrote:
>
>> The exception occurred while aborting the stage. It might be interesting
>> to try to understand why the stage was aborted.
>> Maybe a timeout? How long does the query run?
>> Bentzi
>>
>> Sent from Yahoo Mail on Android
>> 
>>
>> On Tue, Apr 28, 2020 at 9:25, Jungtaek Lim
>>  wrote:
>> The root cause of the exception occurred on the executor side ("Lost task 10.3
>> in stage 1.0 (TID 81, spark6, executor 1)"), so you may need to check there.
>>
>> On Tue, Apr 28, 2020 at 2:52 PM lec ssmi  wrote:
>>
>> Hi:
>>   One of my long-running queries occasionally encountered the following
>> exception:
>>
>>
>>   Caused by: org.apache.spark.SparkException: Job aborted due to stage
>> failure: Task 10 in stage 1.0 failed 4 times, most recent failure: Lost
>> task 10.3 in stage 1.0 (TID 81, spark6, executor 1):
>> java.lang.NullPointerException
>> Driver stacktrace:
>> at org.apache.spark.scheduler.DAGScheduler.org
>> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
>> at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
>> at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
>> at
>> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>> at
>> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
>> at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>> at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
>> at scala.Option.foreach(Option.scala:257)
>> at
>> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
>> at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
>> at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
>> at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
>> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:929)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:927)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>> at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:927)
>> at
>> org.apache.spark.sql.execution.streaming.ForeachSink.addBatch(ForeachSink.scala:49)
>> at
>> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:475)
>> at
>> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>> at
>> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:473)
>> at
>> org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
>> at
>> org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
>> at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org
>> $apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:472)
>> at
>> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:133)
>> at
>> org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
>> at
>> 

Re: How can I add extra mounted disk to HDFS

2020-04-29 Thread JB Data31
Use the Hadoop NFSv3 gateway to mount the filesystem.
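
A minimal sketch of that approach, assuming Hadoop 3.x with the gateway
running on the NameNode host ("namenode-host" and "/mnt/hdfs" are
placeholders):

# start the NFS gateway daemons (portmap typically needs root)
hdfs --daemon start portmap
hdfs --daemon start nfs3

# mount HDFS over NFSv3 with the options suggested in the HDFS NFS gateway docs
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync namenode-host:/ /mnt/hdfs

Note this only exposes HDFS as a local path; to grow HDFS capacity itself,
the new disk still has to be added to dfs.datanode.data.dir on the DataNodes.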

@JBΔ



On Tue, Apr 28, 2020 at 23:18, Chetan Khatri wrote:

> Hi Spark Users,
>
> My Spark job gave me an error: No space left on device
>


Unsubscribe

2020-04-29 Thread Deepak Sharma
-- 
Thanks
Deepak
www.bigdatabig.com
www.keosha.net


Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread Deepak Sharma
I am unsubscribing until these hatemongers like Zahid Amin are removed or
blocked.
FYI Zahid Amin, the Indian govt has already rejected the false report.



On Wed, 29 Apr 2020 at 11:58 AM, Gaurav Agarwal 
wrote:

> Spark moderator, please suppress this user. Unnecessary spam, or has the
> Apache Spark account been hacked?
>
> On Wed, Apr 29, 2020, 11:56 AM Zahid Amin  wrote:
>
>> How can it be rumours?
>> Of course you want to suppress me.
>> Suppress the USA official report, out TODAY.
>>
>> > Sent: Wednesday, April 29, 2020 at 8:17 AM
>> > From: "Deepak Sharma" 
>> > To: "Zahid Amin" 
>> > Cc: user@spark.apache.org
>> > Subject: Re: India Most Dangerous : USA Religious Freedom Report
>> >
>> > Can someone block this email?
>> > He is spreading rumours and spamming.
>> >
>> > On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin 
>> wrote:
>> >
>> > > USA report states that India is now the most dangerous country for
>> > > ethnic minorities.
>> > >
>> > > Remember Martin Luther King.
>> > >
>> > > https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
>> > >
>> > > It began with Kashmir, still in lockdown since August 2019.
>> > >
>> > > The Hindutva want to eradicate all minorities.
>> > > The Apache foundation is infested with these Hindutva purists and
>> > > their sympathisers.
>> > > Making sure all Muslims are kept away from the IT industry. Using
>> > > you to help them.
>> > >
>> > > Those people in IT you deal with are purists, yet you are not
>> > > welcome in India.
>> > >
>> > > The recognition of Hindutva led to the creation of Pakistan in 1947.
>> > >
>> > > Evil prospers when good men do nothing.
>> > > The genocide is not coming. It is here.
>> > > I ask you, please think and act.
>> > > Protect the Muslims of the Indian Subcontinent.
>> > >
>> > > -
>> > > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>> > >
>> > > --
>> > Thanks
>> > Deepak
>> > www.bigdatabig.com
>> > www.keosha.net
>> >
>>
>> -
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>> --
Thanks
Deepak
www.bigdatabig.com
www.keosha.net


Re: OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread Gaurav Agarwal
Spark moderator, please suppress this user. Unnecessary spam, or has the
Apache Spark account been hacked?

On Wed, Apr 29, 2020, 11:56 AM Zahid Amin  wrote:

> How can it be rumours?
> Of course you want to suppress me.
> Suppress the USA official report, out TODAY.
>
> > Sent: Wednesday, April 29, 2020 at 8:17 AM
> > From: "Deepak Sharma" 
> > To: "Zahid Amin" 
> > Cc: user@spark.apache.org
> > Subject: Re: India Most Dangerous : USA Religious Freedom Report
> >
> > Can someone block this email?
> > He is spreading rumours and spamming.
> >
> > On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin  wrote:
> >
> > > USA report states that India is now the most dangerous country for
> > > ethnic minorities.
> > >
> > > Remember Martin Luther King.
> > >
> > > https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
> > >
> > > It began with Kashmir, still in lockdown since August 2019.
> > >
> > > The Hindutva want to eradicate all minorities.
> > > The Apache foundation is infested with these Hindutva purists and
> > > their sympathisers.
> > > Making sure all Muslims are kept away from the IT industry. Using
> > > you to help them.
> > >
> > > Those people in IT you deal with are purists, yet you are not
> > > welcome in India.
> > >
> > > The recognition of Hindutva led to the creation of Pakistan in 1947.
> > >
> > > Evil prospers when good men do nothing.
> > > The genocide is not coming. It is here.
> > > I ask you, please think and act.
> > > Protect the Muslims of the Indian Subcontinent.
> > >
> > > -
> > > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> > >
> > > --
> > Thanks
> > Deepak
> > www.bigdatabig.com
> > www.keosha.net
> >
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


OFFICIAL USA REPORT TODAY India Most Dangerous : USA Religious Freedom Report out TODAY

2020-04-29 Thread Zahid Amin
How can it be rumours?
Of course you want to suppress me.
Suppress the USA official report, out TODAY.

> Sent: Wednesday, April 29, 2020 at 8:17 AM
> From: "Deepak Sharma" 
> To: "Zahid Amin" 
> Cc: user@spark.apache.org
> Subject: Re: India Most Dangerous : USA Religious Freedom Report
>
> Can someone block this email?
> He is spreading rumours and spamming.
>
> On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin  wrote:
>
> > USA report states that India is now the most dangerous country for
> > ethnic minorities.
> >
> > Remember Martin Luther King.
> >
> > https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
> >
> > It began with Kashmir, still in lockdown since August 2019.
> >
> > The Hindutva want to eradicate all minorities.
> > The Apache foundation is infested with these Hindutva purists and their
> > sympathisers.
> > Making sure all Muslims are kept away from the IT industry. Using you to
> > help them.
> >
> > Those people in IT you deal with are purists, yet you are not welcome in India.
> >
> > The recognition of Hindutva led to the creation of Pakistan in 1947.
> >
> > Evil prospers when good men do nothing.
> > The genocide is not coming. It is here.
> > I ask you, please think and act.
> > Protect the Muslims of the Indian Subcontinent.
> >
> > -
> > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> >
> > --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: India Most Dangerous : USA Religious Freedom Report out Today

2020-04-29 Thread Zahid Amin
> Sent: Wednesday, April 29, 2020 at 8:17 AM
> From: "Deepak Sharma" 
> To: "Zahid Amin" 
> Cc: user@spark.apache.org
> Subject: Re: India Most Dangerous : USA Religious Freedom Report
>
> Can someone block this email?
> He is spreading rumours and spamming.
>
> On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin  wrote:
>
> > USA report states that India is now the most dangerous country for
> > ethnic minorities.
> >
> > Remember Martin Luther King.
> >
> > https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
> >
> > It began with Kashmir, still in lockdown since August 2019.
> >
> > The Hindutva want to eradicate all minorities.
> > The Apache foundation is infested with these Hindutva purists and their
> > sympathisers.
> > Making sure all Muslims are kept away from the IT industry. Using you to
> > help them.
> >
> > Those people in IT you deal with are purists, yet you are not welcome in India.
> >
> > The recognition of Hindutva led to the creation of Pakistan in 1947.
> >
> > Evil prospers when good men do nothing.
> > The genocide is not coming. It is here.
> > I ask you, please think and act.
> > Protect the Muslims of the Indian Subcontinent.
> >
> > -
> > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> >
> > --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: India Most Dangerous : USA Religious Freedom Report

2020-04-29 Thread Deepak Sharma
Can someone block this email?
He is spreading rumours and spamming.

On Wed, 29 Apr 2020 at 11:46 AM, Zahid Amin  wrote:

> USA report states that India is now the most dangerous country for ethnic
> minorities.
>
> Remember Martin Luther King.
>
> https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3
>
> It began with Kashmir, still in lockdown since August 2019.
>
> The Hindutva want to eradicate all minorities.
> The Apache foundation is infested with these Hindutva purists and their
> sympathisers.
> Making sure all Muslims are kept away from the IT industry. Using you to help
> them.
>
> Those people in IT you deal with are purists, yet you are not welcome in India.
>
> The recognition of Hindutva led to the creation of Pakistan in 1947.
>
> Evil prospers when good men do nothing.
> The genocide is not coming. It is here.
> I ask you, please think and act.
> Protect the Muslims of the Indian Subcontinent.
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
> --
Thanks
Deepak
www.bigdatabig.com
www.keosha.net


India Most Dangerous : USA Religious Freedom Report

2020-04-29 Thread Zahid Amin
USA report states that India is now the most dangerous country for ethnic
minorities.

Remember Martin Luther King.

https://www.mail.com/int/news/us/9880960-religious-freedom-watchdog-pitches-adding-india-to.html#.1258-stage-set1-3

It began with Kashmir, still in lockdown since August 2019.

The Hindutva want to eradicate all minorities.
The Apache foundation is infested with these Hindutva purists and their
sympathisers.
Making sure all Muslims are kept away from the IT industry. Using you to help them.

Those people in IT you deal with are purists, yet you are not welcome in India.

The recognition of Hindutva led to the creation of Pakistan in 1947.

Evil prospers when good men do nothing.
The genocide is not coming. It is here.
I ask you, please think and act.
Protect the Muslims of the Indian Subcontinent.

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org