Re: Mesos + Spark users going forward?

2021-04-07 Thread dmcwhorter
We are using the mesos integration at Premier (https://www.premierinc.com/). 
Obviously with the move to the attic we will likely move away from Mesos in
the future.  I think deprecating the Mesos integration makes sense.  We
would probably continue to utilize the Spark Mesos components for another
release or two, if it's possible to include them for a little bit longer
before they're removed.



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Mesos + Spark users going forward?

2021-04-07 Thread Mridul Muralidharan
Unfortunate about Mesos; +1 on deprecating the Mesos integration.

Regards,
Mridul


On Wed, Apr 7, 2021 at 7:12 AM Sean Owen  wrote:

> I noted that Apache Mesos is moving to the attic, so won't be actively
> developed soon:
>
> https://lists.apache.org/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E
>
> That doesn't mean people will stop using it as a Spark resource manager
> soon. But it suggests the Spark + Mesos integration is a candidate for
> deprecation and eventual removal in Spark at some point.
>
> This is mostly an informal poll: are there Mesos users out there planning
> to continue using it for a while? moving off? just seeing if it's
> reasonable to even start deprecation in 3.2.0.
>
> Sean
>
>


Mesos + Spark users going forward?

2021-04-07 Thread Sean Owen
I noted that Apache Mesos is moving to the attic, so won't be actively
developed soon:
https://lists.apache.org/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E

That doesn't mean people will stop using it as a Spark resource manager
soon. But it suggests the Spark + Mesos integration is a candidate for
deprecation and eventual removal in Spark at some point.

This is mostly an informal poll: are there Mesos users out there planning
to continue using it for a while? moving off? just seeing if it's
reasonable to even start deprecation in 3.2.0.

Sean


Re: BOOK review of Spark: WARNING to spark users

2020-05-21 Thread Jacek Laskowski
Hi Emma,

I'm curious about the purpose of the email. Mind elaborating?

Pozdrawiam,
Jacek Laskowski

https://about.me/JacekLaskowski
"The Internals Of" Online Books 
Follow me on https://twitter.com/jaceklaskowski




On Wed, May 20, 2020 at 10:43 PM emma davis 
wrote:

>
> *Book:* Machine Learning with Apache Spark Quick Start Guide
> *publisher*: Packt
>
>
> *Following* this Getting Started with Python in VS Code
> https://code.visualstudio.com/docs/python/python-tutorial
>
> I realised Jillur Qudus has written and published a book without any
> knowledge of the subject matter, amongst other things Python.
>
>
>
> *Highlighted proof with further details further down the email. *
>
> import findspark  # these lines of code are unnecessary; see link above for
> setup
> findspark.init()
>
> Setting SPARK_HOME or any other Spark variables is unnecessary because
> Spark, like any framework, is self-contained and has its own conf directory
> for persistent startup configuration settings.
> Obviously the software would find its own current directory upon starting,
> i.e. sbin/start-master.sh
>
> Spark is a BIG DATA tool (heavy distributed, parallel processing), so
> clearly you would expect its hello world demo programs to demonstrate
> that.
>
> What is the point of setting num_samples=100? Something like 10**10 would
> make sense to test performance.
>
>
>
> *This is my warning: do not end up wasting your valuable time as I did. I
> feel your time is valuable.*
> *I realised the scam as I got a better understanding of the product by just
> doing the correct hello world program from the correct source.*
>
> “Research by CISQ found that, in 2018, poor quality software cost
> organizations $2.8 trillion in the US alone. “
>
> I attribute this to the Indian IT industry claiming they can do the job
> better than the natives [US, Europeans], implying Indian education or IT
> people are superior. For example, people like me, born, living and educated
> in western Europe.
>
> *https://www.it-cisq.org/the-cost-of-poor-quality-software-in-the-us-a-2018-report/The-Cost-of-Poor-Quality-Software-in-the-US-2018-Report.pdf
> *
>
>
> *Contributors: About the Author*
> “*Jillur Qudus* is a lead technical architect, polyglot software engineer
> and data scientist with over 10 years of hands-on experience in
> architecting and engineering distributed, scalable, high-performance .. to
> combat serious organised crime. Jillur has extensive experience working
> with government, intelligence, law enforcement and banking, and has worked
> across the world including Japan, Singapore, Malaysia, Hong Kong and New
> Zealand .. founder of keisan, a UK-based company specializing in open
> source distributed technologies and machine learning…“
> This obviously means a lot to many, but look at his work and judge for
> yourself based on the evidence.
>
> *Page 54*
> * ”*
> Additional Python Packages
> > conda install -c conda-forge findspark
> > conda install -c conda-forge pykafka
> ...”
>
> The remainder of the program was copied from the Spark website, so that
> wasn’t wrong.
> *Page 63*
>
> * “*
> cd /etc/profile.d
> vi spark.sh
> $ export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7
> > source spark.sh
>
> .. in order for the SPARK_HOME environment variable to be successfully
> recognized and registered by findspark ...
> ….
>
> We are now ready to write our first Spark application in Python! …..
>
> # (1) import required Python dependencies
> import findspark
> findspark.init()
>
> (3)
> ….
> num_samples = 100 “
>
>
> emma davis
> emma.davi...@aol.com
>
>

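For context, the num_samples discussion above refers to Spark's Monte Carlo Pi
estimation example from the quick-start documentation, which parallelizes the
sampling across a cluster. A minimal pure-Python sketch of the underlying
estimation technique (no Spark required; the function name and seed here are
my own, for illustration) shows why a tiny sample count gives a noisy estimate:

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle, multiplied by 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100))        # tiny sample: noisy estimate
print(estimate_pi(1_000_000))  # larger sample: converges toward 3.14159...
```

The standard error of the estimate shrinks like 1/sqrt(num_samples), so 100
samples can easily be off by 0.3 or more, while a million samples is typically
within a few thousandths of pi; this is the point of benchmarking with a much
larger sample count.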

BOOK review of Spark: WARNING to spark users

2020-05-20 Thread emma davis
Book: Machine Learning with Apache Spark Quick Start Guide
publisher: Packt

Following this Getting Started with Python in VS Code
https://code.visualstudio.com/docs/python/python-tutorial
I realised Jillur Qudus has written and published a book without any
knowledge of the subject matter, amongst other things Python.

Highlighted proof with further details further down the email.

import findspark  # these lines of code are unnecessary; see link above for
setup
findspark.init()

Setting SPARK_HOME or any other Spark variables is unnecessary because
Spark, like any framework, is self-contained and has its own conf directory
for persistent startup configuration settings. Obviously the software would
find its own current directory upon starting, i.e. sbin/start-master.sh

Spark is a BIG DATA tool (heavy distributed, parallel processing), so
clearly you would expect its hello world demo programs to demonstrate that.

What is the point of setting num_samples=100? Something like 10**10 would
make sense to test performance.

This is my warning: do not end up wasting your valuable time as I did. I
feel your time is valuable.
I realised the scam as I got a better understanding of the product by just
doing the correct hello world program from the correct source.

“Research by CISQ found that, in 2018, poor quality software cost
organizations $2.8 trillion in the US alone.“

I attribute this to the Indian IT industry claiming they can do the job
better than the natives [US, Europeans], implying Indian education or IT
people are superior. For example, people like me, born, living and educated
in western Europe.

https://www.it-cisq.org/the-cost-of-poor-quality-software-in-the-us-a-2018-report/The-Cost-of-Poor-Quality-Software-in-the-US-2018-Report.pdf

Contributors: About the Author
“Jillur Qudus is a lead technical architect, polyglot software engineer and
data scientist with over 10 years of hands-on experience in architecting
and engineering distributed, scalable, high-performance .. to combat
serious organised crime. Jillur has extensive experience working with
government, intelligence, law enforcement and banking, and has worked
across the world including Japan, Singapore, Malaysia, Hong Kong and New
Zealand .. founder of keisan, a UK-based company specializing in open
source distributed technologies and machine learning…“
This obviously means a lot to many, but look at his work and judge for
yourself based on the evidence.

Page 54
“
Additional Python Packages
> conda install -c conda-forge findspark
> conda install -c conda-forge pykafka
...”

The remainder of the program was copied from the Spark website, so that
wasn’t wrong.
Page 63
“
> cd /etc/profile.d
> vi spark.sh
$ export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7
> source spark.sh

.. in order for the SPARK_HOME environment variable to be successfully
recognized and registered by findspark ...
….

We are now ready to write our first Spark application in Python! …..

# (1) import required Python dependencies
import findspark
findspark.init()

(3)
….
num_samples = 100 “
 
emma davis
emma.davi...@aol.com


Please add the Chicago Spark Users' Group to the community page

2015-07-06 Thread Dean Wampler
Here's our home page: http://www.meetup.com/Chicago-Spark-Users/

Thanks,
Dean

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
http://shop.oreilly.com/product/0636920033073.do (O'Reilly)
Typesafe http://typesafe.com
@deanwampler http://twitter.com/deanwampler
http://polyglotprogramming.com


Re: Please add the Chicago Spark Users' Group to the community page

2015-07-06 Thread Denny Lee
Hey Dean,
Sure, will take care of this.
HTH,
Denny

On Tue, Jul 7, 2015 at 10:07 Dean Wampler deanwamp...@gmail.com wrote:

 Here's our home page: http://www.meetup.com/Chicago-Spark-Users/

 Thanks,
 Dean

 Dean Wampler, Ph.D.
 Author: Programming Scala, 2nd Edition
 http://shop.oreilly.com/product/0636920033073.do (O'Reilly)
 Typesafe http://typesafe.com
 @deanwampler http://twitter.com/deanwampler
 http://polyglotprogramming.com



Re: Spark users

2015-05-20 Thread Akhil Das
Yes, this is the user group. Feel free to ask your questions in this list.

Thanks
Best Regards

On Wed, May 20, 2015 at 5:58 AM, Ricardo Goncalves da Silva 
ricardog.si...@telefonica.com wrote:

  Hi
 I'm learning spark focused on data and machine learning. Migrating from
 SAS.

 Is there a group for it? My questions are basic for now and I am getting
 very few answers.

 Tal

 Rick.



 Enviado do meu smartphone Samsung Galaxy.

 --


 The information contained in this transmission is privileged and
 confidential information intended only for the use of the individual or
 entity named above. If the reader of this message is not the intended
 recipient, you are hereby notified that any dissemination, distribution or
 copying of this communication is strictly prohibited. If you have received
 this transmission in error, do not read it. Please immediately reply to the
 sender that you have received this communication in error and then delete
 it.




Spark users

2015-05-19 Thread Ricardo Goncalves da Silva
Hi
I'm learning spark focused on data and machine learning. Migrating from SAS.

Is there a group for it? My questions are basic for now and I am getting
very few answers.

Tal

Rick.



Enviado do meu smartphone Samsung Galaxy.




The information contained in this transmission is privileged and confidential 
information intended only for the use of the individual or entity named above. 
If the reader of this message is not the intended recipient, you are hereby 
notified that any dissemination, distribution or copying of this communication 
is strictly prohibited. If you have received this transmission in error, do not 
read it. Please immediately reply to the sender that you have received this 
communication in error and then delete it.



Add to the spark users list

2015-01-06 Thread Bilna Govind
Hi,

Organization name: Amrita Center for Cyber Security Systems and Networks

URL: https://www.amrita.edu/center/cyber-security

We use Spark for big data analytics and ML/data mining, and Spark Streaming
in our IoT platform.

-- 
Regards,
Bilna P


Fwd: Please add us to the Spark users page

2014-12-09 Thread Abhik Majumdar
Hi,

My name is Abhik Majumdar and I am a co-founder of Vidora Corp. We use
Spark at Vidora to power our machine learning stack and we are requesting
to be included on your Powered by Spark page:
https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark

Here is the information you requested:

*organization name:* Vidora

*URL:* http://www.vidora.com

*a list of which Spark components you are using:* Spark Core, MLlib, Spark
Streaming.

*a short description of your use case:* Vidora personalizes the online
experiences of content companies and provides a platform to tailor and
adapt their consumer experiences to each of their users. Our machine
learning stack, running on Spark, is able to completely personalize the
entire webpage or mobile app for any kind of content, with the objective of
optimizing the metric that the customer cares about.


Please let me know if there is any additional information we can provide.

Thanks
Abhik


Abhik Majumdar, Co-Founder Vidora
Website : www.vidora.com
E-mail : ab...@vidora.com
Follow us on Twitter https://twitter.com/#!/vidoracorp or LinkedIn
http://www.linkedin.com/company/vidora
--