Peace dear all,

I hope you all are well and healthy...

I am brand new to Spark/Hadoop. My environment is Windows 7, with
Jupyter/Anaconda and Spark/Hadoop all installed on my laptop. How can I run
the following without errors:

import findspark
findspark.init()
findspark.find()
from pyspark.sql import SparkSession

This is the error message I get:

ModuleNotFoundError: No module named 'findspark'


It seems I am missing something needed for Spark to run well with
Jupyter/Anaconda on Windows 7.
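For what it's worth, this error usually means the findspark package is not installed in the environment the Jupyter kernel is using (it is available on PyPI via `pip install findspark` and on conda-forge via `conda install -c conda-forge findspark`). A quick diagnostic you can run in a notebook cell, to see which interpreter the kernel uses and whether findspark is visible to it (this snippet is illustrative, not from the original mail):

```python
import importlib.util
import sys

# Which Python interpreter is this Jupyter kernel running?
print("Interpreter:", sys.executable)

# Is the findspark package importable from this interpreter?
spec = importlib.util.find_spec("findspark")
print("findspark installed:", spec is not None)
```

If it prints `findspark installed: False`, install the package with the pip/conda that belongs to the same interpreter shown on the first line.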


Cheers

