[ https://issues.apache.org/jira/browse/GRIFFIN-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607643#comment-16607643 ]

Cory Woytasik commented on GRIFFIN-190:
---------------------------------------

Thank you, Lionel.  We made changes to our Spark and Livy config files to match 
your Docker files and we seem to get a bit further. 

We changed the sparkJob.properties file to include: 

"

# spark required
sparkJob.file=hdfs:///griffin/griffin-measure.jar
sparkJob.className=org.apache.griffin.measure.Application
sparkJob.args_1=hdfs:///env/env.json
sparkJob.args_3=hdfs,raw
sparkJob.jars_1 = hdfs:///livy/datanucleus-api-jdo-3.2.6.jar
sparkJob.jars_2 = hdfs:///livy/datanucleus-core-3.2.10.jar
sparkJob.jars_3 = hdfs:///livy/datanucleus-rdbms-3.2.9.jar
#sparkJob.uri = http://<your IP>:8998/batches


sparkJob.name=griffin
sparkJob.queue=default

# options
sparkJob.numExecutors=2
sparkJob.executorCores=1
sparkJob.driverMemory=1g
sparkJob.executorMemory=1g

# other dependent jars
sparkJob.jars =

# hive-site.xml location, as configured in spark conf if ignored here
spark.yarn.dist.files = hdfs:///conf/hive-site.xml

# livy
livy.uri=http://localhost:8998/batches

# spark-admin
spark.uri=http://localhost:8088
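
For context, our understanding (which may well be wrong) is that the service turns 
these properties into a POST against livy.uri with a JSON body roughly like the one 
below; the second entry in "args" is just a placeholder for the per-job measure 
config we believe the service generates at runtime:

{
  "file": "hdfs:///griffin/griffin-measure.jar",
  "className": "org.apache.griffin.measure.Application",
  "name": "griffin",
  "queue": "default",
  "args": [
    "hdfs:///env/env.json",
    "<per-job measure config generated by the service>",
    "hdfs,raw"
  ],
  "jars": [
    "hdfs:///livy/datanucleus-api-jdo-3.2.6.jar",
    "hdfs:///livy/datanucleus-core-3.2.10.jar",
    "hdfs:///livy/datanucleus-rdbms-3.2.9.jar"
  ],
  "numExecutors": 2,
  "executorCores": 1,
  "driverMemory": "1g",
  "executorMemory": "1g",
  "conf": {
    "spark.yarn.dist.files": "hdfs:///conf/hive-site.xml"
  }
}

Please correct us if that mapping is off.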

 

We are now throwing the following error every time a profile job runs:

18/09/07 14:54:02.523 main ERROR Application$: Can not deserialize instance of 
org.apache.griffin.measure.config.params.env.EmailParam out of START_ARRAY token

at [Source: org.apache.hadoop.hdfs.client.HdfsDataInputStream@3fae596; 
line: 40, column: 4] (through reference chain: 
org.apache.griffin.measure.config.params.env.EnvParam["mail"])

18/09/07 14:54:02.526 Thread-1 INFO ShutdownHookManager: Shutdown hook called

18/09/07 14:54:02.526 Thread-1 INFO ShutdownHookManager: Deleting directory 
/tmp/spark-dffc3893-9e6b-46d8-9f88-1e78ad227904

18/09/07 14:54:02.960 SparProcApp_com.cloudera.livy.utils.SparkProcApp@31a994b6 
ERROR SparkProcApp: spark-submit exited with code 254

 

In Jira we found a reference by you to email and sms 
([https://www.mail-archive.com/[email protected]/msg01715.html]), 
but we aren't exactly sure what those users did to fix their env.json file. 
That's assuming our problem is similar to theirs. 
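
Based on the stack trace (line 40, column 4, reference chain EnvParam["mail"]), our 
guess is that our env.json declares "mail" (and probably "sms") as JSON arrays, 
roughly like the placeholder below, while the jar seems to expect a single object 
there. The surrounding keys are just placeholders, not our real file:

{
  "spark": {
    "log.level": "WARN"
  },
  "mail": [],
  "sms": []
}

Is the fix to rewrite "mail" as an object, or did those users simply delete the 
"mail" and "sms" entries?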

We appreciate all of your help on this one. 

> Blank Health and DQ Metrics Screen
> ----------------------------------
>
>                 Key: GRIFFIN-190
>                 URL: https://issues.apache.org/jira/browse/GRIFFIN-190
>             Project: Griffin (Incubating)
>          Issue Type: Bug
>    Affects Versions: 0.2.0-incubating
>            Reporter: Cory Woytasik
>            Priority: Major
>
> Griffin is up and running.  We have both an accuracy measure and a profiling 
> measure that is set to run every minute via jobs.  When we click the chart 
> icon next to the job we receive a "no content" message.  When we click on the 
> Health link or DQ Metrics link they think for a second and then display a 
> blank screen.  We are thinking this might be ES related, but aren't 
> completely sure.  Need some help.  We assume it's a path or property setup 
> issue.  Here are the versions we are running:
> Hive - 3.1.0
> Elasticsearch - 5.3.1
> griffin - 0.2.0
> hadoop - 3.1.1
> livy - 0.3.0
> spark - 2.3.1
> Using postgres too



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
