Re: Error: Vignette re-building failed. Execution halted

2020-06-24 Thread Anwar AliKhan
THANKS!


It appears that was the last missing dependency for the build:
sudo apt-get install -y r-cran-e1071
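
In case it helps anyone else hitting the same failure, here is a rough sketch of
pulling both R dependencies in at once on Ubuntu (r-cran-knitr is my guess at
the apt package name; only r-cran-e1071 is taken from the page below):

# sketch: install the R packages the SparkR vignette build was missing here
sudo apt-get update
sudo apt-get install -y r-cran-knitr r-cran-e1071
# or, from R itself, if the apt packages are not available:
Rscript -e 'install.packages(c("knitr", "e1071"), repos = "https://cloud.r-project.org")'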

Shout out to ZOOM
(https://zoomadmin.com/HowToInstall/UbuntuPackage/r-cran-e1071) again;
as they say, "It's Super Easy!"

The 'knitr' package was the previous missing dependency, which I was able to
work out from the build error message:
sudo apt install knitr

'e1071' doesn't look like a typical package name or namespace, but the 'e1071'
package seems to be a formidable package of machine learning algorithms.
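
Before kicking off another long build, a quick way to see which of the vignette
dependencies are still missing (just a sketch; the package list is my
recollection of what the SparkR vignettes pull in, so treat it as an
assumption):

# sketch: print any of the suspected vignette dependencies R cannot find
Rscript -e 'for (p in c("knitr", "rmarkdown", "e1071")) if (!requireNamespace(p, quietly = TRUE)) cat("missing:", p, "\n")'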


*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation
path
* DONE (SparkR)
/opt/spark/R
+ popd
+ mkdir /opt/spark/dist/conf
+ cp /opt/spark/conf/fairscheduler.xml.template
/opt/spark/conf/log4j.properties.template
/opt/spark/conf/metrics.properties.template /opt/spark/conf/slaves.template
/opt/spark/conf/spark-defaults.conf.template
/opt/spark/conf/spark-env.sh.template /opt/spark/dist/conf
+ cp /opt/spark/README.md /opt/spark/dist
+ cp -r /opt/spark/bin /opt/spark/dist
+ cp -r /opt/spark/python /opt/spark/dist
+ '[' true == true ']'
+ rm -f /opt/spark/dist/python/dist/pyspark-3.1.0.dev0.tar.gz
+ cp -r /opt/spark/sbin /opt/spark/dist
+ '[' -d /opt/spark/R/lib/SparkR ']'
+ mkdir -p /opt/spark/dist/R/lib
+ cp -r /opt/spark/R/lib/SparkR /opt/spark/dist/R/lib
+ cp /opt/spark/R/lib/sparkr.zip /opt/spark/dist/R/lib
+ '[' true == true ']'
+ TARDIR_NAME=spark-3.1.0-SNAPSHOT-bin-custom-spark
+ TARDIR=/opt/spark/spark-3.1.0-SNAPSHOT-bin-custom-spark
+ rm -rf /opt/spark/spark-3.1.0-SNAPSHOT-bin-custom-spark
+ cp -r /opt/spark/dist /opt/spark/spark-3.1.0-SNAPSHOT-bin-custom-spark
+ tar czf spark-3.1.0-SNAPSHOT-bin-custom-spark.tgz -C /opt/spark
spark-3.1.0-SNAPSHOT-bin-custom-spark
+ rm -rf /opt/spark/spark-3.1.0-SNAPSHOT-bin-custom-spark
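
As a final sanity check (just a sketch, assuming the dist layout shown in the
trace above), the freshly packaged SparkR library should load cleanly:

# sketch: confirm the packaged SparkR library loads from the dist tree
Rscript -e 'library(SparkR, lib.loc = "/opt/spark/dist/R/lib")'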



On Wed, 24 Jun 2020, 11:07 Hyukjin Kwon wrote:

> Looks like you haven't installed the 'e1071' package.
>
> On Wed, 24 Jun 2020 at 6:49 PM, Anwar AliKhan wrote:
>
>> ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr
>> -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
>> 
>>
>>
>> Minor error: the SparkR test failed, but I don't use R so it doesn't affect me.
>>
>> ***installing help indices
>> ** building package indices
>> ** installing vignettes
>> ** testing if installed package can be loaded from temporary location
>> ** testing if installed package can be loaded from final location
>> ** testing if installed package keeps a record of temporary installation
>> path
>> * DONE (SparkR)
>> ++ cd /opt/spark/R/lib
>> ++ jar cfM /opt/spark/R/lib/sparkr.zip SparkR
>> ++ popd
>> ++ cd /opt/spark/R/..
>> ++ pwd
>> + SPARK_HOME=/opt/spark
>> + . /opt/spark/bin/load-spark-env.sh
>> ++ '[' -z /opt/spark ']'
>> ++ SPARK_ENV_SH=spark-env.sh
>> ++ '[' -z '' ']'
>> ++ export SPARK_ENV_LOADED=1
>> ++ SPARK_ENV_LOADED=1
>> ++ export SPARK_CONF_DIR=/opt/spark/conf
>> ++ SPARK_CONF_DIR=/opt/spark/conf
>> ++ SPARK_ENV_SH=/opt/spark/conf/spark-env.sh
>> ++ [[ -f /opt/spark/conf/spark-env.sh ]]
>> ++ set -a
>> ++ . /opt/spark/conf/spark-env.sh
>> +++ export SPARK_LOCAL_IP=192.168.0.786
>> +++ SPARK_LOCAL_IP=192.168.0.786
>> ++ set +a
>> ++ export SPARK_SCALA_VERSION=2.12
>> ++ SPARK_SCALA_VERSION=2.12
>> + '[' -f /opt/spark/RELEASE ']'
>> + SPARK_JARS_DIR=/opt/spark/assembly/target/scala-2.12/jars
>> + '[' -d /opt/spark/assembly/target/scala-2.12/jars ']'
>> + SPARK_HOME=/opt/spark
>> + /usr/bin/R CMD build /opt/spark/R/pkg
>> * checking for file ‘/opt/spark/R/pkg/DESCRIPTION’ ... OK
>> * preparing ‘SparkR’:
>> * checking DESCRIPTION meta-information ... OK
>> * installing the package to build vignettes
>> * creating vignettes ... ERROR
>> --- re-building ‘sparkr-vignettes.Rmd’ using rmarkdown
>>
>> Attaching package: 'SparkR'
>>
>> The following objects are masked from 'package:stats':
>>
>> cov, filter, lag, na.omit, predict, sd, var, window
>>
>> The following objects are masked from 'package:base':
>>
>> as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
>> rank, rbind, sample, startsWith, subset, summary, transform, union
>>
>> Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>> Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>> 20/06/24 10:23:54 WARN NativeCodeLoader: Unable to load native-hadoop
>> library for your platform... using builtin-java classes where applicable
>> Setting default log level to "WARN".
>> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
>> setLogLevel(newLevel).
>>
>> [Stage 0:>  (0 +
>> 1) / 1]
>>
>>
>>
>>
>> [Stage 9:=>  (88 + 1)
>> / 100]
>>
>>
>>
>>
>> [Stage 13:===>  (147 + 

Re: Error: Vignette re-building failed. Execution halted

2020-06-24 Thread Hyukjin Kwon
Looks like you haven't installed the 'e1071' package.

On Wed, 24 Jun 2020 at 6:49 PM, Anwar AliKhan wrote:

> ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr
> -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
> 
>
>
> Minor error: the SparkR test failed, but I don't use R so it doesn't affect me.
>
> ***installing help indices
> ** building package indices
> ** installing vignettes
> ** testing if installed package can be loaded from temporary location
> ** testing if installed package can be loaded from final location
> ** testing if installed package keeps a record of temporary installation
> path
> * DONE (SparkR)
> ++ cd /opt/spark/R/lib
> ++ jar cfM /opt/spark/R/lib/sparkr.zip SparkR
> ++ popd
> ++ cd /opt/spark/R/..
> ++ pwd
> + SPARK_HOME=/opt/spark
> + . /opt/spark/bin/load-spark-env.sh
> ++ '[' -z /opt/spark ']'
> ++ SPARK_ENV_SH=spark-env.sh
> ++ '[' -z '' ']'
> ++ export SPARK_ENV_LOADED=1
> ++ SPARK_ENV_LOADED=1
> ++ export SPARK_CONF_DIR=/opt/spark/conf
> ++ SPARK_CONF_DIR=/opt/spark/conf
> ++ SPARK_ENV_SH=/opt/spark/conf/spark-env.sh
> ++ [[ -f /opt/spark/conf/spark-env.sh ]]
> ++ set -a
> ++ . /opt/spark/conf/spark-env.sh
> +++ export SPARK_LOCAL_IP=192.168.0.786
> +++ SPARK_LOCAL_IP=192.168.0.786
> ++ set +a
> ++ export SPARK_SCALA_VERSION=2.12
> ++ SPARK_SCALA_VERSION=2.12
> + '[' -f /opt/spark/RELEASE ']'
> + SPARK_JARS_DIR=/opt/spark/assembly/target/scala-2.12/jars
> + '[' -d /opt/spark/assembly/target/scala-2.12/jars ']'
> + SPARK_HOME=/opt/spark
> + /usr/bin/R CMD build /opt/spark/R/pkg
> * checking for file ‘/opt/spark/R/pkg/DESCRIPTION’ ... OK
> * preparing ‘SparkR’:
> * checking DESCRIPTION meta-information ... OK
> * installing the package to build vignettes
> * creating vignettes ... ERROR
> --- re-building ‘sparkr-vignettes.Rmd’ using rmarkdown
>
> Attaching package: 'SparkR'
>
> The following objects are masked from 'package:stats':
>
> cov, filter, lag, na.omit, predict, sd, var, window
>
> The following objects are masked from 'package:base':
>
> as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
> rank, rbind, sample, startsWith, subset, summary, transform, union
>
> Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> 20/06/24 10:23:54 WARN NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
> setLogLevel(newLevel).
>
> [Stage 0:>  (0 +
> 1) / 1]
>
>
>
>
> [Stage 9:=>  (88 + 1)
> / 100]
>
>
>
>
> [Stage 13:===>  (147 + 1)
> / 200]
>
>
>
> 20/06/24 10:24:04 WARN Instrumentation: [79237008] regParam is zero, which
> might cause numerical instability and overfitting.
> 20/06/24 10:24:04 WARN BLAS: Failed to load implementation from:
> com.github.fommil.netlib.NativeSystemBLAS
> 20/06/24 10:24:04 WARN BLAS: Failed to load implementation from:
> com.github.fommil.netlib.NativeRefBLAS
> 20/06/24 10:24:04 WARN LAPACK: Failed to load implementation from:
> com.github.fommil.netlib.NativeSystemLAPACK
> 20/06/24 10:24:04 WARN LAPACK: Failed to load implementation from:
> com.github.fommil.netlib.NativeRefLAPACK
> 20/06/24 10:24:09 WARN package: Truncated the string representation of a
> plan since it was too large. This behavior can be adjusted by setting
> 'spark.sql.debug.maxToStringFields'.
>
> [Stage 67:>  (45 + 1)
> / 200]
>
> [Stage 67:=> (62 + 1)
> / 200]
>
> [Stage 67:==>(80 + 1)
> / 200]
>
> [Stage 67:==>(98 + 1)
> / 200]
>
> [Stage 67:==>   (114 + 1)
> / 200]
>
> [Stage 67:===>  (132 + 1)
> / 200]
>
> [Stage 67:===>  (148 + 1)
> / 200]
>
> [Stage 67:> (166 + 1)
> / 200]
>
> [Stage 67:=>(184 + 1)
> / 200]
>
>
>
>
> [Stage 69:>  (44 + 1)
> / 200]
>
> [Stage 69:>  (61 + 1)
> / 200]
>
> [Stage 69:=> (79 + 1)
> / 200]
>
> [Stage 69:==>(97 + 1)
> / 200]
>
> [Stage 69:===>  (116 + 1)
> / 200]
>
> [Stage 69:> (134 + 1)
> / 200]
>
> [Stage 69:=>