Hi everybody,
I got an error saying that my generated column names don't follow the naming rules. Could someone tell me how to fix it?
Here is my code (metricFields is a Seq of the metrics to sum: spent, proceed, click, impression):

// metricFields holds the metric column names to aggregate
val metricFields = Seq("spent", "proceed", "click", "impression")

sqlContext
  .sql(s"select * from hourly where time between '$dateStr-00' and '$dateStr-23'")
  .groupBy("time", dimensions.filter(_ != "time"): _*)  // group by time plus the remaining dimensions
  .agg(metricFields.map(a => a -> "sum").toMap)         // yields columns named sum(spent), sum(proceed), ...
Error message was:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Attribute name "sum(proceeds)" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
        at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$.checkConversionRequirement(CatalystSchemaConverter.scala:556)
        at org.apache.spark.sql.execution.datasources.parquet.CatalystSchemaConverter$.checkFieldName(CatalystSchemaConverter.scala:542)
        at org.apache.spark.sql.execution.datasources.parquet.CatalystWriteSupport$$anonfun$setSchema$2.apply(CatalystWriteSupport.scala:430)
        at org.apache.spark.sql.execution.datasources.parquet.CatalystWriteSupport$$anonfun$setSchema$2.apply(CatalystWriteSupport.scala:430)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.sql.execution.datasources.parquet.CatalystWriteSupport$.setSchema(CatalystWriteSupport.scala:430)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.prepareJobForWrite(ParquetRelation.scala:258)
        at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driverSideSetup(WriterContainer.scala:103)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:147)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
        at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:334)
        at jobs.DailyJob$delayedInit$body.apply(DailyJob.scala:46)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.App$$anonfun$main$1.apply(App.scala:71)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
        at scala.App$class.main(App.scala:71)
        at jobs.DailyJob$.main(DailyJob.scala:12)
        at jobs.DailyJob.main(DailyJob.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
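
The message hints at using an alias, so I guess I should alias each aggregate instead of passing a Map to agg. Here is an untested sketch of what I have in mind (sum comes from org.apache.spark.sql.functions; the head/tail split is only because this overload of agg takes a first Column plus varargs):

import org.apache.spark.sql.functions.sum

// Alias each sum back to the plain metric name, e.g. sum("spent") as "spent",
// so the Parquet writer never sees a name like "sum(spent)".
val aggregates = metricFields.map(m => sum(m).alias(m))

sqlContext
  .sql(s"select * from hourly where time between '$dateStr-00' and '$dateStr-23'")
  .groupBy("time", dimensions.filter(_ != "time"): _*)
  .agg(aggregates.head, aggregates.tail: _*)

Is that the right way to go about it?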
