Hi, Chunen:

It is hard for me to check the fact table content because it is huge. KYLIN-3828 
was about “ArrayIndexOutOfBoundsException thrown when build a streaming cube 
with empty data in its first dimension”. That is a different type of exception 
from the one I am seeing.

I am wondering about a data column of type “bigint” in Kylin: could a metric 
such as TOPN or SUM cause the Spark engine to use a different data type? With 
TOPN aggregation I did not see this parseDouble exception, but with SUM 
aggregation the Spark engine can throw it.
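
To illustrate my guess (just a sketch, not Kylin's actual code; "sortNumerically" 
is a made-up stand-in for whatever SelfDefineSortableKey decides per column type):

    public class SortableKeySketch {
        // Hypothetical stand-in for the distinct-column key setup: numeric
        // columns get parsed so distinct values sort by value; literals do not.
        static void addFieldValue(String value, boolean sortNumerically) {
            if (sortNumerically) {
                double d = Double.parseDouble(value); // "\N" blows up here
                System.out.println("numeric key: " + d);
            } else {
                System.out.println("literal key: " + value); // "\N" passes through
            }
        }

        public static void main(String[] args) {
            addFieldValue("\\N", false); // OK: treated as an opaque string
            addFieldValue("\\N", true);  // java.lang.NumberFormatException: For input string: "\N"
        }
    }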

BTW: in “java.lang.NumberFormatException: For input string: "\N"”, is “\N” the 
same as “\n”?
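
Checking locally, they do look like different strings (I believe “\N” is Hive's 
default NULL marker, a backslash followed by a capital N, while “\n” is the 
single line-feed character):

    public class NullVsNewline {
        public static void main(String[] args) {
            String hiveNull = "\\N"; // the two-character literal from the exception message
            String newline = "\n";   // the one-character newline escape
            System.out.println(hiveNull.length());        // prints 2
            System.out.println(newline.length());         // prints 1
            System.out.println(hiveNull.equals(newline)); // prints false
        }
    }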

Kang-sen

From: [email protected] <[email protected]> On Behalf Of nichunen
Sent: Monday, April 08, 2019 9:55 PM
To: [email protected]
Subject: Re:RE: Re:question about kylin cube build failure


Hi Kang-Sen,

Have you checked your Hive table to see whether there is any dirty data?

By the way, empty data in your first dimension may also cause such an 
exception. It has been fixed in 2.6.1: 
https://issues.apache.org/jira/browse/KYLIN-3828
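
If it helps, something like the following rough JDBC sketch can count suspect 
rows (MY_FACT_TABLE and MY_DOUBLE_COL are placeholders for your real names; 
Hive reads an unparsable value in a numeric column, including the literal "\N", 
back as NULL):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DirtyDataCheck {
        public static void main(String[] args) throws Exception {
            // Requires the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver)
            // on the classpath; host/port below are placeholders.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://your-hs2-host:10000/default");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT COUNT(*) FROM MY_FACT_TABLE WHERE MY_DOUBLE_COL IS NULL")) {
                if (rs.next()) {
                    System.out.println("rows with NULL in MY_DOUBLE_COL: " + rs.getLong(1));
                }
            }
        }
    }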
--

Best regards,

Ni Chunen / George

At 2019-04-09 00:04:40, "Lu, Kang-Sen" <[email protected]> wrote:

Hi, Chunen:

Thanks for your reply.

I am puzzled by the following: based on the same data model, I created two 
cubes, one for computing a TOPN metric and the other for all other 
aggregations. I separated the TOPN cube from the normal cube because the TOPN 
is related to a dimension with high cardinality, such as SUBSCRIBER_ID.

The same fact table is used to build cuboids for both cube specs. I have no 
problem building the TOPN cube with the Spark engine, but when I build the 
normal cube with the Spark engine, I get this “\N” format exception. In 
addition, if I build the normal cube with the MR engine, there is no format 
exception.

Does it make sense to you?

Kang-sen

From: [email protected] <[email protected]> On Behalf Of nichunen
Sent: Monday, April 08, 2019 11:35 AM
To: [email protected]
Subject: Re:question about kylin cube build failure


Hi Kang-Sen,

It looks like there is a "\n" in your source data, in a column with double type.

--


Best regards,

Ni Chunen / George

At 2019-04-08 23:10:05, "Lu, Kang-Sen" <[email protected]> wrote:


I am running Kylin 2.5.1.

While building a cube with the Spark engine, I got the following error at “#4 
Step Name: Extract Fact Table Distinct Columns”.

The log shows the following exception:

2019-04-08 12:59:10,375 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 0.0 (TID 0, hadoop9, executor 1): java.lang.NumberFormatException: For input string: "\N"
        at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
        at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
        at java.lang.Double.parseDouble(Double.java:538)
        at org.apache.kylin.engine.mr.steps.SelfDefineSortableKey.init(SelfDefineSortableKey.java:57)
        at org.apache.kylin.engine.mr.steps.SelfDefineSortableKey.init(SelfDefineSortableKey.java:66)
        at org.apache.kylin.engine.spark.SparkFactDistinct$FlatOutputFucntion.addFieldValue(SparkFactDistinct.java:444)
        at org.apache.kylin.engine.spark.SparkFactDistinct$FlatOutputFucntion.call(SparkFactDistinct.java:315)
        at org.apache.kylin.engine.spark.SparkFactDistinct$FlatOutputFucntion.call(SparkFactDistinct.java:226)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
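
The failing call is easy to reproduce in isolation (here "\N" is the literal 
two-character string shown in the log):

    public class Repro {
        public static void main(String[] args) {
            // Same input the log shows, reaching a numeric parse.
            Double.parseDouble("\\N"); // throws java.lang.NumberFormatException: For input string: "\N"
        }
    }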

Has anybody seen this same problem?

Thanks.

Kang-sen
