[ 
https://issues.apache.org/jira/browse/KYLIN-4143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhouKang updated KYLIN-4143:
----------------------------
    Description: 
 

Truncate the Spark job's output when the job's exit code is not 0; otherwise the recorded execution output becomes too large.
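The proposed fix can be illustrated as a size cap applied to the output before it is persisted. A minimal sketch, assuming a 1 MB limit; the class name, constant, and marker string are hypothetical and not Kylin's actual implementation:

```java
// Hypothetical sketch: cap an oversized job output, keeping the tail,
// where the error message of a failed Spark job usually appears.
public class OutputTruncator {
    // Assumed limit; Kylin's real threshold may differ.
    private static final int MAX_OUTPUT_LENGTH = 1024 * 1024; // 1 MB
    private static final String TRUNCATION_MARKER = "...[truncated]...";

    // Return the output unchanged if small enough, otherwise only
    // its last MAX_OUTPUT_LENGTH characters, prefixed with a marker.
    public static String truncate(String output) {
        if (output == null || output.length() <= MAX_OUTPUT_LENGTH) {
            return output;
        }
        return TRUNCATION_MARKER
                + output.substring(output.length() - MAX_OUTPUT_LENGTH);
    }
}
```

Keeping the tail rather than the head is a deliberate choice here: for a failed job, the stack trace at the end of the log is what operators need.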

  was:
Kylin: 2.5.2

job engine: spark

 

After running for a long time, the job executor records a large amount of content in the job output file, which causes ExecutableManager to fail when reading the output, and the job is then marked as "error".
{code:java}
2019-08-15 11:58:34,442 ERROR [Scheduler 1278051386 Job db9b01d0-fd8c-d168-8cfa-2e11d807972c-128] spark.SparkExecutable:359 : error run spark job:
 java.lang.RuntimeException: org.apache.kylin.job.exception.PersistentException: com.fasterxml.jackson.databind.JsonMappingException: Unexpected end-of-input in VALUE_STRING
 at [Source: (DataInputStream); line: 5, column: 10728969]
 at [Source: (DataInputStream); line: 5, column: 15] (through reference chain: org.apache.kylin.job.dao.ExecutableOutputPO["content"])
 at org.apache.kylin.job.execution.ExecutableManager.getOutput(ExecutableManager.java:164)
 at org.apache.kylin.job.execution.AbstractExecutable.getOutput(AbstractExecutable.java:414)
 at org.apache.kylin.job.execution.AbstractExecutable.isDiscarded(AbstractExecutable.java:525)
 at org.apache.kylin.engine.spark.SparkExecutable.doWork(SparkExecutable.java:304)
 at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:178)
 at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
 at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:178)
 at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: org.apache.kylin.job.exception.PersistentException: com.fasterxml.jackson.databind.JsonMappingException: Unexpected end-of-input in VALUE_STRING
 at [Source: (DataInputStream); line: 5, column: 10728969]
 at [Source: (DataInputStream); line: 5, column: 15] (through reference chain: org.apache.kylin.job.dao.ExecutableOutputPO["content"])
 at org.apache.kylin.job.dao.ExecutableDao.getJobOutput(ExecutableDao.java:356)
 at org.apache.kylin.job.execution.ExecutableManager.getOutput(ExecutableManager.java:159)
 ... 10 more
 Caused by: com.fasterxml.jackson.databind.JsonMappingException: Unexpected end-of-input in VALUE_STRING
 at [Source: (DataInputStream); line: 5, column: 10728969]
 at [Source: (DataInputStream); line: 5, column: 15] (through reference chain: org.apache.kylin.job.dao.ExecutableOutputPO["content"])
 at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:391)
 at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:351)
 at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1704)
 at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:290)
 at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
 at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4001)
 at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3058)
 at org.apache.kylin.common.util.JsonUtil.readValue(JsonUtil.java:69)
 at org.apache.kylin.common.persistence.JsonSerializer.deserialize(JsonSerializer.java:46)
 at org.apache.kylin.common.persistence.ResourceStore.getResource(ResourceStore.java:182)
{code}
 


> truncate spark executable job output 
> -------------------------------------
>
>                 Key: KYLIN-4143
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4143
>             Project: Kylin
>          Issue Type: Bug
>    Affects Versions: v2.5.2
>            Reporter: ZhouKang
>            Priority: Major
>
>  
> Truncate the Spark job's output when the job's exit code is not 0; otherwise 
> the recorded execution output becomes too large.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
