jetaime-chen edited a comment on issue #2891:
URL: 
https://github.com/apache/incubator-dolphinscheduler/issues/2891#issuecomment-638764302


   The API process is running normally. The task log file is more than 400 lines long; clicking the task in the web page requests the task instance's log. The log printed by the API process shows nothing obviously wrong. The data-processing portion is omitted in the middle.
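
   The LoggerController line in the log below shows the view being fetched with `skipLineNum 0` and `limit 10000`. As a minimal sketch of how a client could page through a long task-instance log with those two parameters — `fetch_page` stands in for the real HTTP call, and the paging loop is an illustration, not DolphinScheduler's actual client code:

```python
# Hypothetical sketch: paging through a task-instance log using the
# skipLineNum/limit parameters visible in the log below. fetch_page is a
# stand-in for the real HTTP call (e.g. the LoggerController endpoint);
# the transport and endpoint are assumptions, not taken from the source.
from typing import Callable, List


def read_full_log(fetch_page: Callable[[int, int], List[str]],
                  limit: int = 10000) -> List[str]:
    """Collect the whole log by repeatedly skipping already-read lines."""
    lines: List[str] = []
    skip = 0
    while True:
        # e.g. GET .../log/detail?skipLineNum=<skip>&limit=<limit>
        page = fetch_page(skip, limit)
        if not page:
            break
        lines.extend(page)
        skip += len(page)
    return lines


# Usage with a fake 25-line log served in pages of 10:
fake_log = [f"line {i}" for i in range(25)]
assert read_full_log(lambda s, l: fake_log[s:s + l], limit=10) == fake_log
```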
   
   
   
   
   [BEGIN] 2020/6/4 18:25:10
   [INFO] 2020-06-04 18:25:13.527 cn.escheduler.api.service.SessionService:[69] - get session: d7c9e5c9-90c6-40d6-b24c-96cbc9536c08, ip: 172xxxxx
   [INFO] 2020-06-04 18:25:13.529 cn.escheduler.api.controller.LoggerController:[72] - login user admin, view 12294 task instance log ,skipLineNum 0 , limit 10000
   [INFO] 2020-06-04 18:25:13.529 cn.escheduler.api.service.LoggerService:[57] - log host : 172.xxxxxxx.104 , logPath : /alidatsssss/scheduler/logs/178/5523/12294.log , logServer port : 50051
   [INFO] 2020-06-04 18:25:13.530 cn.escheduler.api.log.LogClient:[79] - roll view log : path /alidatsxxx/scheduler/logs/178/5523/12294.log,skipLineNum 0 ,limit 10000
   [INFO] 2020-06-04 18:25:13.535 cn.escheduler.api.service.LoggerService:[65] - [INFO] 2020-06-04 14:56:55.564 cn.escheduler.server.worker.log.TaskLogger:[178] - [taskAppId=TASK_178_5523_12294]  -> 2020-06-03
        
        DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
        Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.
        
        
        2020-06-04 14:56:55.530 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
        2020-06-04 14:56:55.537 [main] INFO  Engine - the machine info  => 
        
                osInfo: Oracle Corporation 1.8 25.202-b08
                jvmInfo:        Linux amd64 3.10
                cpu num:        8
        
                totalPhysicalMemory:    -0.00G
                freePhysicalMemory:     -0.00G
                maxFileDescriptorCount: -1
                currentOpenFileDescriptorCount: -1
        
                GC Names        [PS MarkSweep, PS Scavenge]
        
                MEMORY_NAME                    | allocation_size | init_size
                PS Eden Space                  | 256.00MB        | 256.00MB
                Code Cache                     | 240.00MB        | 2.44MB
                Compressed Class Space         | 1,024.00MB      | 0.00MB
                PS Survivor Space              | 42.50MB         | 42.50MB
                PS Old Gen                     | 683.00MB        | 683.00MB
                Metaspace                      | -0.00MB         | 0.00MB
        
        
        2020-06-04 14:56:55.564 [main] INFO  Engine - 
        {
                "content":[
                        {
                                "reader":{
                                        "name":"hdfsreader",
                                        "parameter":{
                                                "column":[
                                                        {
                                                                "index":1,
                                                                "type":"string"
                                                        },
                                                        {
                                                                "index":2,
                                                                "type":"string"
                                                        },
                                                        {
                                                                "index":3,
                                                                "type":"string"
   
        
        2020-06-04 14:56:55.582 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
        2020-06-04 14:56:55.584 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
        2020-06-04 14:56:55.584 [main] INFO  JobContainer - DataX jobContainer starts job.
        2020-06-04 14:56:55.587 [main] INFO  JobContainer - Set jobId = 0
        2020-06-04 14:56:55.602 [job-0] INFO  HdfsReader$Job - init() begin...
        2020-06-04 14:56:55.887 [job-0] INFO  HdfsReader$Job - hadoopConfig details:{"finalParameters":[]}
        2020-06-04 14:56:55.887 [job-0] INFO  HdfsReader$Job - init() ok and end...
   
   [INFO] 2020-06-04 14:59:08.788 cn.escheduler.server.worker.log.TaskLogger:[178] - [taskAppId=TASK_178_5523_12294]  -> 2020-06-04 14:59:01.336 [job-0] INFO  JobContainer - Job set Channel-Number to 1 channels.
        2020-06-04 14:59:01.336 [job-0] INFO  HdfsReader$Job - split() begin...
        2020-06-04 14:59:01.338 [job-0] INFO  JobContainer - DataX Reader.Job [hdfsreader] splits to [2] tasks.
        2020-06-04 14:59:01.340 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] splits to [2] tasks.
        2020-06-04 14:59:01.357 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
        2020-06-04 14:59:01.364 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
        2020-06-04 14:59:01.366 [job-0] INFO  JobContainer - Running by standalone Mode.
        2020-06-04 14:59:01.372 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [2] tasks.
        2020-06-04 14:59:01.377 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
        2020-06-04 14:59:01.377 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
        2020-06-04 14:59:01.388 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
   
   10:16:22","type":"STRING"},{"byteSize":19,"index":41,"rawData":"2020-06-03 
10:16:22","type":"STRING"},{"byteSize":32,"index":42,"rawData":"c826498d1642bd72703e6ac50df6d667","type":"STRING"},{"byteSize":10,"index":43,"rawData":"2020-06-03","type":"STRING"}],"type":"writer"}
        2020-06-04 14:59:11.386 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.000s |  All Task WaitReaderTime 0.000s | Percentage 0.00%
        2020-06-04 14:59:21.387 [job-0] INFO  StandAloneJobContainerCommunicator - Total 19843 records, 66906892 bytes | Speed 6.38MB/s, 1984 records/s | Error 1 records, 46087 bytes |  All Task WaitWriterTime 5.959s |  All Task WaitReaderTime 1.354s | Percentage 50.00%
   [INFO] 2020-06-04 14:59:28.212 cn.escheduler.server.worker.log.TaskLogger:[178] - [taskAppId=TASK_178_5523_12294]  -> 2020-06-04 14:59:28.211 [0-0-0-reader] INFO  Reader$Task - end read source files...
   [INFO] 2020-06-04 14:59:31.389 cn.escheduler.server.worker.log.TaskLogger:[178] - [taskAppId=TASK_178_5523_12294]  -> 2020-06-04 14:59:28.949 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[19718]ms
        2020-06-04 14:59:28.950 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
        2020-06-04 14:59:31.389 [job-0] INFO  StandAloneJobContainerCommunicator - Total 65951 records, 224324087 bytes | Speed 15.01MB/s, 4610 records/s | Error 1 records, 46087 bytes |  All Task WaitWriterTime 22.654s |  All Task WaitReaderTime 2.936s | Percentage 100.00%
   [INFO] 2020-06-04 14:59:31.526 cn.escheduler.server.worker.log.TaskLogger:[178] - [taskAppId=TASK_178_5523_12294]  -> 2020-06-04 14:59:31.389 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
        2020-06-04 14:59:31.389 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] do post work.
        2020-06-04 14:59:31.390 [job-0] INFO  JobContainer - DataX Reader.Job [hdfsreader] do post work.
        2020-06-04 14:59:31.390 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
        2020-06-04 14:59:31.391 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /alidata1/jobs/soft/datax/hook
        2020-06-04 14:59:31.391 [job-0] INFO  JobContainer - 
                 [total cpu info] => 
                        averageCpu | maxDeltaCpu | minDeltaCpu
                        -1.00%     | -1.00%      | -1.00%

                 [total gc info] => 
                         NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
                         PS MarkSweep | 3            | 3               | 3               | 0.193s      | 0.193s         | 0.193s
                         PS Scavenge  | 49           | 49              | 49              | 0.630s      | 0.630s         | 0.630s
        
        2020-06-04 14:59:31.392 [job-0] INFO  JobContainer - PerfTrace not enable!
        2020-06-04 14:59:31.392 [job-0] INFO  StandAloneJobContainerCommunicator - Total 65951 records, 224324087 bytes | Speed 7.13MB/s, 2198 records/s | Error 1 records, 46087 bytes |  All Task WaitWriterTime 22.654s |  All Task WaitReaderTime 2.936s | Percentage 100.00%
        2020-06-04 14:59:31.393 [job-0] INFO  JobContainer - 
        Task start time                 : 2020-06-04 14:56:55
        Task end time                   : 2020-06-04 14:59:31
        Total elapsed time              :                155s
        Average throughput              :            7.13MB/s
        Record write speed              :           2198rec/s
        Total records read              :               65951
        Total read/write failures       :                   1
        
   [END] 2020/6/4 18:25:23
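
   For context, the job definition excerpted in the log above is cut off mid-column list. A DataX job JSON for this hdfsreader-to-mysqlwriter pipeline generally has the following shape; every path, host, credential, and table name below is a hypothetical placeholder, not a value from this job:

```json
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [
      {
        "reader": {
          "name": "hdfsreader",
          "parameter": {
            "path": "/hypothetical/source/path/*",
            "defaultFS": "hdfs://namenode:8020",
            "fileType": "text",
            "fieldDelimiter": "\t",
            "column": [
              { "index": 1, "type": "string" },
              { "index": 2, "type": "string" },
              { "index": 3, "type": "string" }
            ]
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "username": "placeholder_user",
            "password": "****",
            "column": ["col1", "col2", "col3"],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://placeholder-host:3306/placeholder_db",
                "table": ["placeholder_table"]
              }
            ]
          }
        }
      }
    ]
  }
}
```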

