No, it is not normal. I expect that the MySQL transaction issues are causing lots of problems.
Karl

On Sun, Feb 10, 2019 at 7:13 PM Cihad Guzel <[email protected]> wrote:

> Hi Karl,
>
> I use MySQL. I'll also try with PostgreSQL.
>
> All docs were processed one day ago. Is it normal for the aborting process
> or finishing-up threads to take so long?
>
> Thanks,
> Cihad Guzel
>
> On Mon, Feb 11, 2019 at 02:37, Karl Wright <[email protected]> wrote:
>
>> What database is this?
>> Basically, the "unexpected job status" means that the framework found
>> something that should not have been possible if the database had been
>> properly enforcing ACID transactional constraints. Is this MySQL? Because
>> if so, it's known to have this problem.
>>
>> It also looks like MCF is trying to recover from some other problem
>> (usually a database error). I can tell this because that's what the
>> particular thread in question does. In order to recover, all worker
>> threads must finish up with what they are doing, and then everything can
>> resync -- and that's not working because the database isn't in agreement
>> that all the worker threads are shut down.
>>
>> Karl
>>
>> On Sun, Feb 10, 2019 at 6:23 PM Cihad Guzel <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I am trying the external Tika extractor. I have 4 continuous file
>>> crawler jobs. Two of them use the external Tika extractor. One of them
>>> has processed all of its documents, which is only 98 docs. That job hung
>>> in the "Aborting" state when I aborted it manually. I waited more than
>>> 1 day before the state changed.
>>>
>>> How can I find the problem?
>>>
>>> mysql> SELECT status, errortext, type, startmethod, id FROM jobs;
>>> +--------+-----------+------+-------------+---------------+
>>> | status | errortext | type | startmethod | id            |
>>> +--------+-----------+------+-------------+---------------+
>>> | N      | NULL      | C    | D           | 1549371059083 |
>>> | X      | NULL      | C    | D           | 1549371135463 |
>>> | N      | NULL      | C    | D           | 1549371226082 |
>>> | N      | NULL      | C    | D           | 1549805173512 |
>>> +--------+-----------+------+-------------+---------------+
>>>
>>> I'm not sure whether this is related, but I am getting a lot of errors
>>> logged like this:
>>>
>>> ERROR 2019-02-10T22:47:28,178 (Job reset thread) - Exception tossed:
>>> Unexpected job status encountered: 33
>>> org.apache.manifoldcf.core.interfaces.ManifoldCFException: Unexpected
>>> job status encountered: 33
>>>   at org.apache.manifoldcf.crawler.jobs.Jobs.returnJobToActive(Jobs.java:2145) ~[mcf-pull-agent.jar:?]
>>>   at org.apache.manifoldcf.crawler.jobs.JobManager.resetJobs(JobManager.java:8608) ~[mcf-pull-agent.jar:?]
>>>   at org.apache.manifoldcf.crawler.system.JobResetThread.run(JobResetThread.java:77) [mcf-pull-agent.jar:?]
>>> ERROR 2019-02-10T22:47:28,182 (Job reset thread) - Exception tossed:
>>> Unexpected job status encountered: 33
>>> org.apache.manifoldcf.core.interfaces.ManifoldCFException: Unexpected
>>> job status encountered: 33
>>>   at org.apache.manifoldcf.crawler.jobs.Jobs.returnJobToActive(Jobs.java:2145) ~[mcf-pull-agent.jar:?]
>>>   at org.apache.manifoldcf.crawler.jobs.JobManager.resetJobs(JobManager.java:8608) ~[mcf-pull-agent.jar:?]
>>>   at org.apache.manifoldcf.crawler.system.JobResetThread.run(JobResetThread.java:77) [mcf-pull-agent.jar:?]
>>>
>>> Regards,
>>> Cihad Güzel
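One way to see whether a job stuck in "Aborting" is actually making progress is to watch its document queue drain. A rough check against the crawler database, assuming the standard ManifoldCF schema in which jobqueue rows carry a jobid and a one-character status (column names may differ between MCF versions; the job id below is the "X" job from the output above):

mysql> SELECT status, COUNT(*) FROM jobqueue WHERE jobid = 1549371135463 GROUP BY status;

If those counts stop changing while the job stays in "Aborting", the worker threads are not finishing up, which matches the failed resync Karl describes above.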

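Since the diagnosis points at MySQL's transactional behavior, it is also worth confirming what isolation level the crawler's connections are actually getting. This is a generic MySQL check, not a ManifoldCF-specific one; the transaction_isolation variable exists on MySQL 5.7.20 and later, while older servers expose tx_isolation instead:

mysql> SELECT @@global.transaction_isolation, @@session.transaction_isolation;

Whatever that returns, the direction the thread itself takes is the safer one: move the framework database to PostgreSQL (the database implementation is selected in ManifoldCF's properties.xml; see the deployment documentation for the exact property names) and re-run the jobs.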