Then I cannot explain the behavior you are seeing.  Also, debug output is
quite verbose, so clearly you are not setting that up correctly either.

If you want me to give a further analysis, please provide a thread dump of
the manifoldcf process.
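
For example, a thread dump can be captured with the JDK's jstack tool (the
PID below is a placeholder; find the actual ManifoldCF Java process id with
jps or ps first):

    jps -l
    jstack <manifoldcf-pid> > manifoldcf-threaddump.txt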

Karl


On Tue, Jun 12, 2018 at 10:38 AM Bisonti Mario <mario.biso...@vimar.com>
wrote:

> Job “A” uses the “Windows Share” repository connector and the “Solr”
> output connector.
>
> Job “B” uses the “Generic Web” repository connector and the “Solr” output
> connector.
>
>
>
> I am not using a connector of my own.
>
>
>
> I set DEBUG but I get no log output.
>
>
>
>
>
> *From:* Karl Wright <daddy...@gmail.com>
> *Sent:* Tuesday, June 12, 2018 16:22
> *To:* user@manifoldcf.apache.org
> *Subject:* Re: Job in aborting status
>
>
>
> Hi Mario,
>
>
>
> What repository connector are you using for Job "B"?  Is it your own
> connector?  If so, you likely have bugs in it that are causing problems
> with the entire framework.  Please verify that this is the case; ManifoldCF
> In Action is freely available online and you should read it before writing
> connectors.
>
>
> The problems are not likely due to HSQLDB internal locks.
>
> Major errors should be logged already in manifoldcf.log by default.  If
> you want to set up connector debug logging, you need to set a
> properties.xml property, not a logging.xml property:
>
> <property name="org.apache.manifoldcf.connectors" value="DEBUG"/>
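>
> For example (a sketch assuming the standard single-process example layout;
> the other entries in your properties.xml will differ), the property goes
> inside the file's top-level <configuration> element, and ManifoldCF must be
> restarted afterwards:
>
>   <configuration>
>     <!-- ...existing properties... -->
>     <property name="org.apache.manifoldcf.connectors" value="DEBUG"/>
>   </configuration>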
>
>
>
> See: https://www.mail-archive.com/user@manifoldcf.apache.org/msg01034.html
>
>
>
>
>
>
>
> On Tue, Jun 12, 2018 at 10:03 AM Bisonti Mario <mario.biso...@vimar.com>
> wrote:
>
> I set up two jobs:
>
> Job “A” crawls “Windows Shares”
>
> Job “B” crawls my internal site
>
>
>
> The problem started when I tried to abort the second job, “B”.
>
> It hung in the aborting state.
>
>
>
> Afterwards I tried to start job “A”, but it hung in the “Starting” state
> and never started. I then tried to abort it too, and it hung in the
> Aborting state just like job “B”.
>
>
>
> I increased the log level in logging.xml to “info”, but when I start
> ManifoldCF standalone I do not see much information in logs/manifoldcf.log.
>
>
>
> I read:
>
> INFO 2018-06-12T15:58:02,748 (main) - dataFileCache open start
>
> INFO 2018-06-12T15:58:02,753 (main) - dataFileCache open end
>
>
>
> And nothing more
>
>
>
> So, I think that there could be a lock situation in the internal HSQLDB
> that I am not able to solve.
>
>
>
>
>
>
>
>
>
> *From:* Karl Wright <daddy...@gmail.com>
> *Sent:* Tuesday, June 12, 2018 15:46
> *To:* user@manifoldcf.apache.org
> *Subject:* Re: Job in aborting status
>
>
>
> Hi Mario,
>
>
>
> If you are using the single-process model, then stuck locks are not the
> problem and the lock-clean script is inappropriate to use.  Locks are all
> internal in that model.  That is why lock-clean is only distributed as part
> of the file-based multiprocess example.
>
>
>
> Please tell me more about what you have set up for your jobs on this
> example.  How many are there, and how many documents are involved?  The
> embedded HSQLDB database has limits because it caches all tables in memory,
> so the single-process example is not going to be able to handle huge jobs.
>
> Please have a look at the log to be sure there are no serious errors in it.
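>
> For instance, a quick way to scan for such errors from the example
> directory (assuming the default single-process log location):
>
>   grep -E "WARN|ERROR|FATAL" logs/manifoldcf.log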
>
>
>
> Thanks,
>
> Karl
>
>
>
>
>
>
>
>
>
> On Tue, Jun 12, 2018 at 9:26 AM Bisonti Mario <mario.biso...@vimar.com>
> wrote:
>
> No, I am testing in the /example directory, so I am using the local HSQLDB.
>
> I copied the lock-clean.sh script from
> /usr/share/manifoldcf/multiprocess-file-example to
> /usr/share/manifoldcf/example to try to clean up my situation, but perhaps
> the script is not appropriate because I am running Jetty from the example
> directory?
>
>
>
> Thanks
>
>
>
>
>
>
>
>
>
> *From:* Karl Wright <daddy...@gmail.com>
> *Sent:* Tuesday, June 12, 2018 15:23
> *To:* user@manifoldcf.apache.org
> *Subject:* Re: Job in aborting status
>
>
>
> Hi Mario,
>
>
>
> It appears you are trying to use embedded HSQLDB in a multiprocess
> environment.  That is not possible.
>
> In a multiprocess environment, you have the following choices:
>
> (1) standalone HSQLDB
>
> (2) PostgreSQL
>
> (3) MySQL
>
>
>
> Thanks,
>
> Karl
>
>
>
>
>
> On Tue, Jun 12, 2018 at 9:06 AM Bisonti Mario <mario.biso...@vimar.com>
> wrote:
>
> Thanks Karl.
>
> I tried to execute lock-clean from my example directory after I stopped
> ManifoldCF, but I get the following:
>
>
>
>
> administrator@sslrvivv01:/usr/share/manifoldcf/example$ sudo -E ./lock-clean.sh
>
> Configuration file successfully read
>
> Synchronization storage cleaned up
>
> 2018-06-12 15:03:35,395 Shutdown thread FATAL Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Cannot add new shutdown hook as this is not started. Current state: STOPPED
>         at org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.addShutdownCallback(DefaultShutdownCallbackRegistry.java:113)
>         at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:271)
>         at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
>         at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
>         at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:146)
>         at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
>         at org.apache.logging.log4j.LogManager.getContext(LogManager.java:270)
>         at org.apache.log4j.Logger$PrivateManager.getContext(Logger.java:59)
>         at org.apache.log4j.Logger.getLogger(Logger.java:37)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.hsqldb.lib.FrameworkLogger.<init>(Unknown Source)
>         at org.hsqldb.lib.FrameworkLogger.getLog(Unknown Source)
>         at org.hsqldb.lib.FrameworkLogger.getLog(Unknown Source)
>         at org.hsqldb.persist.Logger.getEventLogger(Unknown Source)
>         at org.hsqldb.persist.Logger.logInfoEvent(Unknown Source)
>         at org.hsqldb.persist.DataFileCache.logInfoEvent(Unknown Source)
>         at org.hsqldb.persist.DataFileCache.open(Unknown Source)
>         at org.hsqldb.persist.Log.getCache(Unknown Source)
>         at org.hsqldb.persist.Logger.getCache(Unknown Source)
>         at org.hsqldb.persist.Logger.newStore(Unknown Source)
>         at org.hsqldb.persist.PersistentStoreCollectionDatabase.getStore(Unknown Source)
>         at org.hsqldb.Table.getRowStore(Unknown Source)
>         at org.hsqldb.TableBase.isEmpty(Unknown Source)
>         at org.hsqldb.TableWorks.addIndex(Unknown Source)
>         at org.hsqldb.StatementSchema.getResult(Unknown Source)
>         at org.hsqldb.StatementSchema.execute(Unknown Source)
>         at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
>         at org.hsqldb.scriptio.ScriptReaderText.readDDL(Unknown Source)
>         at org.hsqldb.scriptio.ScriptReaderBase.readAll(Unknown Source)
>         at org.hsqldb.persist.Log.processScript(Unknown Source)
>         at org.hsqldb.persist.Log.open(Unknown Source)
>         at org.hsqldb.persist.Logger.open(Unknown Source)
>         at org.hsqldb.Database.reopen(Unknown Source)
>         at org.hsqldb.Database.open(Unknown Source)
>         at org.hsqldb.DatabaseManager.getDatabase(Unknown Source)
>         at org.hsqldb.DatabaseManager.newSession(Unknown Source)
>         at org.hsqldb.jdbc.JDBCConnection.<init>(Unknown Source)
>         at org.hsqldb.jdbc.JDBCDriver.getConnection(Unknown Source)
>         at org.hsqldb.jdbc.JDBCDriver.connect(Unknown Source)
>         at java.sql.DriverManager.getConnection(DriverManager.java:664)
>         at java.sql.DriverManager.getConnection(DriverManager.java:247)
>         at org.apache.manifoldcf.core.database.DBInterfaceHSQLDB.closeDatabase(DBInterfaceHSQLDB.java:161)
>         at org.apache.manifoldcf.core.system.ManifoldCF$DatabaseShutdown.closeDatabase(ManifoldCF.java:1680)
>         at org.apache.manifoldcf.core.system.ManifoldCF$DatabaseShutdown.doCleanup(ManifoldCF.java:1664)
>         at org.apache.manifoldcf.core.system.ManifoldCF.cleanUpEnvironment(ManifoldCF.java:1540)
>         at org.apache.manifoldcf.core.system.ManifoldCF$ShutdownThread.run(ManifoldCF.java:1718)
>
>
>
> What can I do?
>
>
>
> Thank you very much for your help.
>
>
>
> Mario
>
>
>
>
>
> *From:* Karl Wright <daddy...@gmail.com>
> *Sent:* Tuesday, June 12, 2018 14:26
> *To:* user@manifoldcf.apache.org
> *Subject:* Re: Job in aborting status
>
>
>
> Hi Mario,
>
>
>
> Two things you should know.  First, if you have very large jobs, it can
> take a while to abort them.  This is because the documents need to have
> their document priority cleared, and that can take a while for a large
> job.  Second, what you describe sounds like you may have stuck locks.  This
> can happen if you are using a multiprocess setup with file-based
> synchronization and you kill the processes with kill -9.  To clean this up,
> you need to perform the lock-clean procedure (a sketch of the commands
> follows the steps):
>
>
>
> (1) Shut down all manifoldcf processes
>
> (2) Execute the lock-clean script
>
> (3) Start up the manifoldcf processes
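>
> As a rough sketch, assuming a file-based multiprocess install (adjust the
> path to your own layout; steps (1) and (3) use whatever start/stop scripts
> you normally run, and only lock-clean.sh comes from the distribution):
>
>   # (1) stop all ManifoldCF processes (agents, web applications, etc.)
>   cd /path/to/manifoldcf/multiprocess-file-example
>   # (2) clear the stuck file-based synchronization locks
>   ./lock-clean.sh
>   # (3) restart the ManifoldCF processes with your usual startup scripts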
>
>
>
> Thanks,
>
> Karl
>
>
>
>
>
> On Tue, Jun 12, 2018 at 7:11 AM Bisonti Mario <mario.biso...@vimar.com>
> wrote:
>
> Hello.
>
>
>
> I have jobs in the aborting status and they hang.
>
> I tried restarting ManifoldCF and I restarted the machine, but the job
> still hangs in the aborting status.
>
>
>
> Now I am not able to start any job; they all stay in the starting status.
>
>
>
> How can I solve this?
>
>
>
> Thanks.
>
>
