P.S. I forgot to mention that editing the redirect page again fixed some 
data in SMW (no more property values are stored), but it still did not 
set the redirect marker in smw_object_ids and did not store the redirect 
in smw_fpt_redi. I did not test whether this update might have stopped 
the cycle, but I doubt it (the earlier presence of spurious property 
values should not have affected the jobs). -- Markus
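P.P.S. For anyone who wants to inspect the redirect bookkeeping directly, queries along these lines sketch what I checked (the exact table layout and the special smw_iw interwiki marker used for redirects may differ between SMW versions, so treat the column names as assumptions):

```sql
-- Is the page marked as a redirect in the SMW id table?
-- A redirect row is expected to carry a special smw_iw marker.
SELECT smw_id, smw_title, smw_namespace, smw_iw
FROM smw_object_ids
WHERE smw_title = 'Some_redirect_page';

-- Is the redirect target recorded in the redirect table?
SELECT s_title, s_namespace, o_id
FROM smw_fpt_redi
WHERE s_title = 'Some_redirect_page';
```

In the broken state described above, the first query shows no redirect 
marker and the second returns no row.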

On 13.10.2014 11:29, Markus Krötzsch wrote:
> Hi again,
>
> I have isolated a single specimen of an infinite job cycle on my wiki.
> The details are attached (I hope the attachment makes it to the list as
> well). In short, the loop is triggered by an update job on a redirect
> page. For some reason, the update job creates another update job for
> the same page.
>
> The SMW tables do not contain correct information about the redirect
> page: it has data stored about itself and is not marked as a redirect. I
> do not know if this is the cause of the problem or a side effect.
> However, since the page was saved with a #REDIRECT on it, the regular
> storing of the page should already have created correct data -- this
> should not depend on any update job.
>
> The file also contains a sample of a job queue that I had at first,
> where one of the indestructible jobs has two more copies of itself. They
> were never modified during my tests, but this might explain why a job
> queue can get longer and longer in such a case (new job instances,
> wherever they come from, are protected by their indestructible copies).
>
> The wrong SMW data might be the deeper issue here, but in any case it
> should be possible to make the UpdateJob execution robust against this
> kind of cycle to address the main problem. Maybe the UpdateJob would (if
> successful) actually fix the data, though it can hardly be its cause.
>
> Regards,
>
> Markus
>
>
> On 25.09.2014 00:02, James HK wrote:
>> Hi,
>>
>>> I have executed runJobs several times and the job_attempts remains at
>>> 1 for
>>> those five jobs. We were thinking of doing a database backup today, then
>>
>> I'm curious about the "job_attempts" field, as I would have expected it
>> to increment whenever there has been an actual attempt to execute the
>> job (and not only when a line is displayed on the command shell). To
>> see whether the job actually gets executed when running `runJobs`, just
>> add a simple `var_dump( 'hello world' )` line to [0] and verify the
>> `SMW\UpdateJob` activity.
>>
>> [0]
>> https://github.com/SemanticMediaWiki/SemanticMediaWiki/blob/master/includes/src/MediaWiki/Jobs/UpdateJob.php#L118
>>
>>
>> Cheers
>>
>> On 9/25/14, Daren Welsh <darenwe...@gmail.com> wrote:
>>> I have executed runJobs several times and the job_attempts remains at
>>> 1 for
>>> those five jobs. We were thinking of doing a database backup today, then
>>> delete those five jobs from the table, then run the SMW "repair and
>>> upgrade" via the admin special page.
>>>
>>> Even if this clears the job queue, we'd like to understand what
>>> caused this
>>> in the first place. I realize that's a very open-ended question :)
>>>
>>> Daren
>>>
>>>
>>> On Wed, Sep 24, 2014 at 4:30 PM, James HK <jamesin.hongkon...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>>> job_attempts.
>>>>>
>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>> template page.
>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>>>> are
>>>> for
>>>>> "standard" pages. These pages do not seem to be related based on
>>>>> category
>>>>> or template.
>>>>
>>>> Just to make sure that I interpret the meaning of "stuck" correctly:
>>>> after finishing `runJobs`, those four jobs (five with the
>>>> `refreshLinks` job) are still visible in the job table with a
>>>> "job_attempts" of 1, and when running `runJobs` again the same four
>>>> `SMW\UpdateJob` jobs (same title and same id) are executed and
>>>> increment "job_attempts" to 2?
>>>>
>>>> If you empty the job table and execute `runJobs`, do the same five
>>>> jobs appear again after the run with "job_attempts" = 1?
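>>>>
>>>> To see the full picture, something like this against the job table
>>>> should list the stuck jobs with their attempt counters (a sketch;
>>>> the exact column set depends on the MW version):
>>>>
>>>> ```sql
>>>> -- List all queued jobs with their attempt counters
>>>> SELECT job_id, job_cmd, job_namespace, job_title, job_attempts
>>>> FROM job
>>>> ORDER BY job_cmd, job_title;
>>>> ```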
>>>>
>>>> Cheers
>>>>
>>>> On 9/25/14, Daren Welsh <darenwe...@gmail.com> wrote:
>>>>> We currently have five jobs that are "stuck". All of them have 1 for
>>>>> job_attempts.
>>>>>
>>>>> One has job_cmd of refreshLinks in job namespace 10 and it is for a
>>>>> template page.
>>>>> The other four have job_cmd of SMW\UpdateJob in job namespace 0 and
>>>>> are
>>>> for
>>>>> "standard" pages. These pages do not seem to be related based on
>>>>> category
>>>>> or template.
>>>>>
>>>>> On Wed, Sep 24, 2014 at 3:37 PM, James HK
>>>>> <jamesin.hongkon...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>>> are
>>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>>> all
>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>
>>>>>> I don't think the problem is with the `SMW\UpdateJob` because it does
>>>>>> a simple "shallow update" of the store, while the management of job
>>>>>> status (including the number of attempts, ids, etc.) is done by the MW
>>>>>> JobQueue (which first changed in 1.22 and then again in 1.23).
>>>>>>
>>>>>> It does beg the question whether all `SMW\UpdateJob`s are "stuck",
>>>>>> or only certain jobs belonging to a group of pages or to a single page?
>>>>>>
>>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>>> and
>>>>>> over.
>>>>>>
>>>>>> How do you know that the same job is run over and over again?
>>>>>> Based on the above discussion ("job_attempts"), a job with too many
>>>>>> attempts is retired after some time.
>>>>>>
>>>>>> If the same job is run over and over again, what is displayed for the
>>>>>> "job_attempts" counter?
>>>>>>
>>>>>> [0] went into SMW 2.0 to counteract any possible job duplicates for
>>>>>> the same `root title`.
>>>>>>
>>>>>> [0] https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/307
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> On 9/25/14, James Montalvo <jamesmontal...@gmail.com> wrote:
>>>>>>> I'm not sure if this is related, but on my wiki I'm occasionally
>>>>>>> getting
>>>>>>> "stuck" jobs. I've only noticed this since upgrading to MW 1.23 and
>>>> SMW
>>>>>> 2.0
>>>>>>> from 1.22/1.8.0.5.
>>>>>>>
>>>>>>> What I mean by "stuck" is that the jobs don't get executed when I do
>>>>>>> runJobs.php, but for some reason they keep attempting to run over
>>>>>>> and
>>>>>> over.
>>>>>>> runJobs.php will literally run forever. After the non-offending jobs
>>>>>>> are
>>>>>>> cleared it's easy to see which are the offenders. Thus far I think
>>>>>>> all
>>>>>>> offenders have been of type SMW::UpdateJob.
>>>>>>>
>>>>>>> Is there some way to debug runJobs.php so I can provide better info?
>>>>>>>
>>>>>>> --James
>>>>>>> On Sep 24, 2014 10:55 AM, "Yaron Koren" <ya...@wikiworks.com> wrote:
>>>>>>>
>>>>>>>> I certainly hope so too - or that there's some other standard way
>>>>>>>> to
>>>>>>>> get
>>>>>>>> previously-attempted jobs to be run again. I only know that I tried
>>>>>>>> that
>>>>>>>> SQL trick once, and it worked. Perhaps this is another reason why
>>>>>>>> the
>>>>>>>> question should have instead been sent to the mediawiki-l mailing
>>>>>>>> list.
>>>>>>>> :)
>>>>>>>>
>>>>>>>> On Wed, Sep 24, 2014 at 11:35 AM, James HK <
>>>>>> jamesin.hongkon...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>>> if
>>>>>>>>>> you
>>>>>>>>> just
>>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>>> job_attempts =
>>>>>>>>>> 0", they will get run again.
>>>>>>>>>
>>>>>>>>> In case this solves the issue, I sincerely hope there is a
>>>> different
>>>>>>>>> way (a more standard way) to reset the "job_attempts" field other
>>>>>>>>> than
>>>>>>>>> by using a SQL statement to manipulate the job table.
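>>>>>>>>>
>>>>>>>>> If one does go the SQL route, limiting the reset to the affected
>>>>>>>>> job type seems safer than resetting every row (a sketch; note
>>>>>>>>> that the backslash must be escaped inside a MySQL string
>>>>>>>>> literal):
>>>>>>>>>
>>>>>>>>> ```sql
>>>>>>>>> -- Reset the attempt counter only for the stuck SMW update jobs
>>>>>>>>> UPDATE job
>>>>>>>>> SET job_attempts = 0
>>>>>>>>> WHERE job_cmd = 'SMW\\UpdateJob';
>>>>>>>>> ```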
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>>
>>>>>>>>> On 9/25/14, Yaron Koren <ya...@wikiworks.com> wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I believe the issue is the "job_attempts" field in the "job"
>>>>>>>>>> table.
>>>>>> I
>>>>>>>>>> believe each job is only attempted a certain number of times
>>>>>>>>>> before
>>>>>>>>>> MediaWiki basically just gives up and ignores it. My guess is
>>>> that
>>>>>>>>>> that
>>>>>>>>>> column is greater than 0 for all the rows in the table; I think
>>>> if
>>>>>>>>>> you
>>>>>>>>> just
>>>>>>>>>> go into the database and call something like "UPDATE job SET
>>>>>>>>> job_attempts =
>>>>>>>>>> 0", they will get run again.
>>>>>>>>>>
>>>>>>>>>> -Yaron
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> WikiWorks · MediaWiki Consulting · http://wikiworks.com
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Semediawiki-user mailing list
>>>>>>>> semediawiki-u...@lists.sourceforge.net
>>>>>>>> https://lists.sourceforge.net/lists/listinfo/semediawiki-user
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Semediawiki-devel mailing list
>>>>>> Semediawiki-devel@lists.sourceforge.net
>>>>>> https://lists.sourceforge.net/lists/listinfo/semediawiki-devel
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> __________________
>>>>> http://mixcloud.com/darenwelsh
>>>>> http://www.beatportfolio.com
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> __________________
>>> http://mixcloud.com/darenwelsh
>>> http://www.beatportfolio.com
>>>
>>
>>
>

