aaron added a comment.
I noticed that addUsages() uses
$this->db->replication()->wait(), which uses
LBFactory::waitForReplication(). Doesn't that mean it's waiting
for replicas without committing each batch? It seems like it would just hold
more and more locks while waiting for
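For illustration, a minimal sketch of the pattern being questioned; the loop, table name, and variables are assumptions, not the actual addUsages() code:
```
// Hypothetical batch loop: each wait happens inside a still-open transaction,
// so row locks from the prior inserts are held for the whole wait.
foreach ( array_chunk( $rows, $batchSize ) as $batch ) {
	$dbw->insert( 'wbc_entity_usage', $batch, __METHOD__ );
	$lbFactory->waitForReplication(); // no commit first, so locks pile up
}
```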
aaron closed this task as a duplicate of T225969: Per template/Lua module
profiling with Grafana dashboards.
TASK DETAIL
https://phabricator.wikimedia.org/T237249
aaron added a comment.
I don't think Wikibase, or MediaWiki code generally, should be so tightly
coupled to the driver returning strings.
Ideally, I somewhat prefer using native PHP integers...but our
sqlite/mysql/postgres Database subclasses should be consistent. We can possibly
aaron added a comment.
\Wikibase\Repo\Store\Sql\SqlIdGenerator definitely looks prone to deadlocks.
It should probably work more like NameTableStore (named locks + auto-commit
trx).
TASK DETAIL
https://phabricator.wikimedia.org/T298682
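A minimal sketch of the NameTableStore-style approach suggested above (an auto-commit connection plus a named lock); the wb_id_counters table/columns follow Wikibase's schema, but the surrounding wiring is assumed:
```
$dbw = $lb->getConnection( DB_MASTER, [], false, ILoadBalancer::CONN_TRX_AUTOCOMMIT );
if ( !$dbw->lock( 'SqlIdGenerator:' . $type, __METHOD__, 5 ) ) {
	throw new RuntimeException( "Could not acquire ID generator lock for '$type'." );
}
try {
	// With CONN_TRX_AUTOCOMMIT each statement commits immediately, so no
	// long-lived row lock is held for other threads to deadlock against.
	$row = $dbw->selectRow( 'wb_id_counters', 'id_value', [ 'id_type' => $type ], __METHOD__ );
	$id = (int)$row->id_value + 1;
	$dbw->update( 'wb_id_counters', [ 'id_value' => $id ], [ 'id_type' => $type ], __METHOD__ );
} finally {
	$dbw->unlock( 'SqlIdGenerator:' . $type, __METHOD__ );
}
```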
aaron moved this task from Inbox to Radar on the Performance-Team board.
aaron edited projects, added Performance-Team (Radar); removed Performance-Team.
TASK DETAIL
https://phabricator.wikimedia.org/T195792
WORKBOARD
https://phabricator.wikimedia.org/project/board/1212/
aaron merged a task: T293536: MediaWiki should support setting a read query
time limit.
aaron added subscribers: dpifke, Kormat, CDanis.
Restricted Application added a project: Performance-Team.
TASK DETAIL
https://phabricator.wikimedia.org/T195792
aaron added a comment.
Regarding RedisLockManager (it only needs 2 of the 3 hosts to be reachable):
if one of them is depooled or refuses connections, no one should notice any
disruption. For otherwise unreachable servers, there is a 2 second timeout (and
the redis server will be avoided
aaron added a comment.
From the perspective of popular/major articles, which are likely to have infoboxes,
the extra 42.1 KB for loading the "app" JS doesn't seem crazy. I've looked through
the code several times and it seems reasonable. Testing with fast/slow 3G doesn't
reveal obnoxious reflow
aaron added a comment.
I've been looking at this from time to time, and haven't found any real
problems yet. Some of the things I'm looking out for are:
- Pageview critical path effects:
- Bytes (JS)
- Bytes (CSS+images)
- Page load delay
- First input delay
aaron added a comment.
In T246456#6116523 <https://phabricator.wikimedia.org/T246456#6116523>,
@darthmon_wmde wrote:
> hey @aaron, @Gilles,
>
> could you, please, give us an update on this task? could you also tell us
something we could tackle proactively that may h
aaron closed this task as "Resolved".
TASK DETAIL
https://phabricator.wikimedia.org/T248147
aaron added a comment.
In T157651#6039302 <https://phabricator.wikimedia.org/T157651#6039302>, @Tgr
wrote:
> I would suggest the opposite: keep `sql.php`, drop `patchSql.php`. I don't
think many people are familiar with the latter (compare patchSql
<https://www.med
aaron claimed this task.
TASK DETAIL
https://phabricator.wikimedia.org/T248147
aaron added a comment.
In T183993#5801637 <https://phabricator.wikimedia.org/T183993#5801637>,
@Addshore wrote:
>> If there is more than one case (I checked and it seems it's two cases in
the past 7 days), we need to make sure tools or Wikibase itself put some time
betw
aaron added a comment.
Note that CdnCacheUpdate queues a purge to happen X seconds later to help
deal with lag (mediawiki-config sets $wgCdnReboundPurgeDelay to 11 seconds). If
lag gets near that amount, then $wgCdnMaxageLagged will kick in.
TASK DETAIL
https://phabricator.wikimedia.org/T227758
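For reference, the two settings mentioned; the values here are illustrative (only the 11s delay is stated above):
```
// LocalSettings.php / wmf-config style
$wgCdnReboundPurgeDelay = 11; // resend each purge ~11s later to cover replica lag
$wgCdnMaxageLagged = 30;      // cap CDN object TTL while replica lag is high
```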
aaron closed this task as "Resolved".
TASK DETAIL
https://phabricator.wikimedia.org/T221577
aaron added a comment.
In T212550#5124323 <https://phabricator.wikimedia.org/T212550#5124323>,
@Smalyshev wrote:
> @aaron Is there any docs how to use the client ID on the client side? I see
there's support for `cpPosIndex` cookie which is `$index@$time#$clientId` but
if I
aaron added a comment.
In T212550#5124280 <https://phabricator.wikimedia.org/T212550#5124280>,
@Krinkle wrote:
> Another question for @aaron as well - If we start storing this
chronologyprotector field in kafka etc. that means it can survive longer and no
longer has an expir
aaron added a comment.
I think you can just add a method, similar to
LBFactory::getChronologyProtectorTouched, that exposes the client ID; maybe
call it LBFactory::getChronologyProtectorClientId().
TASK DETAIL
https://phabricator.wikimedia.org/T212550
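A minimal sketch of what such a method could look like; the placement in LBFactory and the internal getter are assumptions:
```
// Hypothetical LBFactory method, mirroring getChronologyProtectorTouched()
public function getChronologyProtectorClientId() {
	return $this->getChronologyProtector()->getClientId();
}
```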
aaron added a comment.
Why does the job itself have all of the transformed text rather than just a
revision/page ID that it could use to derive the transformed text? I get that some
metadata is not stored elsewhere and would have to go in the job.
TASK DETAIL
https://phabricator.wikimedia.org
aaron added a comment.
Yes, and CacheAwarePropertyInfoStore should use delete() or such for purges
rather than set(). Using set() would only affect one DC.
TASK DETAIL
https://phabricator.wikimedia.org/T218197
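The distinction, sketched against the WANObjectCache API; the key name is illustrative:
```
$cache = MediaWikiServices::getInstance()->getMainWANObjectCache();
// delete() writes a tombstone that is relayed to the caches in all DCs
$cache->delete( $cache->makeKey( 'property-info', $propertyId ) );
// whereas set() only writes to the local DC's cache, leaving other DCs stale
```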
aaron added a comment.
That sounds right.
TASK DETAIL
https://phabricator.wikimedia.org/T194299
aaron added a comment.
I see EntityRevisionCache and CacheAwarePropertyInfoStore seem to use set() on invalidation. Also, EntityRevisionCache
and CachingEntityRevisionLookup, and PopulateInterwiki seem to call delete() on a non-WAN cache instance.
TASK DETAIL
https://phabricator.wikimedia.org
aaron added a comment.
In T194299#4714630, @daniel wrote:
> In T194299#4714614, @aaron wrote:
>> openConnection is badly named and still reuses connections. You'd probably want getConnection with CONN_TRX_AUTO
> I hate this hack. This may *still* re-use connections, if anything else used
aaron added a comment.
openConnection is badly named and still reuses connections. You'd probably want getConnection with CONN_TRX_AUTO.
TASK DETAIL
https://phabricator.wikimedia.org/T194299
aaron added a comment.
Ah, right, I read that ternary backwards: `$maxTime < PHP_INT_MAX ? PHP_INT_MAX : 1`.
TASK DETAIL
https://phabricator.wikimedia.org/T200420
aaron added a comment.
In T200420#4453134, @Addshore wrote:
> Something to note: because the locks are no longer in the DB, we end up selecting the same 15 or so wikis that are locked all of the time.
It could be that the other wikis actually don't have locks:
before using the redis lock manager
aaron added a comment.
Yes, https://gerrit.wikimedia.org/r/396546 .
TASK DETAIL
https://phabricator.wikimedia.org/T182322
aaron added a comment.
In T181385#3806113, @gerritbot wrote:
> Change 394779 had a related patch set uploaded (by Aaron Schulz; owner: Aaron Schulz):
> [mediawiki/core@master] Try to opportunistically flush statsd data in maintenance scripts
> https://gerrit.wikimedia.org/r/394779
This seems more
aaron added a comment.
How long do these run? The sample rate in config is set to be extremely low. So perhaps:
- The buffering class buffers things that won't even be saved
- The buffering could be disabled in CLI mode
TASK DETAIL
https://phabricator.wikimedia.org/T181385
aaron added a comment.
Probably hotTTR is way too high. It's really "expected time till refresh given 1 hit/sec". With 50/min, you'd get maybe 2 updates (new values) per regex. I'll put up a patch for that.
TASK DETAIL
https://phabricator.wikimedia.org/T173696
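For context, hotTTR is one of the options to WANObjectCache::getWithSetCallback(); a sketch of how it might apply here, with the key and callback as assumptions:
```
$result = $cache->getWithSetCallback(
	$cache->makeKey( 'constraint-regex-check', $regexHash, $valueHash ),
	WANObjectCache::TTL_DAY,
	function () use ( $regex, $value ) {
		return (int)preg_match( $regex, $value );
	},
	// 'hotTTR' is roughly "seconds until a refresh at 1 hit/sec"; at ~50
	// hits/min per regex it needs to be much lower to yield fresh fills
	[ 'hotTTR' => 900 ]
);
```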
aaron added a comment.
In T173696#3696294, @Lucas_Werkmeister_WMDE wrote:
> I did a bunch of requests against https://www.wikidata.org/w/api.php?action=..., which checks a format constraint for “title”. It’s always the same regex and only a handful of different values (17). But whil
aaron added a comment.
In T173696#3690945, @Lucas_Werkmeister_WMDE wrote:
> Reopening. This task is supposed to be for caching results in general, which isn’t done yet at all, though we had a lot of discussion on caching regex checks specifically here, which in hindsight should’ve been
aaron closed subtask T42451: "Transaction already in progress" error in sqlite as "Resolved".
TASK DETAIL
https://phabricator.wikimedia.org/T72710
aaron added a comment.
In T173696#3620700, @Lucas_Werkmeister_WMDE wrote:
> Interesting idea! It feels a bit weird to implement logic like this on top of the cache (I thought that’s the cache’s job?), but you’re the expert :) it sounds like it makes a lot of sense, at least, since the set
aaron added a comment.
If we want to avoid flooding the cache with rarely used long-tail combinations, maybe something like this could be done:
$textHash = hash( 'sha256', $text );
$cacheMap = $this->cache->getWithSetCallback(
	$this->cache->makeKey(
		'WikibaseQualit
aaron added a comment.
Those refreshLinks jobs (from Wikibase) are the only ones that use multiple titles per job, so they will be a lot slower (seems to be 50 pages/job) than the regular ones from MediaWiki core. That is a bit on the slow side for the run time of a non-rare job type (e.g. TMH
aaron added a comment.
In T173710#3571046, @EBernhardson wrote:
> In T173710#3571009, @Legoktm wrote:
>> Could we always bump page_touched, but only send the purges to varnish if the timestamp is within the past four days? Would that let us run the older jobs faster since if I understand correctly
aaron added a comment.
In T173710#3570037, @Joe wrote:
> Correcting myself after a discussion with @ema: since we have up to 4 cache layers (at most), we should process any job with a root timestamp newer than 4 times the cache TTL cap. So anything older than 4 days should be safely discardable
aaron added a comment.
As far as retries go, the attempts hash for wikidatawiki:htmlCacheUpdate has a few entries, with run counts no greater than 3. The only incrementing code is doPop() in MediaWiki, the same code that made them go up to 3 to begin with. If the same job ran many times, I'd expect
aaron added a comment.
In T174422#3566350, @Krinkle wrote:
> @Ladsgroup @aaron Would $wgUpdateRowsPerQuery be appropriate here, too? Or is it important for this particular query to use a different batch size?
Most of the pure waiting in the job will be for replication (the throttling just makes
aaron added a comment.
Though this bit is problematic:
"page_touched < " . $dbw->addQuotes( $dbw->timestamp( $touchTimestamp ) )
...seems like that comparison should use rootJobTimestamp if present.
TASK DETAIL
https://phabricator.wikimedia.org/T173710
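A sketch of the suggested change; rootJobTimestamp is the real job parameter, but the surrounding variable names are assumptions:
```
// Prefer the root job's timestamp as the cutoff when present
$casTimestamp = isset( $this->params['rootJobTimestamp'] )
	? $this->params['rootJobTimestamp']
	: $touchTimestamp;
$conds[] = 'page_touched < ' . $dbw->addQuotes( $dbw->timestamp( $casTimestamp ) );
```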
aaron added a comment.
Ignored purges still count as work items, yes.
Rebound purges could explain some of the number. Also, given the backlog, lots of them probably actually had different rootJobTimestamps. MediaWiki can de-duplicate those when it's the same backlinked page X being edited
aaron added a comment.
Note that for de-duplication, as long as the job has rootJobTimestamp set, it will ignore rows already touched (page_touched) to a higher/equal value, and likewise not send purges to the corresponding pages. So the CDN aspects *should* already have lots of de-duplication
aaron added a comment.
In T173710#3551156, @aaron wrote:
> Secondary purges were for dealing with replication lag scenarios, not lost purges. That was one extra purge (2X).
One easy change I can see is to not use CdnCacheUpdate from HtmlCacheUpdateJob (but still for the pages directly being edited
aaron added a comment.
Secondary purges were for dealing with replication lag scenarios, not lost purges. That was one extra purge (2X).
One easy change I can see is to not use CdnCacheUpdate from HtmlCacheUpdateJob (but still for the pages directly being edited). There is already processing delay
aaron added a comment.
In T173710#3548223, @daniel wrote:
> In T173710#3547580, @aaron wrote:
>> In other words, base jobs for entities that will divide up and purge all backlinks to the given entity. Note that each job has two entries.
> Wait - each job has two entries? You mean
aaron added a comment.
From
mwscript maintenance/runJobs.php wikidatawiki --type htmlCacheUpdate --nothrottle --maxjobs 100 | grep "IsSelf=1"
I can see almost all of the jobs are things like:
2017-08-24 01:15:39 htmlCacheUpdate Q36985371 table=pagelinks recursive=1 rootJ
aaron moved this task from Inbox to Radar on the Performance-Team board.
aaron edited projects, added Performance-Team (Radar); removed Performance-Team.
TASK DETAIL
https://phabricator.wikimedia.org/T173710
WORKBOARD
https://phabricator.wikimedia.org/project/board/1212/
aaron added a comment.
I commented on the patch; it's a METHOD mismatch problem, so the commit/wait steps don't happen (just the one big one like before).
TASK DETAIL
https://phabricator.wikimedia.org/T164173
aaron added a comment.
In T164173#3420723, @daniel wrote:
> @aaron another question: does RefreshLinksJob also purge the CDN cache automatically? should it? It does update the parser cache...
It saves the cache as a convenience in some cases (since the relevant htmlCacheUpdate job uses
aaron added a comment.
I also wonder why some of those log warnings come from close() and others have the proper commitMasterChanges() bit in the stack trace. Normally, there should be nothing to commit by close(); it just commits for sanity.
TASK DETAIL
https://phabricator.wikimedia.org
aaron removed aaron as the assignee of this task.
TASK DETAIL
https://phabricator.wikimedia.org/T164173
aaron added a comment.
In T164173#3343495, @aaron wrote:
> @daniel, can you look into the amount of purges happening in ChangeNotification jobs? I don't see any throttling or lag checks in the job code.
/rpc/RunJobs.php?wiki=commonswiki&type=ChangeNotification&maxtime=60&maxmem=300M
Expectation (maxAffected
aaron added a subscriber: daniel.
aaron added a comment.
@daniel, can you look into the amount of purges happening in ChangeNotification jobs? I don't see any throttling or lag checks in the job code.
TASK DETAIL
https://phabricator.wikimedia.org/T164173
aaron triaged this task as "Normal" priority.
TASK DETAIL
https://phabricator.wikimedia.org/T164173
aaron added a project: Wikidata.
TASK DETAIL
https://phabricator.wikimedia.org/T164173
aaron removed aaron as the assignee of this task.
TASK DETAIL
https://phabricator.wikimedia.org/T124418
of fascinating things
with a model like this. Honestly, I think the criteria is coming together
quite nicely and we're just starting a pilot labeling campaign to work
through a set of issues before starting the primary labeling drive.
1. https://ores.wikimedia.org
-Aaron
On Wed, Mar 22, 2017 at 6:39 AM
aaron edited projects, added Wikidata; removed WMF-deploy-2016-10-25_(1.28.0-wmf.23), WMF-deploy-2016-09-13_(1.28.0-wmf.19), MW-1.28-release-notes.
TASK DETAIL
https://phabricator.wikimedia.org/T154596
aaron added a comment.
If the owner fatals, the lock will have to expire (the TTL depends on the LockManager instance config and/or whether the context is CLI or web).
TASK DETAIL
https://phabricator.wikimedia.org/T151993
aaron added a comment.
There is no analogous method. Maybe a non-blocking engageClientLock() call can replace the isClientLockUsed() call? Seems like chd_lock is used to determine whether to do a non-blocking check first before a blocking acquisition (which could race anyway, which I guess just
aaron added a comment.
There is also a flip-side to automatically dropping on connection loss, which is that loss can happen (possibly due to the net_wait_timeout options) while the connection to DBs actually being updated stays alive. In that case, multiple threads could run on the same client
aaron added a comment.
How often would locks be dropped? Using ScopedLock would handle exceptions in non-lock code. The shutdown handler usually catches SIGINT. I guess there are still fatal errors, though I'd hope that sort of thing would be rare. In that case, the redis lock manager used by our
aaron added a comment.
Does getLazyConnectionRef() help here?
TASK DETAIL
https://phabricator.wikimedia.org/T147169
aaron added a comment.
Are there more useful/direct traces?
TASK DETAIL
https://phabricator.wikimedia.org/T148419
aaron added a comment.
It should be a goal to replace usage of the maintenance config script, but not high priority IMO.
TASK DETAIL
https://phabricator.wikimedia.org/T145819
aaron added a comment.
Also seeing:
Expectation (masterConns <= 0) by ApiMain::setRequestExpectations not met:
[connect to 10.64.16.144 (wikidatawiki)]
#0 /srv/mediawiki/php-1.28.0-wmf.20/includes/libs/rdbms/TransactionProfiler.php(156): TransactionProfiler->reportExpectationViolated()
#
aaron added a comment.
Does addUsages() get called when no other writes are pending commit? If so, you can do the usual getEmptyTransactionTicket/commitAndWaitForReplication dance. If not, you'd have to pass the ticket down from above...
TASK DETAIL
https://phabricator.wikimedia.org/T146079
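The "dance" referred to, sketched with an illustrative table name and batch size:
```
$lbFactory = MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
// Asserts that no writes are pending; otherwise the ticket is "dirty"
$ticket = $lbFactory->getEmptyTransactionTicket( __METHOD__ );
foreach ( array_chunk( $rows, 100 ) as $batch ) {
	$dbw->insert( 'wbc_entity_usage', $batch, __METHOD__ );
	// Commits the transaction round, then waits for replicas to catch up
	$lbFactory->commitAndWaitForReplication( __METHOD__, $ticket );
}
```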
aaron added a comment.
The backport should get the maintenance script call rate back to the old status quo (rare), so re-deploy is worth attempting.
TASK DETAIL
https://phabricator.wikimedia.org/T145819
aaron created this task.
aaron added a project: Wikidata.
Herald added a subscriber: Aklapper.
TASK DESCRIPTION
I keep seeing this in DBPerformance.log:
Expectation (writeQueryTime <= 5) by JobRunner::run not met (actual: 5.3946626186371):
[transaction 3d09beb22c0e writes to 10.64.16.30 (row
aaron merged a task: T140967: WikiPageEntityStore::updateWatchlist breaks implicit transactions.
TASK DETAIL
https://phabricator.wikimedia.org/T140955
aaron closed this task as a duplicate of T140955: Wikibase\Repo\Store\WikiPageEntityStore::updateWatchlist: Automatic transaction with writes in progress (from DatabaseBase::query (LinkCache::addLinkObj)), performing implicit commit!.
TASK DETAIL
https://phabricator.wikimedia.org/T140967
aaron added a comment.
The watchlist one is likely fixed by d484555db6b734ef56edf2d521dbcfb54170c7a6 in core. The extension code looks OK.
The other one is caused by using deadlockLoop(), which I'd suggest removing.
TASK DETAIL
https://phabricator.wikimedia.org/T140955
aaron added a comment.
Premature commits like this make multi-DB transactions less safe. Only Job/DeferrableUpdate/Maintenance code should be flushing transactions. If something needs its own transaction, it should use some sort of deferred update.
TASK DETAIL
https://phabricator.wikimedia.org
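One way to follow that advice, as a sketch; the callback body and table/column names are illustrative:
```
DeferredUpdates::addCallableUpdate( function () use ( $rowId ) {
	// Runs in its own transaction round after the main round commits
	$dbw = wfGetDB( DB_MASTER );
	$dbw->delete( 'some_table', [ 'some_id' => $rowId ], __METHOD__ );
} );
```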
aaron created this task.
aaron added projects: Wikidata, MediaWiki-Database.
Herald added a subscriber: Aklapper.
TASK DESCRIPTION
From logstash:
DatabaseBase::deadlockLoop: Automatic transaction with writes in progress (from DatabaseBase::query (LinksUpdate::acquirePageLock)), performing implicit
aaron created this task.
aaron added projects: Wikidata, MediaWiki-Database.
Herald added a subscriber: Aklapper.
TASK DESCRIPTION
From logstash:
Wikibase\Repo\Store\WikiPageEntityStore::updateWatchlist: Automatic transaction with writes in progress (from DatabaseBase::query (LinkCache::addLinkObj
aaron added a comment.
Editing needs to do conflict detection anyway: fresh data sent to the client becomes stale while they wait, so conflicts have to be detected regardless.
The second case should use slaves too, just like templates/files use slaves, which works fine.
aaron edited the task description.
TASK DETAIL
https://phabricator.wikimedia.org/T110399
aaron changed the title from "WikiPageEntityMetaDataLookup querying DB master on GET" to "WikiPageEntityMetaDataLookup querying DB master on HTTP GET".
TASK DETAIL
https://phabricator.wikimedia.org/T110399
aaron changed the title from "SpecialModifyEntity querying DB master on GET" to "WikiPageEntityMetaDataLookup querying DB master on GET".
aaron edited the task description.
TASK DETAIL
https://phabricator.wikimedia.org/T110399
aaron raised the priority of this task from "High" to "Unbreak Now!".
Herald added subscribers: Luke081515, TerraCodes, Urbanecm.
TASK DETAIL
https://phabricator.wikimedia.org/T135485
aaron added a comment.
If the # of rows it might delete has no sane upper bound, then it should be a
job. If it's just 100s at most, then it could use a DeferredUpdate class IMO.
TASK DETAIL
https://phabricator.wikimedia.org/T135485
aaron added a comment.
This is now one of the two main remaining causes of warnings in this log.
TASK DETAIL
https://phabricator.wikimedia.org/T110399
aaron added a comment.
Has anyone had a chance to look at this lately?
The patch doesn't seem to load for me.
TASK DETAIL
https://phabricator.wikimedia.org/T108929
aaron placed this task up for grabs.
TASK DETAIL
https://phabricator.wikimedia.org/T133422
aaron created this task.
TASK DESCRIPTION
Expectation (masterConns <= 0) by MediaWiki::main not met:
[connect to 10.64.16.144 (wikidatawiki)]
TransactionProfiler.php line 311 calls wfBacktrace()
TransactionProfiler.php line 146 calls TransactionProfiler->reportExpectationVi
aaron removed a blocked task: T88445: MediaWiki multi-datacenter investigation
and work.
TASK DETAIL
https://phabricator.wikimedia.org/T88986
aaron added a comment.
In https://phabricator.wikimedia.org/T88986#1692565, @aude wrote:
> @aaron https://phabricator.wikimedia.org/T108929 is the one issue I am
aware of that should be fixed asap. I'm not sure what else...
Is anyone looking at that? I think I still
aaron closed this task as "Declined".
aaron claimed this task.
Herald removed a subscriber: Liuxinyu970226.
TASK DETAIL
https://phabricator.wikimedia.org/T69117
aaron created this task.
aaron added subscribers: aaron, BBlack.
aaron added a project: Wikidata.
Herald added subscribers: StudiesWorld, Aklapper.
TASK DESCRIPTION
I see errors in the logs like:
```
LoadBalancer::commitAll 10.64.48.26: 2006 MySQL server has gone away
```
aaron changed the task status from "Open" to "Stalled".
aaron added a comment.
Was this deployed already? Perhaps this can just be closed.
TASK DETAIL
https://phabricator.wikimedia.org/T103912
aaron closed blocking task T45575: PHP notices: Explicit commit of implicit
transaction and Transaction already in progress as "Resolved".
TASK DETAIL
https://phabricator.wikimedia.org/T75456
aaron added a subscriber: aaron.
aaron added a comment.
See "+channel:DBPerformance +message:"*connections made*"" at
logstash.wikimedia.org. I see lots of concurrent connections from mw1152
scripts (if reuseConnection was called properly I'd assume there would be ~7 or
so,
aaron added a comment.
I'll try to look into whether the problem is in core or not, though I need to
spend time on some other tasks too.
TASK DETAIL
https://phabricator.wikimedia.org/T118162
aaron added a subscriber: aaron.
aaron added a comment.
Sounds like the result of fixing rpc/RunJobs to properly run jobs till
the 30 sec limit rather than 1 at a time (which wasted huge amounts of time in
setup overhead and caused massive job backlogs, particularly for the 'enqueue
aaron added a subscriber: aaron.
aaron added a comment.
If this bug is Wikidata-specific, can someone update the title to reflect that?
Also, is anyone working on this yet?
TASK DETAIL
https://phabricator.wikimedia.org/T88986
aaron placed this task up for grabs.
aaron set Security to None.
TASK DETAIL
https://phabricator.wikimedia.org/T110399
aaron added a comment.
The JOIN for query #2 does not seem to have an index.
wbqev_identifier_properties has:
PRIMARY KEY (identifier_pid, dump_id)
...but nothing like (dump_id). Is the wbqev_identifier_properties table going
to be pruned of older dumps, or will it just keep growing?
Query #3
aaron created this task.
aaron claimed this task.
aaron added subscribers: Glaisher, MZMcBride, Nemo_bis, Gilles, aaron,
Aklapper, bd808, gerritbot, PleaseStand, Krenair, Joe.
aaron added projects: Availability, Wikidata.
TASK DESCRIPTION
[GET] Expectation (masterConns <= 0) by MediaWiki::main