Ashar Voultoiz wrote:
On 06/06/11 00:56, K. Peachey wrote:
snip
Since it skips the cache, can't we disable that stub highlighter once and for all?
Logged in users don't get cached versions of the page...
I am well aware of that. The root cause being the various options
available to users, my
Federico Leva (Nemo) wrote:
Platonides writes:
No.
Currently it would mean not caching any page view.
The feature would need to be adapted to allow efficient stub linking
(I have some ideas about it, and the new linker makes things easier).
Still, we might not allow stub links for anons
Aryeh Gregor wrote:
On Sat, Jun 4, 2011 at 6:54 PM, Platonides platoni...@gmail.com wrote:
No.
Currently it would mean not caching any page view.
The feature would need to be adapted to allow efficient stub linking
(I have some ideas about it, and the new linker makes things easier).
What
Tisza Gergö wrote:
Hi all,
MediaWiki has a user setting to add a CSS class to article links whose length
is below a certain threshold (preferences/appearance/advanced options/threshold
for stub link formatting). Is it possible to enable this by default on a
Wikimedia wiki?
No.
Currently
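For context, a hedged sketch in PHP of what that preference does when a link is rendered (names assumed; the real logic lives in MediaWiki's Linker code):

<?php
// Hedged sketch, not MediaWiki's actual Linker code: when rendering an
// internal link, compare the target page's byte length against the user's
// configured threshold and mark short targets with a "stub" CSS class.
function getLinkClass( $targetPageLength, $stubThreshold ) {
    // $stubThreshold is the user preference in bytes; 0 means disabled.
    if ( $stubThreshold > 0 && $targetPageLength < $stubThreshold ) {
        return 'stub'; // users then style a.stub via their CSS
    }
    return '';
}

The caching problem discussed in the replies above follows directly: the class depends on a per-user preference, so rendered pages can no longer be shared from the anonymous page cache.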
Chad wrote:
On Thu, Jun 2, 2011 at 1:13 PM, Siebrand Mazeland s.mazel...@xs4all.nl
wrote:
Looks like this may be something many on this list would like to know asap.
I think the mail is in the moderation queue because Danese may not be
subscribed.
Nothing's in the queue, it must not've
Chad wrote:
On Wed, Jun 1, 2011 at 9:09 PM, Russell N. Nelson - rnnelson
rnnel...@clarkson.edu wrote:
I've done release engineering before. It is my considered opinion that one
person needs to be on point for each release. It's possible that that person
could rotate in and out, but one
Wilfredor wrote:
Hi,
I have a simple question:
What is the average size of a Wikipedia article in Spanish?
I need this to calculate the number of articles that can be included
on a CD. The articles will then be compressed with LZMA2 (OpenZIM
format).
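A back-of-the-envelope sketch of the calculation; every figure below is an assumption to be replaced with values measured from a real dump:

<?php
// All figures below are placeholders, not measurements.
$cdBytes          = 700 * 1024 * 1024; // capacity of a standard 700 MB CD
$avgArticleBytes  = 3500;              // assumed average es.wikipedia article size
$compressionRatio = 4.0;               // assumed LZMA2/OpenZIM ratio on wikitext
$bytesPerArticle  = $avgArticleBytes / $compressionRatio;
echo 'Roughly ' . (int)( $cdBytes / $bytesPerArticle ) . " articles per CD\n";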
Brion Vibber wrote:
On Thu, Jun 2, 2011 at 2:20 PM, Roan Kattouw roan.katt...@gmail.com wrote:
On Thu, Jun 2, 2011 at 10:56 PM, Brion Vibber br...@pobox.com wrote:
Is there a way we can narrow down this security check so it doesn't keep
breaking API requests, action=raw requests, and
Max Semenik wrote:
I would wholeheartedly support the idea of making tests a mandatory
accompaniment to code. One recent example: we made the latest security
releases without tests.
Actually, we have made security releases that *broke* tests
(i.e. the tests got outdated).
This is bad not only
Welcome Kevin,
I tried to contact you a few days ago, but was unable to.
Please create a wiki account (with email notifications enabled) and
commit your USERINFO.
Aran Dunkley wrote:
Hello,
I have a lot of deleted pages in my wiki and was wondering how to free
up the database by getting rid of them completely. I don't want to lose
the history of the pages that aren't deleted, though. Is there an
extension for this?
Thanks,
Aran
There's a script
Jeroen De Dauw wrote:
Hey,
I have some tag extension in which one of the parameters is a SPARQL query.
My code is constructing this query correctly (confirmed by var_dump'ing the
relevant vars), but when I put the tag in a wiki page, several spaces are
getting replaced by &#160;, which breaks
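One hedged guess at a workaround for this symptom, with illustrative names rather than the extension's real API: normalize non-breaking spaces back to plain spaces before running the query:

<?php
// Sketch of a tag hook that normalizes its input before running the query.
// 'renderSparqlTag' and 'runSparqlQuery' are hypothetical names.
function renderSparqlTag( $input, array $args, $parser, $frame ) {
    // Turn &#160; entities back into characters, then NBSP into plain spaces.
    $query = html_entity_decode( $input, ENT_QUOTES, 'UTF-8' );
    $query = str_replace( "\xC2\xA0", ' ', $query ); // UTF-8 non-breaking space
    return runSparqlQuery( $query ); // hypothetical helper
}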
Chad wrote:
On Sun, May 29, 2011 at 6:41 PM, Ryan Lane rlan...@gmail.com wrote:
I'm not sure we have standards on hook names (someone please correct
me, if this isn't the case).
In core, there certainly is no naming convention for hooks. Some are verb
phrases, some are noun phrases. I
jida...@jidanni.org wrote:
My script to show me the diffs,
svn diff -r BASE:HEAD RELEASE-NOTES|wdiff -d -3
I suppose will now have to be
for i in $(ls -r RELEASE-NOTES-*|sed 2q)
do
echo "$i diff:"
svn diff -r BASE:HEAD "$i"|wdiff -d -3
done
K. Peachey wrote:
Easily, no. There was some discussion recently: for the ones that
needed it, the plan was to create the new one, import the old one into
it, and then set up pointers so the old ones were redirected.
I find that harder than a rename...
Thomas Gries wrote:
Hello,
is there a way, similar to my Wikimania 2005 proposal of Getlets [1][2],
- to transclude the text content of a file denoted by a (if possible
restricted to trusted) URL
- if possible, after sanitizing the content of the file
on MediaWiki pages?
Purpose:
To avoid
Not that you are biased... :)
I think you want to attach an ACL to a namespace, not to an interwiki,
which is just a pointer to a page living in some different place.
MZMcBride:
As long as people stop resolving old bugs as wontfix simply because
they're old. I went through and re-opened quite a few bugs that were
improperly marked as resolved over the weekend.
There were *many* bugs closed that weekend, which included duplicating
to similar bugs and also
Indeed!
Until bug 28984 is resolved we will need to continue meeting
every year to repeat this ;)
(I apologise for breaking the thread)
The reason that I went the route of creating an extension vs a skin was that
I wanted the most flexibility in adapting the content for mobile device
rendering. There are a number of sections that need to be removed from the
final output in order to render the
On Fri, May 13, 2011 at 7:32 PM, Brion Vibber br...@pobox.com wrote:
On Fri, May 13, 2011 at 2:40 PM, Patrick Reilly prei...@wikimedia.org wrote:
The reason that I went the route of creating an extension vs a skin was
that
I wanted the most flexibility in adapting the content for mobile device
The problem is that we want a CVS-like interface, with per-article
versioning. Only a few operations would be cross-article (renaming,
moving sections to a different article...) and we don't have a way to
express transactions anyway. There's no global status of the code to
keep. The articles
I have gone ahead and renamed them in r87633
Anthony wrote:
Granted, that's only half the problem. The other (and much more
difficult) problem is how to convert a *set* of pages (templates and
whatnot) into a single chunk of wikitext, which can then be fed into
the wikitext to HTML parser. But even without that part it would
still be
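To make the shape of that problem concrete, a deliberately naive sketch (hypothetical code, not MediaWiki's): recursively substitute {{Template}} calls with the template page's text, ignoring parameters, parser functions, and the subtler cases:

<?php
// $pages maps titles (including the 'Template:' prefix) to their wikitext.
// This is only the outline of the problem, not a correct expander.
function flattenWikitext( array $pages, $title, array $seen = array() ) {
    return preg_replace_callback(
        '/\{\{([^}|]+)\}\}/',
        function ( $m ) use ( $pages, $seen ) {
            $t = 'Template:' . trim( $m[1] );
            if ( isset( $seen[$t] ) || !isset( $pages[$t] ) ) {
                return $m[0]; // loop or missing template: leave untouched
            }
            $seen[$t] = true;
            return flattenWikitext( $pages, $t, $seen );
        },
        $pages[$title]
    );
}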
Ryan Lane wrote:
I'd also love to know how it keeps getting unsubscribed. This is the
third time...
You subscribe it.
The mailing list sends emails to repor...@kaulen.wikimedia.org
Result of doing that?
Delivery to the following recipient has been delayed:
repor...@kaulen.wikimedia.org
I'm fine with planning to have a normal, small release. But if code
development leads to e.g. a parser rewrite, that's fine too. It can get
in, or be reverted for the branch if too close to the branch point.
As long as it gets reviewed in time, it shouldn't be a problem. (We are
going to get everything
Russell N. Nelson - rnnelson wrote:
Maybe there's a better tool to tell you what function is
defined in what class in PHP, but I couldn't find one in
the time it would take me to write it, so I wrote it.
It's not even a screenful. Give it the class definitions,
in class hierarchy order, on the
Paul Houle wrote:
Note that there is a PHP tokenizer built into PHP which makes it
straightforward to develop tools like this in PHP:
http://php.net/manual/en/book.tokenizer.php
A practical example can be found here:
http://gen5.info/q/2009/01/09/an-awesome-autoloader-for-php
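A small, hedged example of the kind of tool both messages describe, built on token_get_all(); it handles flat class definitions only and ignores inheritance:

<?php
// Usage: php classmap.php SomeFile.php
$tokens = token_get_all( file_get_contents( $argv[1] ) );
$class = '(no class)';
foreach ( $tokens as $i => $tok ) {
    // Skip punctuation tokens and declarations with no name token after them.
    if ( !is_array( $tok ) || !isset( $tokens[$i + 2] ) || !is_array( $tokens[$i + 2] ) ) {
        continue;
    }
    if ( $tok[0] === T_CLASS ) {
        $class = $tokens[$i + 2][1]; // the name follows 'class' plus whitespace
    } elseif ( $tok[0] === T_FUNCTION ) {
        echo $class . '::' . $tokens[$i + 2][1] . "\n";
    }
}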
Magnus Manske wrote:
So, why not use my WYSIFTW approach? It will only parse the parts of
the wikitext that it can turn back, edited or unedited, into wikitext,
unaltered (including whitespace) if not manually changed. Some parts
may therefore stay as wikitext, but it's very rare (except
Brion Vibber wrote:
A bigger deal will probably be actually changing structures that don't
render consistently, and that'll depend on how brave we are changing nested
template table structures to fit a hierarchical document model.
We've basically got two levels of stuff:
* parsing wiki
Brion Vibber wrote:
Looks like just bad patch reverts that removed the file contents but didn't
actually delete the files.
-- brion
Gone in r87306.
Brion Vibber wrote:
{{Open template}} text {{Close template}} structures are IMHO a big
problem for any WYSIWYG editor.
But there's no way they are going away.
If we determine they have to go, then we can devise ways to find and migrate
them -- it'll be a process that takes time and a lot
Johannes Weberhofer wrote:
Dear all,
there are two empty files in the distribution. Are they of any use?
/maintenance/archives/patch-page_no_title_convert.sql
/maintenance/archives/patch-image_reditects.sql
They aren't referenced by other files, so they seem safe to delete.
Brion, any reason to
Johannes Weberhofer wrote:
Dear all!
I'm currently packaging MediaWiki (for openSUSE), and have a question related
to includes/zhtable.
I think this part can be stored in a separate package, as many systems do
not require Chinese translations. Making ZhConversion.php works nicely,
jida...@jidanni.org wrote:
http://www.mediawiki.org/wiki/Compatibility#CSS
MediaWiki is compatible with user agents which do not process CSS3
markup. Some additional features are available to browsers which can
process these styles.
I.e., tough luck for browsers that can't.
Happy-melon wrote:
http://www.wefearchange.org/2011/04/release-that-rewrite-on-11.html
MediaWiki 1.18 (or even 1.19) on 11th November? If Mailman can manage it,
why can't we...?
--HM
Seems like a good date for 1.18.
Some bugzilla entries with relevant issues:
You missed https://bugzilla.wikimedia.org/show_bug.cgi?id=20852
Set up internal wikis as HTTPS only, spawned from a previous thread on
this list.
MZMcBride wrote:
Hi.
At some point the LiquidThreads labs wiki disappeared. It used to be located
at http://liquidthreads.labs.wikimedia.org/. Was it replaced by a
prototype wiki or something? It had a lot of discussion on it about issues
with LiquidThreads. It's also linked from bug
Svip wrote:
On 18 April 2011 20:27, MZMcBride z...@mzmcbride.com wrote:
Raul Kern wrote:
how to add facebook like box to chapters mediawiki homepage:
http://et.wikimedia.org
The like button is usually added with an iframe HTML element. I suppose
it would be simple enough to create this
Assuming that there are no destructive bugs in the reviewed code, we
could have en.alpha.wikipedia.org URLs.
Our tests also need to be improved, so that we don't keep hitting the
same boulders.
MediaWiki is currently developed in two branches, with everything going
to trunk, and a smaller number of revisions also getting copied to the
stable branch.
That's also the way we want to continue working, having a trunk and a
stable branch.
Currently we store the release notes in a file called
Dmitriy Sintsov wrote:
If you are still monitoring the list: I probably would try to merge the
XML dumps (only --current should be enough for search), then I'd import
the merged dump into a common wiki. It should not be too hard, barring
performance issues for very large wikis. If your wikis aren't very
Roan Kattouw wrote:
2011/4/14 Platonides platoni...@gmail.com:
Thus, I propose that, from the point we branch 1.18, we keep the stable
branch release notes in trunk, and add trunk release notes in a
different file. trunk and branch RELEASE-NOTES would effectively be the
same file (a revision
Happy-melon wrote:
What I have done is to move the PHP version check from WebStart.php (which
was unparseable since Tim added a try/catch block in r85327) to the entry
points index.php, api.php, load.php. That way, only those files have to be
PHP 4 compatible.
Don't forget about
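As a hedged illustration of the approach (the wording and minimum version below are placeholders, not the actual r85327-era code), an entry point can start with a guard that PHP 4 can still parse, and only then load the PHP 5-only code:

<?php
// This stub must avoid PHP 5-only syntax (e.g. try/catch) so PHP 4 can
// parse it far enough to print a friendly error instead of a parse error.
if ( version_compare( PHP_VERSION, '5.2.3', '<' ) ) {
    echo "MediaWiki requires PHP 5.2.3 or later; you are running " .
        PHP_VERSION . ".\n";
    exit( 1 );
}
require dirname( __FILE__ ) . '/includes/WebStart.php'; // PHP 5 code lives here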
From a practical perspective, many more bugs than are being marked as
normal are actually normal by this standard. Most of the bugs in
our database are probably low priority in the sense that we just
can't get around to fixing all of them. I can see the argument for
making the most common
This is just another instance of
https://bugzilla.wikimedia.org/show_bug.cgi?id=21572
Note that there's JavaScript there for creating curid links, too.
Note that those URLs may have caching problems.
-1 to the base36 thing.
Roan Kattouw wrote:
2011/4/9 Platonides platoni...@gmail.com:
Yes. Calling a template twice will only fetch the text once, and won't
increase the 'used templates' counter...
The preprocessed form of wikitext over a size threshold is cached
serialized (below the threshold it's easier to just reprocess).
To clarify
Daniel Friesen wrote:
I believe we do locally (in-process) cache the preprocessor structure for
pages and templates, so multiple use of the same template won't incur as
much preprocessor work. But, the preprocessor parsing is usually one of the
fastest parts of the whole parse.
I could swear
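A hedged sketch of the threshold behaviour being discussed; the function names are illustrative, not MediaWiki's actual Preprocessor API:

<?php
// Only cache the serialized preprocessor tree when the source is big enough
// that re-parsing would cost more than a cache round trip.
function getPreprocessedTree( $cache, $text ) {
    $threshold = 4096; // assumed cutoff in bytes
    if ( strlen( $text ) < $threshold ) {
        return preprocessToTree( $text ); // hypothetical; cheap to redo
    }
    $key = 'preprocess:' . md5( $text );
    $tree = $cache->get( $key );
    if ( $tree === false ) {
        $tree = preprocessToTree( $text );
        $cache->set( $key, $tree ); // the cache layer serializes the tree
    }
    return $tree;
}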
I also think it's cleaner to make move an action than to edit via a Special page.
Rob Lanphier wrote:
This is an area where the community can help. Of the bugs in that query,
only about half have seen any activity at all this calendar year.
We spend *some* time in the triage meeting with old bugs that are
still a problem, but bugs where no one is even commenting generally
are
Paul Houle wrote:
I did a substantial project that worked from the XML dumps. I
designed a recursive descent parser in C# that, with a few tricks,
almost decodes Wikipedia markup correctly. Getting it right is tricky
for a number of reasons; however, my approach preserved some
Interesting. That would be an -ops project, not a software development one.
It seems a riskier deliverable, although certainly interesting for WMF.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
Strainu wrote:
Hi,
Today, some users on ro.wp reported that the citations used more than
once in the text had changed appearance. Instead of ^a,b, we now
have something like ↑ 34,0 34,1. This was considered disturbing by
some users. Furthermore, the backlinks from the notes section to the
Michael Dale wrote:
Eventually it would be ideal to be able to 'just test your extension'
from the core bootstrapper (i.e. dynamically generate our suite.xml and
namespace the registration of extension tests) ... but for now at least
not having to wait for all the core tests as you write you
I think the source of the problems is the decentralized process that
Wikimedia JavaScript development has followed.
Medium-size wikis will have 2-3 tech people. Small wikis will be lucky
to have one JS-savvy sysop.
Even wikis with no local expert may have just forked the Monobook.js
from their mother project
Ryan Kaldari wrote:
Yeah, the local CSS/JS cruft is definitely a problem. I've tried doing
clean-up on a few wikis, but I usually just get chewed out by the local
admins for not discussing every change in detail (which obviously
doesn't scale for fixing 200+ wikis). I would love to hear
Jamie Morken wrote:
Hi,
Thanks for the info. While I was at it, I did some more checking of the
history dump file sizes and compression ratios (as reported by 7-Zip 9.20):
enwiki-20110115-pages-meta-history1.xml.7z 434.99x compression
enwiki-20110115-pages-meta-history2.xml.7z 289.46x
Tim Starling wrote:
In the last week, I've been reviewing extensions that were written
years ago, and were never properly looked at. I don't think it's
appropriate to measure success in code review solely by the number of
new revisions after the last branch point.
Code review of
Ryan Lane wrote:
I've given extensions, core, and deployment access to a new member of
our ops team: Peter Youngmeister. He should be adding his userinfo as
we speak.
- Ryan Lane
Welcome, Peter!
Roan Kattouw wrote:
2011/3/28 Douglas Gardner douglas.gard...@wikinewsie.org:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Not sure this is the right place, but:
Are the Vector skin versions of the external link icons [1] on Commons
anywhere? I couldn't find them when I looked, and I'm not
Wilfredor wrote:
Dear Platonides,
I'm not sure if that is a good practice. How would the revised
version (id) be identified?
The categories map to page titles, which are equivalent to page ids
(actually, the category table maps to page ids). You would then pass them
to WikiTrust in order to get
Wilfredo Rodriguez wrote:
Good morning.
If many people work on the process of verifying and selecting
articles, what is the proper way to record the CSV list without the
following problems occurring:
1) Repeated entries (someone reviewed the same article because he did
not know that was
Tod wrote:
I'm working with a group that has customized their MediaWiki
installation so that they can create multiple wiki instances, each with
their own database, that have no knowledge of each other's existence.
This was done for security and privacy reasons.
The dumpBackup.php utility
Tim Starling wrote:
I think we should migrate MediaWiki to target HipHop [1] as its
primary high-performance platform. I think we should continue to
support Zend, for the benefit of small installations. But we should
additionally support HipHop, use it on Wikimedia, and optimise our
Trevor Parscal wrote:
On Mar 28, 2011, at 1:31 PM, Rob Lanphier wrote:
You say that as though this were obvious and uncontroversial. The reason
why we've been dancing around this issue is because it is not.
Right now, we have a system whereby junior developers get to commit whatever
they
Bryan Tong Minh wrote:
On Sun, Mar 27, 2011 at 3:33 AM, Platonides wrote:
Come on. It is easy enough to check if your revision is the culprit.
svn up -r 80247
cd tests/phpunit/
make noparser
Which takes approximately one hour to run.
We should fix this, because otherwise nobody is going
Happy-melon wrote:
Platonides platoni...@gmail.com wrote in message
news:imm5c2$rib$1...@dough.gmane.org...
Ilmari Karonen wrote:
I think it might be a good idea to split these two cases into separate
states. My suggestion, off the top of my head, would be to leave
fixme for the latter
Daniel Friesen wrote:
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/80248
The comment gives a Tesla link saying something broke. However, the
Tesla link does not identify that commit as the one guaranteed to have
actually broken the code. The commit was followed up with several
fixmes already
Roan Kattouw wrote:
2011/3/26 Mark A. Hershberger mhershber...@wikimedia.org:
If code is to survive past a week in the repository, it has to be
reviewed.
This is basically what I suggested in the other thread, except I added
a few other conditions that have to be satisfied before we can
K. Peachey wrote:
[excerpt from a commit-count table: 7 │ platonides]
Ilmari Karonen wrote:
This made me realize something that's only tangentially related to the
existing thread, namely that we're currently using the fixme status in
Code Review for two different kinds of commits:
1. commits that are broken and need to be fixed or reverted ASAP, and
2.
Andrew Dunbar wrote:
Just a thought: wouldn't it be easier to generate dumps in parallel if
we did away with the assumption that the dump would be in database
order? The metadata in the dump provides the ordering info for the
people that require it.
Andrew Dunbar (hippietrail)
I don't see
Neil Kandalgaonkar wrote:
I added a comment to the talk page.
http://www.mediawiki.org/wiki/User_talk:Akshay.agarwal
Long story short, we had this discussion in IRC... some people find the
concept of AJAX login really alarming from a security perspective, but I
think there could (COULD)
Wilfredor wrote:
If we could support this on our existing server, it should not be too much
work for us to set it up.
I would like to know the exact format of the ids you need (The order
and the necessary parameters for each line of the csv) and if there is
a system to add
What is the
Aryeh Gregor wrote:
My experience with Mercurial is that if you type the wrong commands,
it likes to destroy data. For instance, when doing an hg up with
conflicts once, it opened up some kind of three-way diff in vim that I
had no idea how to use, and so I exited. This resulted in my
Ariel T. Glenn wrote:
Amusingly, splitting based on some number of articles doesn't really
balance out the pieces, at least for history dumps, after the project
has been around long enough with enough activity. Splitting by number
of revisions is what we really want, and the older pages have
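A sketch of that revision-based split, under an assumed input shape (page id mapped to revision count, in page order): cut a new chunk whenever the accumulated revision count passes a target, rather than after a fixed number of pages:

<?php
// Hypothetical helper, not the dump scripts' real code.
function chunkByRevisions( array $pageRevCounts, $revsPerChunk ) {
    $chunks = array();
    $current = array();
    $revs = 0;
    foreach ( $pageRevCounts as $pageId => $revCount ) {
        $current[] = $pageId;
        $revs += $revCount;
        if ( $revs >= $revsPerChunk ) {
            $chunks[] = $current; // this chunk carries a balanced workload
            $current = array();
            $revs = 0;
        }
    }
    if ( $current ) {
        $chunks[] = $current; // remainder
    }
    return $chunks;
}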
Neil Kandalgaonkar wrote:
What are the security problems with a simple AJAX login implementation
that just POSTs, compared to digest authentication?
With digest authentication you can transmit credentials over unencrypted
HTTP without worrying that someone is capturing your plaintext
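For reference, a minimal sketch of the RFC 2617 digest computation behind that claim (shown without the optional qop/cnonce extension; the values are examples):

<?php
// Example values; the server issues $nonce fresh for each challenge.
$user = 'alice'; $realm = 'wiki'; $password = 'secret';
$method = 'POST'; $uri = '/index.php'; $nonce = 'abc123';
$ha1      = md5( "$user:$realm:$password" ); // long-lived secret hash
$ha2      = md5( "$method:$uri" );           // binds the hash to this request
$response = md5( "$ha1:$nonce:$ha2" );       // the only value sent on the wire
echo $response . "\n";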
Yuvi Panda wrote:
Hi, I'm Yuvi, a student looking forward to working with MediaWiki via
this year's GSoC.
I want to work on something dump-related, and have been bugging
apergos (Ariel) for a while now. One of the things that popped into
my head is moving the dump process to another
Daniel Friesen wrote:
- Brion mentioned there is prior art in hosting large numbers of git
repos. Gitorious' codebase is open source and can be re-used. Wikimedia
could potentially host its own Gitorious for MediaWiki git repos.
I realised today that we are trying to adapt our layout to the
Stephan Gambke wrote:
I work on an extension that used to call parse() directly. Then, after
some advice from MW developers, this was changed to a call to
recursiveTagParse() because parse() should not be called directly.
The only problem is, the method that used to call parse() is used to
populate a
Daniel Friesen wrote:
((And before anyone says anything, there's no way I'm putting private
keys on servers operated by a 3rd party, or doing development of things
on my local machine -- reconfiguring apache and trying to get packaged
apache and mysql to NOT start up on my laptop except
I'd prefer it if those superb review tools were named, instead of vague
references to greener pastures and how wonderful it will be reviewing
code with git.
And no, nobody wants our review paradigm to be 'let's spend several
months on the backlog every time we want to release'. It was just the
best
Joseph Roberts wrote:
Hello!
What is the current consensus on HTML5?
Are we going to fully support it and use as many features as we can, or
are we going to keep just using JavaScript alternatives?
I would assume that we would continue to use JavaScript in the case
that the client does not
Roan Kattouw wrote:
The only thing that may be different, depending on what our workflow
ends up being, is that messages that have been added in some branch
that hasn't been merged to trunk yet will not automatically be picked
up by TWN for translation. This is technically already the case,
Marcin Cieslak wrote:
So having the possibility of a pre-flight test of the translation
(or even watching a demo of the original in action) is something
Selenium could definitely help with. In many cases, translators
do not have permission to experience some interface in the live
environment
Mark Wonsil wrote:
I haven't used git yet but after reading the excellent article that
Rob Lanphier posted (http://hginit.com/00.html), I think I will. That
article also explains why there wouldn't have to be as many updates to
SVN as is done today.
I don't see that conclusion. A DVCS allows
Platonides wrote:
Mark A. Hershberger wrote:
23126 Locked and hidden accounts can unify new local accounts
https://bugzilla.wikimedia.org/23126
Tim says this is apparently a deliberate change, and later
comments leave me confused: should this be closed
Mark A. Hershberger wrote:
23126 Locked and hidden accounts can unify new local accounts
https://bugzilla.wikimedia.org/23126
Tim says this is apparently a deliberate change, and later
comments leave me confused: should this be closed or the fix
Roan Kattouw wrote:
2011/3/16 MZMcBride z...@mzmcbride.com:
I don't remember
off-hand if creating a wiki is something that a normal shell user can do,
so those might be an option for root bugs as well. Just a thought.
Normal shell users can execute all but one of the steps required for
wiki
Uwe Baumbach wrote:
Hello,
Urgent help is needed!
We run our genealogical wiki GenWiki
http://genwiki.genealogy.net
on version 1.14.1.
We planned to update soon to 1.17.1,
so we checked out a copy at http://wiki-test.genealogy.net
Aryeh Gregor wrote:
On Wed, Mar 16, 2011 at 6:38 AM, Roan Kattouw roan.katt...@gmail.com wrote:
Normal shell users can execute all but one of the steps required for
wiki creation: root access is needed to create the DNS entry for the
new subdomain.
Why doesn't Wikimedia just set up a
Marcin Cieslak wrote:
This new layout and system will likely be done using a planned xml/html
based template syntax.
http://www.mediawiki.org/wiki/User:Dantman/Skinning_system/Monobook_template
http://www.mediawiki.org/wiki/User:Dantman/Skinning_system#xml.2Fhtml_template_syntax
Did you
William Allen Simpson wrote:
Whatever happened to Domas' idea that, for things already in the cache and
about to be invalidated, a newly parsed instance be stored in memcache before
invalidating other caches, so that the article is only parsed once?
The mentioned Pool Counter is a daemon which
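A rough sketch of the pattern being discussed, assuming a memcached-style cache interface and hypothetical helpers (this is not PoolCounter's actual protocol): one client reparses while the others serve the stale copy:

<?php
// Hypothetical helper names throughout; only the pattern matters.
function getRenderedPage( $cache, $pageId ) {
    $key = "parsed:$pageId";
    $html = $cache->get( $key );
    if ( $html !== false ) {
        return $html;
    }
    if ( $cache->add( "lock:$key", 1, 30 ) ) { // first client takes the lock
        $html = parseArticle( $pageId );       // expensive parse, done once
        $cache->set( $key, $html );
        $cache->delete( "lock:$key" );
        return $html;
    }
    return $cache->get( "stale:$key" ); // others fall back to the old copy
}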
William Allen Simpson wrote:
While I'm thinking about it: why doesn't universal login work from secure?
After all, the centralauth credentials seem to exist, so the various *wiki
sites should be able to find them automatically?
(...)
But the link from
data sharing issue.
Later information from Platonides leads me to think this is a software
issue that leads to an inconsistent database.
How did transwikified edits not point back to the original site and user?
Because the author field is not prepared for doing so. Thus the
transwikiing just
William Allen Simpson wrote:
On 3/10/11 10:34 PM, William Allen Simpson wrote:
To give a little more detail, logged http (not https) into en: on one
page,
and am showing the Login successful page.
Then, bring up a de: page in a second window -- the de: page has
Anmelden / Benutzerkonto
William Allen Simpson wrote:
And just as an aside, I did find out why fr: stopped working for a while.
The message I left on a Talk was about Fuck You! (the song), and a
censorship bot suspended my login there.
Somebody kindly fixed it. So far, fr: has worked fine since then.
It's funny
Gerard Meijssen wrote:
Hoi,
Would the export to ODF work for languages that are not in the Latin
script?
Thanks,
GerardM
Why wouldn't it? ODF uses UTF-8 internally...
Dear Members,
I am Ramesh, pursuing my PhD at Monash University, Malaysia. My
research is on blog classification using Wikipedia categories.
For my experiment, I use the 12 main categories of Wikipedia.
I want to identify which particular article belongs to which of the
12 main categories.
So I