Re: [Wikitech-l] Revisiting becoming an OpenID Provider

2010-05-28 Thread Daniel Friesen
Ryan Lane wrote:
 On Thu, May 27, 2010 at 7:08 PM, Jon Davis w...@konsoletek.com wrote:
   
 I could see some real use cases for OAuth.  Especially with regards to the
 cases mentioned above.  People could potentially build apps like AWB and
 Huggle using OAuth.  In general I think this would be a cool thing to have
 for all MediaWiki installs.

 As for being an OpenID provider... only one major thought:  Having this
 Foundation be a provider would be a lot of additional server load (It is
 100% non-cacheable) without any benefit to the main goal of providing free
 information.

 

 The biggest immediate benefit to becoming a provider is for
 non-MediaWiki based apps that the foundation uses. If we become a
 provider, our Wordpress, Bugzilla, Ideatorrent, etc. apps don't need
 to have separate username/password databases. As someone mentioned
 earlier, it would be extremely useful for the toolserver.

 Even for third-party applications, if we just provide OAuth, they
 would still need to handle user account databases, and that isn't
 optimal. It is especially less optimal for WMF users, who would need
 to have user accounts in a number of spots, and possibly have to
 remember multiple passwords.

 Respectfully,

 Ryan Lane
   
Are you sure you can't use pure OAuth similarly to the way you can with OpenID?
I know they have their own user management, but Disqus is using OAuth to
turn Twitter accounts into a login.

-- 
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]



Re: [Wikitech-l] Reasonably efficient interwiki transclusion

2010-05-28 Thread Peter17
I have updated my proposal with a fourth version [1].

I am still waiting for comments from Tim Starling. I have contacted
him on IRC about this.

[1] 
http://www.mediawiki.org/wiki/User:Peter17/Reasonably_efficient_interwiki_transclusion#Fourth_version_.28to_be_discussed.29

--
Peter Potrowl
http://www.mediawiki.org/wiki/User:Peter17



Re: [Wikitech-l] Anyone with CSS fu that can help out on Flagged Revs?

2010-05-28 Thread Roan Kattouw
2010/5/28 Rob Lanphier ro...@wikimedia.org:
 ..and there are screenshots of the problem here:
 http://www.pivotaltracker.com/story/show/2937207

For some reason that won't let me click on the screenshots. It shows a
hand when hovering over them, but clicking simply doesn't do anything
(using FF 3.6 on Linux). Could you just put these screenshots
somewhere else, e.g. on a wiki?

Roan Kattouw (Catrope)



Re: [Wikitech-l] Anyone with CSS fu that can help out on Flagged Revs?

2010-05-28 Thread Roan Kattouw
2010/5/28 Rob Lanphier ro...@wikimedia.org:
 Is there anyone here who can look at the CSS and offer up a better version
 of what's there?

I think I've discovered the main problem.

The div with the lock icon (div#mw-fr-revisiontag) is positioned with
position: absolute; . This doesn't really mean it's going to be
positioned at absolute coordinates; rather, it'll be positioned
relative to the closest parent element that has a position: property
set. In Monobook, this is div#content, whereas in Vector it is
div#bodyContent (a child of div#content). Because there are children
of div#content preceding div#bodyContent (div#siteNotice and
h1#firstHeading), the two start at different heights, so the same CSS
(something like position: absolute; top: -2.5em;) will cause the icon
to be positioned differently in each skin.

The way I see this (and, mind you, I'm not a CSS ninja at all, just
someone with a more-than-basic knowledge of CSS), there are two ways
to solve this (assuming you want the lock icon to be next to the page
title, above the horizontal line):

1) Use different positioning offsets for each skin. This is ugly, but
will probably work. For Monobook, top: 3.6em; seems to work for me,
whereas on Vector, top: -2.5em; produces the same effect. You can do
skin-specific CSS with rules like body.skin-monobook
.flaggedrevs_short { top: 3.6em; } (see the sketch after this list),
but again, you don't want to be doing this (there's another half dozen
skins out there) and I have no idea whether these offsets will work
well in other browsers.

2) Move div#mw-fr-revisiontag to just before or just after
h1#firstHeading and position it right from there. That way you're sure
the closest positioned parent will be div#content, but it's probably
even easier to position it relatively.
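
For completeness, the option 1 rules would boil down to something like
this (the offsets are just the values that happened to work for me, so
other skins and browsers may need different ones):

  /* Option 1 sketch: per-skin offsets for the lock icon container.
     Offsets are untested guesses beyond Monobook/Vector on my setup. */
  body.skin-monobook .flaggedrevs_short { top: 3.6em; }
  body.skin-vector .flaggedrevs_short { top: -2.5em; }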

Roan Kattouw (Catrope)



[Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread bawolff
Hi all,

For those who don't know me, I'm one of the GSoC students this year.
My mentor is ^demon, and my project is to enhance support for metadata
in uploaded files. Similar to the recent thread on interwiki
transclusions, I thought I'd ask for comments about what I propose
to do.

Currently metadata is stored in the img_metadata field of the image
table as a serialized PHP array. While this works fine for the primary
use case - listing the metadata in a little box on the image
description page - it's not very flexible. It's impossible to do
queries like "get a list of images with some specific metadata
property equal to some specific value" or "get a list of images
ordered by what software edited them".

So as part of my project I would like to move the metadata to its own
table. However I think the structure of the table will need to be a
little more complicated than just page id, name, value triples,
since ideally it would be able to store XMP metadata, which can
contain nested structures. XMP metadata is pretty much the most
complex metadata format currently popular (for metadata stored inside
images anyways), and can store pretty much all other types of
metadata. It's also the only format that can store multi-lingual
content, which is a definite plus as those Commons folks love their
languages. Thus I think it would be wise to make the table store
information in a manner that is rather close to the XMP data model.

So basically my proposed metadata table looks like:

*meta_id - primary key, auto-incrementing integer
*meta_page - foreign key for page_id - what image this is for
*meta_type - type of entry - simple value or some sort of compound
structure. XMP supports ordered/unordered lists, associative-array
type structures, and alternate arrays (things like arrays listing the
value of the property in different languages).
*meta_schema - XMP uses different namespaces to prevent name
collisions. EXIF properties have their own namespace, IPTC properties
have their own namespace, etc.
*meta_name - the name of the property
*meta_value - the value of the property (or null for some compound
things, see below)
*meta_ref - a reference to the meta_id of a different row for nested
structures, or null if not applicable (or 0 perhaps)
*meta_qualifies - boolean to denote if this property is a qualifier
(in XMP there are normal properties and qualifiers)

(see http://www.mediawiki.org/wiki/User:Bawolff/metadata_table for a
longer explanation of the table structure; a rough SQL sketch follows
below)
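
Concretely, that might translate into SQL along these lines (just a
sketch; the exact column types and indexes are my first guess, not a
settled part of the proposal):

  -- sketch only; types and indexes up for discussion
  CREATE TABLE meta (
    meta_id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    meta_page      INT UNSIGNED NOT NULL,   -- page_id of the image
    meta_type      TINYINT NOT NULL,        -- simple value, seq, bag, alt, ...
    meta_schema    VARBINARY(255) NOT NULL, -- namespace: exif, IPTC, ...
    meta_name      VARBINARY(255) NOT NULL, -- property name
    meta_value     MEDIUMBLOB NULL,         -- null for some compound things
    meta_ref       INT UNSIGNED NULL,       -- parent row's meta_id, if nested
    meta_qualifies TINYINT(1) NOT NULL DEFAULT 0
  );
  -- prefix index so value lookups don't scan the whole table
  CREATE INDEX meta_name_value ON meta (meta_schema, meta_name, meta_value(32));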

Now, before everyone says "eww, nested structures in a db are
inefficient" and whatnot, I don't think it's that bad (however, I'm new
to the whole scalability thing, so hopefully someone more
knowledgeable than me will confirm or deny that).

The XMP specification specifically says that there is no artificial
limit on nesting depth; however, in general practice it's not nested
very deeply. Furthermore, in most cases the tree structure can be
safely ignored. Consider:
*Use-case 1 (primary use case): displaying a metadata info box on an
image page. Most of the time that'd be translating specific names and
values into html table cells. The tree structure is totally
unnecessary. For example, the exif property DateTimeOriginal can only
appear once per image (also it can only appear at the root of the tree
structure, but that's beside the point). There is no need to reconstruct
the tree; just look through all the props for the one you need. If the
tree structure is important, it can be reconstructed on the php side,
and would typically be only the part of the tree that is relevant, not
the entire nested structure.
*Use-case 2 (secondary use case): get a list of images ordered by some
property starting at foo, or get a list of images where property bar =
baz. In this case it's a simple select (sketched below). It does not
matter where in the tree structure the property is.
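
For instance (sketched against the table above; the property names and
values here are just examples, not requirements):

  -- "where property bar = baz": which images were edited with Hugin?
  SELECT meta_page FROM meta
  WHERE meta_schema = 'exif' AND meta_name = 'Software'
    AND meta_value = 'Hugin';

  -- "ordered by some property starting at foo"
  SELECT meta_page, meta_value FROM meta
  WHERE meta_schema = 'exif' AND meta_name = 'DateTimeOriginal'
    AND meta_value >= '2010:01:01 00:00:00'
  ORDER BY meta_value;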

Thus, all the nestedness of XMP is preserved (so we could re-output it
into XMP form if we so desired), and there is no evil joining of the
metadata table with itself over and over again (or at all); from what
I understand, it is the self-joins needed to reconstruct nested
structures that make them inefficient in databases.

I also think this schema would be future-proof, because it can store
pretty much all metadata we can think of. We can also extend it with
custom properties we make up that are guaranteed not to conflict with
anything (the X in XMP is for extensible).

As a side-note, based on my rather informal survey of commons (aka the
couple people who happened to be on #wikimedia-commons at that moment)
another use-case people think would be cool and useful is metadata
intersections, and metadata-category intersections. I'm not planning
to do this as part of my project, as I believe that would have
performance issues. However doing a metadata table like this does
leave the possibility open for people to do such intersection things
on the toolserver or in a DPL-like extension.

I'd love to get some feedback on this. Is this a reasonable approach?

Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread church.of.emacs.ml
Hi bawolff,

thanks for your work.
I'm not very happy about the name "metadata" for the table. As far as I
understand it, this is about file metadata, while "metadata" suggests it
contains information on pages (e.g. statistics).
Please consider using a name that contains 'file', e.g. file_metadata.

Thanks,
Church of emacs




Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread bawolff
On Fri, May 28, 2010 at 10:12 AM, church.of.emacs.ml
church.of.emacs...@googlemail.com wrote:
 Hi bawolff,

 thanks for your work.
 I'm not very happy about the name "metadata" for the table. As far as I
 understand it, this is about file metadata, while "metadata" suggests it
 contains information on pages (e.g. statistics).
 Please consider using a name that contains 'file', e.g. file_metadata.

 Thanks,
 Church of emacs



Hi.

Thanks for your response. You make a very good point. Now that you
mention it, I can very easily see that being confusing. I definitely
agree that either file_metadata or image_metadata would be better.
(file_metadata would be good because the table contains metadata about
files that aren't images, and is consistent with the renaming of the
image namespace to file; but image_metadata is more consistent with the
db table naming scheme, as the other table is the image table. I guess
in the end it doesn't really matter either way, as long as it's clear
the table is about uploaded media.)

cheers,
-bawolff



Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread Markus Krötzsch
Hi Bawolff,

interesting project! I am currently preparing a light version of SMW that 
does something very similar, but using wiki-defined properties for adding 
metadata to normal pages (in essence, SMW is an extension to store and 
retrieve page metadata for properties defined in the wiki -- like XMP for MW 
pages; though our data model is not quite as sophisticated ;-).

The use cases for this light version are just what you describe: simple 
retrieval (select) and basic inverse searches. The idea is thus to have a 
solid foundation for editing and viewing data, so that more complex functions 
like category intersections or arbitrary metadata conjunctive queries would be 
done on external servers based on some data dump.

It would be great if the table you design could be used for such metadata as 
well. As you say, XMP already requires extensibility by design, so it might 
not be too much work to achieve this. SMW properties are usually identified by 
pages in the wiki (like categories), so page titles can be used to refer to 
them. This just requires that the meta_name field is long enough to hold MW 
page title names. Your meta_schema could be used to separate wiki properties 
from other XMP properties. SMW Light does not require nested structures, but 
they could be interesting for possible extensions (the full SMW does support 
one-level of nesting for making compound values).

Two things about your design I did not completely understand (maybe just 
because I don't know much about XMP):

(1) You use mediumblob for values. This excludes range searches for numerical 
image properties ("Show all images of height 1000px or more"), which do not 
seem to be overly costly if a suitable schema were used. If XMP has a typing 
scheme for property values anyway, then I guess one could find the numbers and 
simply put them in a table where the value field is a number. Is this use case 
out of scope for you, or do you think the cost of reading from two tables is 
too high? One could also have an optional helper field meta_numvalue used for 
sorting/range-SELECT when it is known from the input that the values being 
searched for are numbers (roughly sketched below).

(2) Each row in your table specifies property (name and schema), type, and the 
additional meta_qualifies. Does this mean that one XMP property can have 
values of many different types and with different flags for meta_qualifies? 
Otherwise it seems like a lot of redundant data. Also, one could put stuff 
like type and qualifies into the mediumblob value field if they are closely 
tied together (I guess, when searching for some value, you implicitly specify 
what type the data you search for has, so it is not problematic to search for 
the value + type data at once). Maybe such considerations could simplify the 
table layout, and also make it less specific to XMP.
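
To illustrate (1), the helper field could look roughly like this (the
column type and the EXIF property name are just for the sake of the
example):

  ALTER TABLE meta ADD COLUMN meta_numvalue DOUBLE NULL;
  ALTER TABLE meta ADD INDEX meta_num (meta_schema, meta_name, meta_numvalue);

  -- "Show all images of height 1000px or more"
  SELECT meta_page FROM meta
  WHERE meta_schema = 'exif' AND meta_name = 'ImageLength'
    AND meta_numvalue >= 1000;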

But overall, I am quite excited to see this project progressing. Maybe we 
could have some more alignment between the projects later on (how about 
combining image metadata and custom wiki metadata about image pages in 
queries? :-) but for GSoC you should definitely focus on your core goals and 
solve this task as well as possible.

Best regards,

Markus


On Freitag, 28. Mai 2010, bawolff wrote:
 Hi all,
 
 For those who don't know me, I'm one of the GSOC students this year.
 My mentor is ^demon, and my project is to enhance support for metadata
 in uploaded files. Similar to the recent thread on interwiki
 transclusions, I'd thought I'd ask for comments about what I propose
 to do.
 
 Currently metadata is stored in img_metadata field of the image table
 as a serialized php array. Well this works fine for the primary use
 case - listing the metadata in a little box on the image description
 page, its not very flexible. Its impossible to do queries like get a
 list of images with some specific metadata property equal to some
 specific value, or get a list of images ordered by what software
 edited them.
 
 So as part of my project I would like to move the metadata to its own
 table. However I think the structure of the table will need to be a
 little more complicated then just page id, name, value triples,
 since ideally it would be able to store XMP metadata, which can
 contain nested structures. XMP metadata is pretty much the most
 complex metadata format currently popular (for metadata stored inside
 images anyways), and can store pretty much all other types of
 metadata. Its also the only format that can store multi-lingual
 content, which is a definite plus as those commons folks love their
 languages. Thus I think it would be wise to make the table store
 information in a manner that is rather close to the XMP data model.
 
 So basically my proposed metadata table looks like:
 
 *meta_id - primary key, auto-incrementing integer
 *meta_page - foreign key for page_id - what image is this for
 *meta_type - type of entry - simple value or some sort of compound
 structure. XMP supports ordered/unordered 

Re: [Wikitech-l] Revisiting becoming an OpenID Provider

2010-05-28 Thread Aryeh Gregor
On Thu, May 27, 2010 at 8:08 PM, Jon Davis w...@konsoletek.com wrote:
 As for being an OpenID provider... only one major thought:  Having this
 Foundation be a provider would be a lot of additional server load (It is
 100% non-cacheable) without any benefit to the main goal of providing free
 information.

I imagine the load wouldn't be a big deal.  An OpenID server is pretty
simple, no?

On Thu, May 27, 2010 at 10:48 PM, Ryan Lane rlan...@gmail.com wrote:
 The biggest immediate benefit to becoming a provider is for
 non-MediaWiki based apps that the foundation uses. If we become a
 provider, our Wordpress, Bugzilla, Ideatorrent, etc. apps don't need
 to have separate username/password databases.

Assuming all of these actually support OpenID as consumers, without
annoying limitations.  Do they?


Re: [Wikitech-l] Revisiting becoming an OpenID Provider

2010-05-28 Thread Robb Shecter


  I imagine the load wouldn't be a big deal.  An OpenID server is pretty
  simple, no?
 

 Yeah. I couldn't imagine it adding much load.


I've done several OpenID client implementations, so I've watched the
conversation with the server; it seems like there's no overhead at all
beyond a normal login sequence. So in a sense, that's where the
overhead is: additional requests to the login pages and scripts,
corresponding to new traffic from the new OpenID client apps.


Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread Michael Dale
More important than file_metadata and page asset metadata sharing the 
same db table backend is that you can query/export all the properties 
in the same way.

Within SMW you already have some special properties like pagelinks, 
langlinks, category properties etc., that are not stored the same way 
as the other SMW page properties ... The SMW system should namespace 
all these file_metadata properties along with all the other structured 
data available and enable universal querying / RDF exporting of all the 
structured wiki data. This way file_metadata would just be one more 
special data type with its own independent tables. ...

SMW should abstract the data store so it works with the existing 
structured tables. I know this was already done for categories, correct?  
Was enabling this for all the other links and usage tables explored?

This also makes sense from an architecture perspective, where 
file_metadata is tied to the file asset and SMW properties are tied to 
the asset's wiki description page. This way you know you don't have to 
think about that subset of metadata properties on page updates, since 
they are tied to the file asset, not to the wiki page properties driven 
by structured user input. Likewise, uploading a new version of the file 
would not touch the page data tables.

--michael

Markus Krötzsch wrote:
 Hi Bawolff,

 interesting project! I am currently preparing a light version of SMW that 
 does something very similar, but using wiki-defined properties for adding 
 metadata to normal pages (in essence, SMW is an extension to store and 
 retrieve page metadata for properties defined in the wiki -- like XMP for MW 
 pages; though our data model is not quite as sophisticated ;-).

 The use cases for this light version are just what you describe: simple 
 retrieval (select) and basic inverse searches. The idea is to thus have a 
 solid foundation for editing and viewing data, so that more complex functions 
 like category intersections or arbitrary metadata conjunctive queries would 
 be 
 done on external servers based on some data dump.

 It would be great if the table you design could be used for such metadata as 
 well. As you say, XMP already requires extensibility by design, so it might 
 not be too much work to achieve this. SMW properties are usually identified 
 by 
 pages in the wiki (like categories), so page titles can be used to refer to 
 them. This just requires that the meta_name field is long enough to hold MW 
 page title names. Your meta_schema could be used to separate wiki properties 
 from other XMP properties. SMW Light does not require nested structures, but 
 they could be interesting for possible extensions (the full SMW does support 
 one-level of nesting for making compound values).

 Two things about your design I did not completely understand (maybe just 
 because I don't know much about XMP):

 (1) You use mediumblob for values. This excludes range searches for numerical 
 image properties (Show all images of height 1000px or more) which do not 
 seem to be overly costly if a suitable schema were used. If XMP has a typing 
 scheme for property values anyway, then I guess one could find the numbers 
 and 
 simply put them in a table where the value field is a number. Is this use 
 case 
 out of scope for you, or do you think the cost of reading from two tables too 
 high? One could also have an optional helper field meta_numvalue used for 
 sorting/range-SELECT when it is known from the input that the values that are 
 searched for are numbers.

 (2) Each row in your table specifies property (name and schema), type, and 
 the 
 additional meta_qualifies. Does this mean that one XMP property can have 
 values of many different types and with different flags for meta_qualifies? 
 Otherwise it seems like a lot of redundant data. Also, one could put stuff 
 like type and qualifies into the mediumblob value field if they are closely 
 tied together (I guess, when searching for some value, you implicitly specify 
 what type the data you search for has, so it is not problematic to search for 
 the value + type data at once). Maybe such considerations could simplify the 
 table layout, and also make it less specific to XMP.

 But overall, I am quite excited to see this project progressing. Maybe we 
 could have some more alignment between the projects later on (How about 
 combining image metadata and custom wiki metadata about image pages in 
 queries? :-) but for GSoC you should definitely focus on your core goals and 
 solve this task as good as possible.

 Best regards,

 Markus


 On Freitag, 28. Mai 2010, bawolff wrote:
   
 Hi all,

 For those who don't know me, I'm one of the GSOC students this year.
 My mentor is ^demon, and my project is to enhance support for metadata
 in uploaded files. Similar to the recent thread on interwiki
 transclusions, I'd thought I'd ask for comments about what I propose
 to do.

 Currently metadata is stored in 

[Wikitech-l] Ideatorrent

2010-05-28 Thread Ryan Lane
A few months ago I created an Ideatorrent site at the request of the
Wikipedia Usability Initiative team. We wanted to have integrated
authentication with the rest of Wikimedia before promoting it, but
that will likely be a while, and we think Ideatorrent could be useful
to the community. For now, you'll need to create an account. So,
here's the link:

http://prototype.wikimedia.org/en-idea/

I thought I'd start it off by adding an initial idea:

http://prototype.wikimedia.org/en-idea/ideatorrent/idea/4/

1-3 were tests. As of right now I am the only moderator/admin. If
anyone has ideas on how we should handle moderation and/or admin
rights, please let me know.

Respectfully,

Ryan Lane



Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread Markus Krötzsch
(This gets a little bit off the topic, but it should still be helpful for the 
current discussion. But if we want to discuss a more general data management 
architecture for MW, then it might be sensible to make a new thread ;-)

On Freitag, 28. Mai 2010, Michael Dale wrote:
 More important than file_metadata and page asset metadata working with
 the same db table backed, its important that you can query export all
 the properties in the same way.
 
 Within SMW you already have some special properties like pagelinks,
 langlinks, category properties etc, that are not stored the same as the
 other SMW page properties ...  The SMW system should name-space all
 these file_metadata properties along with all the other structured data
 available and enable universal querying / RDF exporting all the
 structured wiki data. This way file_metadata would just be one more
 special data type with its own independent tables. ...
 
 SMW should abstract the data store so it works with the existing
 structured tables. I know this was already done for categories correct?

More recent versions of SMW actually no longer use MW's category table for 
this, mostly to improve query performance.
[In a nutshell: SMW properties can refer to non-existing pages, and the full 
version of SMW therefore has its own independent page id management (because 
we want to use numerical IDs for all pages that are used as property values, 
whether or not they exist). Using IDs everywhere improves our query 
performance and reduces SQL query size, but it creates a barrier for including 
MW table data, since more joins would be needed to translate between IDs. This 
is one reason SMW Light will not support queries: it uses a much simpler DB 
layout and less code, but the resulting DB is not as suitable for querying.]

 Was enabling this for all the other links and usage tables explored?

Having a unified view of the variety of MediaWiki data (page metadata, user-
edited content data, file metadata, ...) would of course be great. But 
accomplishing this would require a more extensive effort than our little SMW 
extension. What SMW tries to provide for now is just a way of storing user-
edited data in a wiki (and also of displaying/exporting it).

Of course SMW already has a PHP abstraction for handling the property-value 
pairs that were added to some page, and this abstraction layer completely 
hides the underlying DB tables. This allows us to make more data accessible 
even if it is in other tables, and even to change the DB layout of our custom 
tables if required. You are right that such an abstraction could be extended 
to cover more of the native data of MediaWiki, so that data dumps can include 
it as well.

I think this idea is realistic, and I hope that SMW helps to accomplish this 
at some point in the future. Yet, this is not a small endeavour, given that 
not even the most basic data management features are deployed on the big 
Wikimedia projects today. To get there, we first need a code review regarding 
security and performance, and so for the moment we are pressed to reduce 
features and to shrink our code base. This is why we are currently building 
the Light version that only covers data input (without link syntax 
extensions), storage, look-up, and basic export/dump. For this step, I really 
think that sharing a data table with the EXIF extension would make sense, 
since the data looks very similar and a more complex DB layout is not 
necessary for the initial goals. We can always consider using more tables if 
the need arises.

But I would be very happy if there were more people who want to make concrete 
progress toward the goal you describe. Meanwhile, we are planning in smaller 
steps ;-)

 
 This also make sense from an architecture perspective, where
 file_metadata is tied to the file asset and SMW properties are tied to
 the asset wiki description page.  This way you know you don't have to
 think about that subset of metadata properties on page updates since
 they are tied to the file asset not the wiki page propriety driven from
 structured user input. Likewise uploading a new version of the file
 would not touch the page data tables.

Right, it might be useful to distinguish the internal handles (and external 
URIs) of the Image page and of the image file. But having a dedicated 
meta_schema value for user-edited properties of the page might suffice to 
accomplish this on the DB level. I am fairly agnostic about the details, but 
I am inclined to hold off on a more sophisticated DB layout until we have 
some usage statistics from the initial deployment to guide us.

-- Markus


 
 Markus Krötzsch wrote:
  Hi Bawolff,
 
  interesting project! I am currently preparing a light version of SMW
  that does something very similar, but using wiki-defined properties for
  adding metadata to normal pages (in essence, SMW is an extension to store
  and retrieve page metadata for properties defined in the wiki -- like XMP
  for MW 

Re: [Wikitech-l] [gsoc] splitting the img_metadata field into a new table

2010-05-28 Thread Neil Kandalgaonkar
On 05/28/2010 08:03 AM, bawolff wrote:
 Hi all,
 
 For those who don't know me, I'm one of the GSOC students this year.
 My mentor is ^demon, and my project is to enhance support for metadata
 in uploaded files. Similar to the recent thread on interwiki
 transclusions, I'd thought I'd ask for comments about what I propose
 to do.

Excellent! We're glad to have you on board. (FWIW I'm working on
Multimedia Usability, so I'll be watching what you come up with closely).

 So as part of my project I would like to move the metadata to its own
 table.

Great, although perhaps we could consider other things too (see below).

 ideally it would be able to store XMP metadata, which can
 contain nested structures.

 Now, before everyone says eww nested structures in a db are
 inefficient and what not, I don't think its that bad (however I'm new
 to the whole scalability thing, so hopefully someone more
 knowledgeable than me will confirm or deny that).

Okay, I just wrote a little novel here, but please take it as just
opening a discussion. I think you should try for a simpler design, but
I'm open to discussion.

I'm familiar with how MySQL scales (particularly for large image
collections). Commons has a respectable collection of 6.6 million
media files, but just to put that into perspective, Facebook gets that
many in just a few hours. If we're successful in improving Commons'
usability we'll probably get some multiple of our current intake rate.
So we have to plan for hundreds of millions of media files, at least.

POTENTIAL ISSUES WITH TREE STRUCTURES

Tree structures in MySQL can be deceptively fast, especially in
single-machine tests, but they tend to be a nightmare in production
environments. Turning what could be one query into eight or nine isn't
so bad on a single machine, but consider what happens when the database
and web server are relatively far apart, or loaded, and thus have high
latency.

Also, due to its crappy locking, MySQL sucks at keeping tree structures
consistent. If we were doing this on Oracle it would be a different
story -- they have some fancy features that make trees easy -- but we're
on MySQL.

The most scalable architectures use MySQL's strengths. MySQL is weak at
storing trees. It's good at querying really simple, flat schemas.

BLOBS OF SERIALIZED PHP ARE GOOD

You should not be afraid of storing (some) data as serialized PHP,
*especially* if it's a complex data structure. If the database doesn't
need to query or index on a particular field, then it's a huge win NOT
to parse it out into columns and reassemble it into PHP data structures
on every access.

GO FOR MEANINGFUL DATA, NOT DATA PRESERVATION

Okay onto the next topic -- how you want to parse XMP out into a flat
structure, with links between them. I think you were clever in how you
tried to make the cost of storing the tree relatively minimal, but I
just question whether it's necessary to store it at all, and whether
this meets our needs.

It seems to me (correct me if I'm wrong) that your structure is two
steps beyond id-key-val in abstractness: it's id-schema-key-type-val. So
the meaning of keys depends on the schema? So we might have a set of
rows like this, for two images, id 1234 and 5678:

id: 1234
schema: AcmeMetadata
key: cameraType
type: string
val: canon-digital-rebel-xt

id: 1234
schema: AcmeMetadata
key: resolution
type: string
val: 1600x1200

id: 5678
schema: SomeOtherSchema
key: recordingDevice
type: string
val: Canon Digital Rebel XT

id: 5678
schema: SomeOtherSchema
key: width
type: int
val: 1600

id:5678
schema: SomeOtherSchema
key: height
type: int
val: 1200

The point is that between schemas, we'd use different keys and values to
represent the same thing. While you've done a good job of preserving the
exact data we received, this makes for an impossibly complicated query
if we want to learn anything (see the sketch below).
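
For instance, against the rows above, even "which images came from a
Canon Digital Rebel XT?" turns into something like this (a sketch using
the example column names and a hypothetical metadata table; schema and
key are backquoted because they're reserved words):

  SELECT DISTINCT id FROM metadata
  WHERE (`schema` = 'AcmeMetadata' AND `key` = 'cameraType'
          AND val = 'canon-digital-rebel-xt')
     OR (`schema` = 'SomeOtherSchema' AND `key` = 'recordingDevice'
          AND val = 'Canon Digital Rebel XT');
  -- ...and that's with only two schemas known in advance.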

When you find yourself defining a 'type' column you should be wary,
because you're engaging in the Inner-Platform Effect. MySQL has types
already.

It seems to me that we have no requirement to preserve exact key and
value names in our database. What we need is *meaning*, not data.

So we shouldn't attempt to make a meta-metadata-format that has all the
features of all possible metadata formats. Instead we should just
standardize on one, hardcoded, metadata format that's useful for our
purposes, and then translate other formats to that format. The simplest
thing is just a flat series of columns. In other words, something like this:

id: 1234
cameraType: canon-digital-rebel-xt
width: 1600
height: 1200

id: 5678
cameraType: canon-digital-rebel-xt
width: 1600
height: 1200

WHY EVEN HAVE A SEPARATE TABLE?

If we're doing one row per media file (the ideal!) then there is no
reason why you can't simply append these new metadata columns onto the
existing image table. This would make querying REALLY easy, and it would
simplify database management.
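
For example (a sketch only -- img_camera_type is a made-up column name
for illustration, not an existing field):

  -- add queryable, normalized fields straight onto the image table...
  ALTER TABLE image
    ADD COLUMN img_camera_type VARBINARY(255) DEFAULT NULL,
    ADD INDEX img_camera_type (img_camera_type);

  -- ...so the interesting queries stay one-table, one-row-per-file:
  SELECT img_name FROM image
  WHERE img_camera_type = 'canon-digital-rebel-xt';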

And of course metadata formats differ, and not all metadata fields need
to be queryable or 

Re: [Wikitech-l] Anyone with CSS fu that can help out on Flagged Revs?

2010-05-28 Thread Rob Lanphier
Hi Roan,

Thanks for looking into this!  More below:

On Fri, May 28, 2010 at 3:02 AM, Roan Kattouw roan.katt...@gmail.comwrote:

 The way I see this (and, mind you, I'm not a CSS ninja at all, just
 someone with a more-than-basic knowledge of CSS), there's two ways to
 solve this (assuming you want the lock icon to be next to the page
 title, above the horizontal line):

 1) Use different positioning offsets for each skin. This is ugly, but
 will probably work. For Monobook, top: 3.6em; seems to work for me,
 whereas on Vector, top: -2.5em; produces the same effect. You can do
 skin-specific CSS with rules like body.skin-monobook
 .flaggedrevs_short { top: 3.6em; } , but again, you don't want to be
 doing this (there's another half dozen skins out there) and I have no
 idea whether these offsets will work well in other browsers.


I think this is the crux of the problem.  From what Aaron tells me, Monobook
and Vector are already using different offsets here.  I've not had a chance
to do a deep dive myself, but I think you've identified what's most
challenging here.  This is one of those areas in CSS where different
browsers handle things very differently, so getting this right has been a
bit of a mess.

At any rate, thanks again for looking, and if you have more ideas or do more
digging, please let us know.  Aaron and Adam (from the Usability team) are
hopefully going to get a chance to look at this in more detail next week,
but I'm sure they'd be delighted if someone figures out the right way of
dealing with this before then.

Rob


Re: [Wikitech-l] Anyone with CSS fu that can help out on Flagged Revs?

2010-05-28 Thread Platonides
Why do you need to use CSS absolute positioning?
Since it's output from an extension, you could place it in an appropriate
place outside the bodyContent.


