Amazing! Thank you Leon and Federico for your quick help. As always, I'm happy to see more and more features added to the Analytics tool :)
*Regards, Itzik Edri*
Chairperson, Wikimedia Israel
+972-54-5878078 | http://www.wikimedia.org.il
Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment!

On Sun, Feb 26, 2017 at 11:09 PM, Leon Ziemba <[email protected]> wrote:

> Alrighty, I've added GLAMorous as a new source to Massviews. To use it, just
> enter the Commons category name. So for your category:
> http://tools.wmflabs.org/massviews-test/?platform=all-access&agent=user&source=glamorous&target=Wikimedia_Israel_-_Channel_2_videos&range=latest-20&sort=views&direction=1&view=list&debug=true
>
> Hopefully this is what you were looking for; I know some other GLAM folks
> requested this (T150507 <https://phabricator.wikimedia.org/T150507>).
>
> This is only available on the test version of Massviews because I would
> still very much consider this a quick demo. It seems to work fine for your
> example, but it is by no means production-ready, and you may encounter
> random errors, especially with larger datasets. Some translations are also
> missing.
>
> The source is called "GLAMorous", but that's actually a misnomer. I ended
> up implementing everything myself, so it does not use the GLAMorous tool at
> all. It also lacks features like selectively choosing projects, and it
> assumes you only want mainspace pages. I imagine this accounts for most use
> cases, though.
>
> Let me know if you run into any problems or have any feedback. I'm sure
> you'd like to see pageview totals grouped by the source file, which I will
> try to work on soon, among other features, before officially releasing this.
>
> Best,
>
> ~MA
>
> On Sun, Feb 26, 2017 at 12:32 PM, Federico Leva (Nemo) <[email protected]> wrote:
>
>> Itzik - Wikimedia Israel, 26/02/2017 17:56:
>>
>>> ammm.. maybe an easier way for someone who doesn't want to play with code
>>> and download dumps? :)
>>
>> Does a standard command like grep qualify as easier? :-) On a computer
>> with some bandwidth I did something like:
>>
>> wget -r -np -nH -nd -A bz2 http://ftp.acc.umu.se/mirror/wikimedia.org/other/mediacounts/daily/2016/ ; find -name "mediacounts*bz2" -print0 | xargs -0 -P8 -I§ -n1 bzgrep webm § | grep -E '/Channel_?2.+webm' > 2016-12-channel2.csv
>>
>> That gives me about 650k accesses during December 2016, of which 10,500
>> were downloads of the complete file and 8k were streamed plays.
>>
>> Nemo
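For other Commons categories, the query parameters visible in Leon's test link can presumably be reused as-is; below is a minimal, hypothetical sketch of building such a URL. The category value is just the one from this thread and should be replaced; whether every parameter combination is supported on the massviews-test instance is an assumption based only on the link quoted above.

# Hypothetical helper: build a Massviews (test instance) URL for a given
# Commons category, reusing the query parameters from the link above.
# Replace the category value with your own (underscores instead of spaces).
category='Wikimedia_Israel_-_Channel_2_videos'
echo "http://tools.wmflabs.org/massviews-test/?platform=all-access&agent=user&source=glamorous&target=${category}&range=latest-20&sort=views&direction=1&view=list"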
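For anyone who wants to adapt Nemo's one-liner, here is the same pipeline written out as a commented script. It is only a sketch under the assumptions visible in the thread: the ftp.acc.umu.se mirror path, target files that are .webm videos matching the /Channel_?2.+webm/ pattern, and an arbitrary output file name; consult the mediacounts documentation before interpreting the individual count columns.

#!/usr/bin/env bash
# Sketch of the mediacounts filtering pipeline from Nemo's message above.
# Assumptions: the ftp.acc.umu.se mirror layout from the thread, and that the
# files of interest are .webm videos matching /Channel_?2.+webm/.

mirror='http://ftp.acc.umu.se/mirror/wikimedia.org/other/mediacounts/daily/2016/'
out='channel2-matches.tsv'   # arbitrary output name

# 1. Download the daily mediacounts dumps (bz2-compressed TSVs) into the
#    current directory, without recreating the remote directory structure.
wget -r -np -nH -nd -A bz2 "$mirror"

# 2. Scan the dumps in parallel (8 jobs), keep only rows mentioning "webm",
#    then narrow to the file names of interest and save the matching rows.
find . -name 'mediacounts*bz2' -print0 \
  | xargs -0 -P8 -n1 bzgrep webm \
  | grep -E '/Channel_?2.+webm' \
  > "$out"

# 3. Rough summary: count matching rows per file, assuming the first
#    tab-separated field is the file name (check the mediacounts README
#    for the meaning of the remaining count columns).
cut -f1 "$out" | sort | uniq -c | sort -rn | head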
