I would like to get a sense of people's feelings in light of actual data on the usage of the site.

*Users prefer master*

Despite the site defaulting to version 1.1.0, nearly 60% of page views are on master. I think it is pretty clear what the website's users want. I've implemented a new site-building script <https://github.com/apache/incubator-mxnet/tree/master/docs/build_version_doc#build_site_tagsh> that can easily swap the default version to any tag, and I hope to get it integrated with the CI process this week. We could go live with master as the default whenever it is agreed.
*Time-travel website is difficult to maintain, confuses search, and is a strange user experience*

Switching versions across the whole site and maintaining these time-traveling website builds is not really worth the effort, and the old sites introduce a lot of problems with search and stale information (+1 to Christopher's sentiments on UX). I think we should keep MXNet versioning limited to the install page(s) and the API docs. That is where versions have value and user traffic: these pages account for over 50% of all traffic to the legacy versions, and 80%+ of that traffic comes directly from search, primarily Google. We could gain more visibility into these data points by enabling the Google Analytics Search Console, but even without it, experience suggests that if old content is available and linked to, it reinforces itself despite being deprecated or even inaccurate. By stashing versioned content where it is accessible but not highlighted, we'll force the search engines to update their links; over time, any dent to the ranking of search results will heal, and the site's primary, current content will appear at the top of the results.

*Maintaining tutorials and examples in master is easier, will help search, and will provide a better user experience*

Another 15-25% of legacy-version traffic goes to tutorials, and all of it comes from search. These old tutorials are not maintained, and while they might theoretically work with the specific API version they're coupled with in the build, they are also riddled with broken links, missing datasets, and Python 3 incompatibilities. IMO, we should flag each tutorial in master with its minimum required API version and Python version, and no longer support legacy tutorials as a matter of course. If someone wants to fix one, great: they can make the tutorial in master backwards compatible, or create a separate tutorial that focuses on the legacy version. Either way, it is maintained in master.
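To make the flagging idea concrete, here is a minimal sketch of how a tutorial could check the environment against its stated minimums. All names here (MIN_PYTHON, MIN_MXNET, meets_requirements) are hypothetical illustrations, not part of any existing MXNet tooling:

```python
import sys

# Hypothetical per-tutorial metadata: the minimum versions this
# tutorial was written and tested against.
MIN_PYTHON = (3, 4)
MIN_MXNET = "1.1.0"


def parse_version(version):
    """Turn a dotted version string like '1.1.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def meets_requirements(installed_mxnet_version, python_version=None):
    """Return True if the environment satisfies the tutorial's stated minimums.

    python_version defaults to the running interpreter's (major, minor).
    """
    python_version = python_version or sys.version_info[:2]
    if tuple(python_version) < MIN_PYTHON:
        return False
    return parse_version(installed_mxnet_version) >= parse_version(MIN_MXNET)
```

A tutorial could call this in its first cell and print a clear warning instead of failing halfway through on an API it cannot use.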
This shift will force search to update, guiding users to working tutorials and fresh content.

In conclusion, there are three overlapping proposals here:

1. Make master primary.
2. Remove time travel from the website. Provide specific instructions for installing master, the current release, and a subset of legacy versions. Provide versioned API docs on the website. People can still download tagged releases and build the old site and docs if they wish.
3. Maintain tutorials and examples in master.

Cheers,
Aaron

P.S. Any moves or removals of content will be handled with 301 permanent redirects, so we can soften the transition.

On Mar 1, 2018 19:55, "Barber, Christopher" <[email protected]> wrote:

> I was thinking more along the lines of benchmarks of MXNet vs TensorFlow,
> PyTorch, and Caffe2. Benchmarks of edge devices would definitely be
> interesting, but I would also want to see benchmarks of training time,
> memory use, and accuracy on large models. Obviously this would be a
> non-trivial amount of work, which is why no one else is doing it, but
> there would be a lot of interest in this. I would also like to see
> benchmarks of ndarray vs. symbol vs. gluon.
>
> But yes, if you want to drive traffic to the website you should have
> content that changes frequently. I have to say I find it really strange to
> have the entire website change when I select a different version from the
> top tab. The design of the website should be independent of the code
> version.
>
> - Christopher
>
> On 3/1/18, 4:33 PM, "Marco de Abreu" <[email protected]> wrote:
>
> As far as I know, there are plans to run regular benchmarks and generate
> statistics. We could use that data. My personal task after CI is creating
> an infrastructure to automatically perform performance and power
> consumption benchmarks on edge devices (Raspberry Pi and Nvidia Jetson).
> It would definitely be a good idea to share this data with the community
> (especially considering the impressive performance of MXNet).
>
> Aaron is currently gathering requirements for recreating the website build
> and publish process, so input like this is definitely helpful. This could
> basically be summarized as a requirement to make the website contain
> static parts (e.g. APIs and documentation) as well as dynamic parts
> (e.g. news, statistics, recent papers, etc.).
>
> How does that sound?
>
> -Marco
>
> Aaron Markham <[email protected]> wrote on Thu., 1 March 2018, 22:11:
>
> > Do you have specific public benchmarks in mind?
> >
> > On Mar 1, 2018 10:13, "Barber, Christopher" <[email protected]> wrote:
> >
> > > I think one thing that could draw more users would be published
> > > benchmarks that show that networks implemented using MXNet perform
> > > better than comparable ones using other platforms on the same
> > > hardware. If you can definitively show that MXNet is much faster
> > > and/or uses much less memory, you will attract much more interest.
> > >
> > > - Christopher
> > >
> > > On 2/25/18, 11:53 PM, "Li, Mu" <[email protected]> wrote:
> > >
> > > That's a great idea. What Simon's team is doing is publishing
> > > more tutorials on both the MXNet website and the AWS blog, which may
> > > attract a lot of traffic. Also, Sukwon is tracking the progress of
> > > publishing news more frequently on the homepage.
> > >
> > > Best,
> > > Mu
> > >
> > > > On Feb 25, 2018, at 8:48 PM, Chris Olivier <[email protected]> wrote:
> > > >
> > > > My (probably less-than-$0.02):
> > > >
> > > > I have as my home page on my phone the Google Research Blog, and
> > > > they frequently release stuff like data sets and models that do
> > > > this or that. Usually it seems pretty interesting and I am
> > > > compelled to try it.
> > > > Maybe we do something similar, but I'm not aware of it. I know we
> > > > have all sorts of examples and whatnot, but it doesn't seem the
> > > > same as what at least appears to be something new to play with
> > > > scrolling past every couple of weeks.
> > > >
> > > > For example, a few days ago:
> > > >
> > > > "Introducing the HDR+ Burst Photography Dataset"
> > > >
> > > > https://research.googleblog.com/2018/02/introducing-hdr-burst-photography.html?m=1
> > > >
> > > > Reading that makes me want to download it and play around.
> > > > Obviously I would use Tensorflow by default because it's ready to
> > > > roll as-is with this dataset/model.
> > > >
> > > > -Chris
> > > >
> > > >> On Sun, Feb 25, 2018 at 8:34 PM Mu Li <[email protected]> wrote:
> > > >>
> > > >> Not sure why the screenshot of the page views is not there; attaching it again:
> > > >>
> > > >> Best,
> > > >> Mu
> > > >>
> > > >>> On Feb 25, 2018, at 8:32 PM, Li, Mu <[email protected]> wrote:
> > > >>>
> > > >>> Hi Team,
> > > >>>
> > > >>> We are having trouble attracting new users. According to Google
> > > >>> Analytics, there is almost no increase in the number of page
> > > >>> views for the documentation site mxnet.io.
> > > >>>
> > > >>> The number of page views is an important metric for adoption,
> > > >>> and I would like to take action to improve this number. That
> > > >>> includes improving the website so that users can get information
> > > >>> more easily. However, the current website displays the last
> > > >>> stable version instead of the master branch, so the effect of a
> > > >>> modification on user behavior may take a few months to observe,
> > > >>> which is definitely not effective.
> > > >>> One aggressive idea is just showing the master branch by default
> > > >>> during the website improvement period (which may take 3 months).
> > > >>> Another way is releasing more frequently, e.g. a new release
> > > >>> every 2 weeks.
> > > >>>
> > > >>> What are your thoughts?
> > > >>>
> > > >>> Best,
> > > >>> Mu
