What about cutting down on SMs as recommended by Kellen?

Sheng Zha <zhash...@apache.org> wrote on Tue., Dec. 3, 2019, 20:15:

> This is certainly one way to do it. However, the binary size limits our
> ability to publish to PyPI. So, assuming that we still want to have our
> binary on PyPI, we'd have to convince PyPA to raise our limits. Thus, it
> seems to me that this hypothetical vote on stopping the nightly publishing
> to PyPI would likely only have one acceptable outcome.
>
> This is more of an emergency situation, as an essential distribution
> channel is currently broken, so I'm focusing on the POC for now.
>
> -sz
>
> On 2019/12/03 18:28:44, Marco de Abreu <marco.g.ab...@gmail.com> wrote:
> > Excellent! Could we maybe come up with a POC and a quick writeup and
> > then start a proper vote after everyone has verified that it covers
> > their use cases?
> >
> > -Marco
> >
> > Sheng Zha <zhash...@apache.org> wrote on Tue., Dec. 3, 2019, 19:24:
> >
> > > Yes, there is. We can also make it easier to access by using a
> > > geolocation-based DNS service so that users in China are directed to
> > > that local mirror. The rest of the world is already covered by the
> > > global CloudFront distribution.
> > >
> > > -sz
> > >
> > > On 2019/12/03 18:22:22, Marco de Abreu <marco.g.ab...@gmail.com> wrote:
> > > > Isn't there an S3 endpoint in Beijing?
> > > >
> > > > It seems like this topic still warrants some discussion, and thus
> > > > I'd prefer if we don't move forward with lazy consensus.
> > > >
> > > > -Marco
> > > >
> > > > Tao Lv <mutou...@gmail.com> wrote on Tue., Dec. 3, 2019, 14:31:
> > > >
> > > > > * For pypi, we can use mirrors.
> > > > >
> > > > > On Tue, Dec 3, 2019 at 9:28 PM Tao Lv <mutou...@gmail.com> wrote:
> > > > >
> > > > > > As we have many users in China, I'm considering the
> > > > > > accessibility of S3. For pip, we can use mirrors.
> > > > > >
> > > > > > On Tue, Dec 3, 2019 at 3:24 PM Lausen, Leonard
> > > > > > <lau...@amazon.com.invalid> wrote:
> > > > > >
> > > > > >> I would like to remind everyone that lazy consensus is assumed
> > > > > >> if no objections are raised before 2019-12-05 at 05:42 UTC.
> > > > > >> There has been some discussion about the proposal, but to my
> > > > > >> understanding no objections were raised.
> > > > > >>
> > > > > >> If the proposal is accepted, MXNet releases would be installed
> > > > > >> via
> > > > > >>
> > > > > >>     pip install mxnet
> > > > > >>
> > > > > >> and release candidates via
> > > > > >>
> > > > > >>     pip install --pre mxnet
> > > > > >>
> > > > > >> (or with the respective CUDA version specifier appended, etc.)
> > > > > >>
> > > > > >> To obtain releases built automatically from the master branch,
> > > > > >> users would need to pass pip an option like
> > > > > >> "-f http://mxnet.s3.amazonaws.com/mxnet-X/nightly.html".
> > > > > >>
> > > > > >> Best regards
> > > > > >> Leonard
> > > > > >>
> > > > > >> On Mon, 2019-12-02 at 05:42 +0000, Lausen, Leonard wrote:
> > > > > >> > Hi MXNet Community,
> > > > > >> >
> > > > > >> > for more than 2 months, our binary Python nightly releases
> > > > > >> > published on PyPI have been broken. The problem is that our
> > > > > >> > binaries exceed PyPI's size limit.
> > > > > >> > Decreasing the binary size by adding compression breaks
> > > > > >> > third-party libraries loading libmxnet.so:
> > > > > >> > https://github.com/apache/incubator-mxnet/issues/16193
> > > > > >> >
> > > > > >> > Sheng requested that PyPI increase our size limit:
> > > > > >> > https://github.com/pypa/pypi-support/issues/50
> > > > > >> >
> > > > > >> > Currently "the biggest cost for PyPI from [the many MXNet
> > > > > >> > binaries with nightly release to PyPI] is the bandwidth
> > > > > >> > consumed when several hundred mirrors attempt to mirror each
> > > > > >> > release immediately after it's published". So PyPI is not
> > > > > >> > inclined to allow us to upload even larger binaries on a
> > > > > >> > nightly schedule. Their compromise is to allow it on a weekly
> > > > > >> > cadence.
> > > > > >> >
> > > > > >> > However, I would like the community to revisit the necessity
> > > > > >> > of releasing pre-release binaries to PyPI on a nightly (or
> > > > > >> > weekly) cadence. Instead, we can release nightly binaries
> > > > > >> > ONLY to a public S3 bucket and instruct users to install from
> > > > > >> > there. On our side, we only need to prepare an HTML document
> > > > > >> > that contains links to all released nightly binaries.
> > > > > >> > Users would then install the nightly releases via
> > > > > >> >
> > > > > >> >     pip install --pre mxnet-cu101 \
> > > > > >> >         -f http://mxnet.s3.amazonaws.com/mxnet-cu101/nightly.html
> > > > > >> >
> > > > > >> > instead of
> > > > > >> >
> > > > > >> >     pip install --pre mxnet-cu101
> > > > > >> >
> > > > > >> > Of course, proper releases and release candidates should
> > > > > >> > still be made available via PyPI. Thus releases would be
> > > > > >> > installed via
> > > > > >> >
> > > > > >> >     pip install mxnet-cu101
> > > > > >> >
> > > > > >> > and release candidates via
> > > > > >> >
> > > > > >> >     pip install --pre mxnet-cu101
> > > > > >> >
> > > > > >> > This will substantially reduce the costs to the PyPI project
> > > > > >> > and in fact matches the installation experience provided by
> > > > > >> > PyTorch. I don't think the benefit of not having to include
> > > > > >> > "-f http://mxnet.s3.amazonaws.com/mxnet-cu101/nightly.html"
> > > > > >> > outweighs the costs we currently externalize to the PyPI
> > > > > >> > team.
> > > > > >> >
> > > > > >> > This suggestion seems uncontroversial to me. Thus I would
> > > > > >> > like to start lazy consensus. If there are no objections, I
> > > > > >> > will assume lazy consensus on stopping nightly releases to
> > > > > >> > PyPI in 72 hrs.
> > > > > >> >
> > > > > >> > Best regards
> > > > > >> > Leonard
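For context on the "HTML document that contains links to all released nightly binaries" mentioned in the proposal: pip's `-f`/`--find-links` option accepts a URL to a plain HTML page and simply scrapes it for `<a href="...">` links pointing at wheel files. A minimal sketch of generating such a page follows; the helper name and the wheel URL are hypothetical, not part of the actual MXNet tooling.

```python
def build_findlinks_page(wheel_urls):
    """Render a pip --find-links compatible HTML page.

    pip only needs anchor tags whose hrefs end in wheel file names;
    everything else on the page is ignored.
    """
    links = "\n".join(
        f'    <a href="{url}">{url.rsplit("/", 1)[-1]}</a><br/>'
        for url in wheel_urls
    )
    return f"<!DOCTYPE html>\n<html>\n  <body>\n{links}\n  </body>\n</html>\n"


# Example with a made-up nightly wheel name:
page = build_findlinks_page([
    "https://mxnet.s3.amazonaws.com/mxnet-cu101/"
    "mxnet_cu101-1.6.0b20191203-py2.py3-none-manylinux1_x86_64.whl",
])
print(page)
```

Such a page, uploaded next to the wheels in the S3 bucket, is all that `pip install --pre mxnet-cu101 -f <url>` needs to discover the nightly builds.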