Re: julia_1.0.0-1_amd64.changes REJECTED

2018-09-25 Thread Lumin
hi ftp-master,

Sorry for the noise, but I really care about the package src:julia, and
I started to suspect that ftp-master failed to receive my last feedback
on the rejection. So I'm re-sending the feedback, and CCing
-devel to make sure the mail won't get lost.

As of 1.0.0-3 (NEW), this package is nearly ready for the buster release,
and the remaining transition blockers are not in src:julia itself.

So I want to ask ftp-master:

1. Isn't an "incomplete backtrace" a sensible reason to keep debug symbols?
   Policy says "should", not "must". Please tell me what I can do
   to help the src:julia package satisfy the requirements.

2. julia/0.4.7-7+b3 on mips64el has been waiting for removal for nearly
   three months. If my memory serves, julia 0.4.x is also the last blocker
   for the llvm-3.8 removal. Is there any reason for them to stay
   in the archive any longer?

3. Is the ftp-master team so short of people that there isn't
   even enough manpower to recruit new members? My ftp-trainee
   application mail has again gone unanswered for quite a long time.

I don't really understand what's happening behind the ftp-master
curtain because ftp-master's work is not visible to me. Apologies
if I got something wrong.

Thanks.


On Thu, Aug 23, 2018 at 07:49:09AM +, Lumin wrote:
> On Thu, Aug 16, 2018 at 09:55:11PM +0200, Bastian Blank wrote:
> > On Wed, Aug 15, 2018 at 09:48:55AM +, Lumin wrote:
> > > > -rw-r--r-- root/root   2093360 2018-08-14 12:28 
> > > > ./usr/lib/x86_64-linux-gnu/libjulia.so.1.0
> > > This is what I expected and is NOT a mistake.
> > >  libjulia.so.1 (symlink) -> libjulia.so.1.0 (ELF shared object)
> > 
> > Whoops, I missed the symlink.  Usually someone uses the complete
> > version, aka 1.0.0 for the filename.
> 
> So there is no problem with the SONAME/SOVERSION or package name.
>  
> > > > Please describe directly why you need debug infos.
> > > Justification: Unable to pass unit test without debugging info.
> > > Conclusion:
> > >   1. keeping debugging info makes sense.
> > 
> > No.  You confuse reason with effect.  Some program error is the reason,
> > the failing tests the effect.
> 
> At the bottom part of this mail you can find a julia code snippet,
> extracted from Julia's unit test.
> 
> What I'm emphasizing here is that the debug info in those shared objects
> is intentionally kept to preserve a good user experience and to
> avoid increasing the maintenance burden.
> 
> This is the expected backtrace from the code snippet:
> 
> > ERROR: could not open file /tmp/y/nonexistent_file
> > Stacktrace:
> >  [1] include at ./boot.jl:317 [inlined]
> >  [2] include_relative(::Module, ::String) at ./loading.jl:1038
> >  [3] include(::Module, ::String) at ./sysimg.jl:29
> >  [4] include(::String) at ./client.jl:388
> >  [5] top-level scope at none:0
> 
> This is the actual backtrace after stripping the shared objects:
> 
> > ERROR: could not open file /tmp/y/nonexistent_file
> > Stacktrace:
> >  [1] include(::String) at ./client.jl:388
> >  [2] top-level scope at none:0
> 
> Stripping the shared objects is bad for both us Julia maintainers and
> Julia upstream, because users will always get a truncated backtrace
> and their bug reports may therefore be misleading.
> 
> Especially when there is a problem in Julia's internals, such an
> incomplete backtrace is really not what we want to see.
> 
> Most importantly, policy chapter 10.1 says
> 
> > Note that by default all installed binaries should be stripped,
> > either by using the -s flag to install, or by calling strip on the
> > binaries after they have been copied into debian/tmp but before
> > the tree is made into a package.
> 
> It is "SHOULD" instead of "MUST".
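> 
> For reference, keeping the debug info is a one-line change on the
> packaging side. A minimal debian/rules sketch (assuming debhelper; the
> exclusion pattern here is illustrative, not necessarily what our actual
> rules file does):
> 
> override_dh_strip:
> 	# intentionally skip stripping libjulia so that complete Julia
> 	# backtraces remain available (see the test case below)
> 	dh_strip --exclude=libjulia.so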
> 
> Is this answer acceptable now?
> 
> 
> 
> using Test
> import Libdl
> 
> # helper function for returning stderr and stdout
> # from running a command (ignoring failure status)
> function readchomperrors(exename::Cmd)
>     out = Base.PipeEndpoint()
>     err = Base.PipeEndpoint()
>     p = run(exename, devnull, out, err, wait=false)
>     o = @async(readchomp(out))
>     e = @async(readchomp(err))
>     return (success(p), fetch(o), fetch(e))
> end
> 
> # backtrace contains type and line number info (esp. on windows #17179)
> for precomp in ("yes", "no")
>     succ, out, bt = readchomperrors(`$(Base.julia_cmd()) --startup-file=no --sysimage-native-code=$precomp -E 'include("nonexistent_file")'`)
>     @test !succ
>     @test out == ""
>     println("DEBUG")
>     println(bt)
>     println("DEBUG")
> 
>     @test occursin("include_relative(::Module, ::String) at $(joinpath(".", "loading.jl"))", bt)
>     lno = match(r"at \.[\/\\]loading\.jl:(\d+)", bt)
>     @test length(lno.captures) == 1
>     @test parse(Int, lno.captures[1]) > 0
> end
> 



Bug#907389: ITP: pscircle -- visualizing Linux processes in the form of a radial tree

2018-08-27 Thread Lumin
Package: wnpp
Severity: wishlist
Owner: Mo Zhou 

* Package name: pscircle
  Version : 1.2.2
  Upstream Author : Ruslan Kuchumov
* URL : https://gitlab.com/mildlyparallel/pscircle.git
* License : GPL-2
  Programming Lang: C
  Description : visualizing Linux processes in the form of a radial tree

If no output is specified, pscircle will print the resulting
image to the X11 root window.



Bug#906248: ITP: sleef -- SLEEF Vectorized Math Library

2018-08-15 Thread Lumin
Package: wnpp
Severity: wishlist
Owner: Lumin 

* Package name: sleef
  Version : 3.3
  Upstream Author : SLEEF Project.
* URL : https://sleef.org/ https://github.com/shibatch/sleef
* License : Boost software license
  Programming Lang: C + intrinsics
  Description : SLEEF Vectorized Math Library
  Section : Science

SLEEF is one of PyTorch/Caffe2's dependencies.



Bug#904440: ITP: nsync -- C library that exports various synchronization primitives, such as mutexes [TF deps]

2018-07-24 Thread lumin
Package: wnpp
Severity: wishlist
Owner: lumin 

* Package name: nsync
  Version : 1.20.0
  Upstream Author : google
* URL : https://github.com/google/nsync
* License : Apache-2.0
  Programming Lang: C
  Description : C library that exports various synchronization primitives, 
such as mutexes

tensorflow dependency



Re: Re: Re: Concerns about software freedom when packaging deep-learning based applications.

2018-07-13 Thread Lumin
Hi Jonas,

> Seems you elaborated only that it is ridiculously slow so use CPUs
> instead of [non-free blob'ed] GPUs - not that it is *impossible to use
> CPUs.
> 
> If I am mistaken and you addressed the _possibility_ (not popularity) of
> reproducing/modifying/researching with CPUs, then I apologize for
> missing it, and as if you can please highlight the the essential part of
> your point.

Sorry if I didn't make my point clear.

From a technical point of view, a CPU can do the same work as a GPU.
So it is definitely possible, even if it takes 100 years on a CPU.

From a human's point of view, a pure free software stack can do the
same thing, but one has to wait for, say, a year. In that case,
in practice, one is forced to use non-free.

Based on this observation, I raised the topic in the original post,
because the freedom to modify/reproduce a work is limited by,
as concluded previously, the license of the big data and the noticeable
time/hardware cost. Hence I asked how we should deal with such
works if some of us want to integrate them into Debian.

> Bill Gates is famously quoted for ridiculing the need for more than 640
> kilobytes of memory in personal computers.  Computer designs changed
> since then.



Re: Re: Concerns about software freedom when packaging deep-learning based applications.

2018-07-13 Thread Lumin
Hi Russell,

> On Thu, 2018-07-12 at 18:15 +0100, Ian Jackson wrote:
> > Compare neural networks: a user who uses a pre-trained neural network
> > is subordinated to the people who prepared its training data and set
> > up the training runs.
> 
> In Alpha-Zero's case (it is Alpha-Zero the original post was about)
> there is no training data.  It learns by being run against itself. 
> Intel purchased Mobileye (the system Tesla used to use, and maybe still
> does) with largely the same intent.  The training data in that case is
> labelled videos resembling dash cam footage.  Training the neural
> network requires huge amounts of it, all of which was produced by
> Mobileye by having human watch the video and label it. This was
> expensive and eventually unsustainable.  Intel said they were going to
> attempt to train the network with videos produced by game engines.  I
> haven't seen much since the Intel purchased Mobileye however if they
> succeed we are in the same situation - there is no training data.  In
> both cases is is just computers teaching themselves.

To be clear, there are mainly three types of learning: (1) supervised
learning[1]; (2) unsupervised learning[2]; (3) reinforcement learning[3].

AlphaGo-Zero is based on reinforcement learning, but it is a bit special:
we can generate meaningful data across the state space of the game board.
However, much other current reinforcement learning research uses data
that is not easy to generate algorithmically. For example, I remember
a group of people tried to teach a neural network to drive a car
by letting it play Grand Theft Auto V[4].

Supervised learning (requires labeled data) and unsupervised learning
(requires unlabeled data) often require a large amount of data.
That data may come with license restrictions. As for reinforcement
learning's data... well, I'm confused, but I don't want to dig deeper.
 
> The upshot is I don't think focusing on training data or the initial
> weights is a good way to reason about what is happening here.   If Deep
> Mind released the source code for Alpha-Zero anyone could in principle
> reproduce their results if you define their result as I'm pretty sure
> they do: produce an AI capable of beating any other AI on the planet at
> a particular game.  The key words are "in principle" of course, because
> the other two ingredients they used was 250 MW hour of power (a wild
> guess on my part) and enough computers to be able to expend that in 3
> days.

Releasing the initial weights doesn't make sense. The initial weights
of state-of-the-art neural networks are simply drawn from a
certain Gaussian or uniform distribution.
The key to reproducing a neural network is the input data plus the
hyper-parameters, such as the learning rate used during gradient descent.
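
To illustrate (a minimal PyTorch-flavoured sketch; the layer shape and
the Kaiming-style initializer are assumptions made purely for this
example):

    import torch as th

    # "initial weights" are nothing but a random draw; anyone can
    # regenerate an equivalent set locally
    layer = th.nn.Linear(4096, 4096)
    th.nn.init.kaiming_normal_(layer.weight)  # Gaussian, scaled by fan-in

    # what cannot be regenerated without the training data and the
    # hyper-parameters is the *trained* weights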
 
> A better way to think about this is the AI they created is just another
> chess (or Go or whatever) playing game, no different in principle to
> chess games already in Debian.  However, it's move pruning/scoring
> engine was created by a non human intelligence.  The programming
> language that intelligence uses (the weights on a bunch of
> interconnected polynomials) and the way it reasons (which is boils down
> finding the minima of a high dimensional curve using newtons method to
> slide down the slope) is not something human minds are built to cope
> with.  But even though we can't understand them these weights are the
> source, as if you give them to a similar AI it can change the program. 
> In principle the DSFG is fulfilled if we don't discriminate again non-
> human intelligences.
> 
> Apart from the "non-human" intelligence bit none of this is different
> to what we _already_ accept into Debian.  It's very unlikely I could
> have sensible contributions to the game engines of the best chess,
> backgammon or Go programs Debian has now.  I have no doubt I could
> understand the source, but it would take me weeks / months if not years
> to understand the reasoning that went into their move scoring engines. 
> The move scoring engine happens to be the exact thing Alpha-Zero's AI
> (another thing I can't modify) replaces.   In the case of chess at
> least they will have a database of end games they rely on, a database
> generated by brute force simulations generated using quantities of CPU
> cycles I simply could not afford to do.
>
> Nonetheless, cost is an issue.  To quantify it I presume they will be
> able to rent the hardware required from a cloud provider - possibly we
> could do that even now.  But the raw cost of that 250 MW hour of power
> is $30K, and I could easily imagine it doubling many times as it goes
> through the supply chain so as another wild guess you are probably
> looking at $1M to modify the program.  $1M is certainly not "free" in
> any sense of the word, but then the reality no other Debian development
> is free either.  All development requires computers and power which
> someone has to pay for.  The difference is now is 

Re: Re: Concerns about software freedom when packaging deep-learning based applications.

2018-07-13 Thread Lumin
Hi Ian,

> Lumin writes ("Concerns about software freedom when packaging deep-learning 
> based applications."):
> >  1. Is a GPL-licensed pretrained neural network REALLY FREE? Is it really
> > DFSG-compatible?
> 
> No.  No.
> 
> Things in Debian main should be buildable *from source* using Debian
> main.  In the case of a pretrained neural network, the source code is
> the training data.
> 
> In fact, they are probably not redistributable unless all the training
> data is supplied, since the GPL's definition of "source code" is the
> "preferred form for modification".  For a pretrained neural network
> that is the training data.

It would be interesting to look at some real examples.
I emphasized GPL because the project [1] mentioned in the original post
is released under the GPL. However, it doesn't clearly state the license
of the pretrained network that upstream distributes.

However, according to [1]'s README: "Recomputing the AlphaGo Zero weights
will take about 1700 years on commodity hardware". Well, I guess that
means a pure free software stack can never reproduce this work.

Apart from that AlphaGo project, I actually see more pretrained models
released under MIT/BSD/Apache-2.0, or in the public domain. Here is another
example around ImageNet, a typical dataset in the computer vision field:

   https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/readme.md
   Framework: Caffe(BSD-2-clause), repro code: (BSD-2-clause),
   pretrained-network: (public domain), dataset: ImageNet (???) [2]
   -> Software stack is free, but not the ImageNet dataset.
   -> Debian cannot distribute related applications in the main section.


> >  2. How should we deal with pretrained neural networks when some of us
> > want to package similar things? Should it go contrib?
> 
> If upstream claims it is GPL, and doesn't supply training data, I
> think it can't even go to contrib.

OK, then what if the model is released under a more permissive license
such as MIT/BSD, given that the model is trained on a non-free academic
dataset with a non-free software stack (replaceable by a CPU
implementation, but at a ridiculous time cost)? I think in this case that
kind of network can enter contrib.
 
> If upstream does not claim to be supplying source code, or they supply
> the training data, then I guess it can go to contrib.

The computer vision research community is eager to release its code
and models under permissive licenses such as BSD/MIT. Many related
conferences publish their papers without access restrictions. The biggest
problem lies in the big data.
 
> Note that the *use* of these kind of algorithms for many purposes is
> deeply troublesome.  Depending on the application, it might cause
> legal complications such as difficulties complying with the European
> GDPR, illegal race/sex/etc. discrimination, and of course it comes
> with big ethical problems.
> 
> :-/

hmm... That appears to be quite incompatible with the DFSG, but that's
another complex story :-)

> Ian.
> 

[1] https://github.com/gcp/leela-zero
[2] http://image-net.org/download-faq
I searched and found no explicit license declaration for ImageNet.
But according to the agreement text, the dataset may be non-free.



Re: Re: Concerns about software freedom when packaging deep-learning based applications.

2018-07-13 Thread Lumin
Hi Jonas

> Perhaps I am missing something, but if it is _possible_, just 100x slower,
> to use CPUs instead of GPUs, then I fail to recognize how it cannot be
> reproduced, modified, and researched 100x slower.
> 
> Quite interesting question you raise!

I can provide at least two data points:

1. The AlexNet[1] paper says: "We trained the network for roughly 90
   cycles through the training set of 1.2 million images, which took five
   to six days on two NVIDIA GTX 580 3GB GPUs."

   This is an old but very famous paper. In this paper, the deep neural
   network was trained on a public dataset named ImageNet[2]. For image
   classification purposes, the dataset provides 1000 categories of
   images and ships more than 100GB of pictures.

   ImageNet is a typical large dataset. Training networks on it requires
   expensive hardware and takes a lot of time. As time has gone by, the
   latest GPUs and software stacks have reduced the network training time
   significantly, down to less than half an hour.

   Nobody will train a neural network on ImageNet on a CPU.
   A CPU takes a ridiculously long time compared to a GPU.

2. A very simple and reproducible CPU/GPU performance comparison.
   This snippet imports a pretrained neural network from another
   famous work and computes predictions for 50 fake input
   images.

   │ import torch as th
   │ import torchvision as vision
   │ import time
   │ 
   │ net = vision.models.vgg19(pretrained=True)
   │ batch = th.rand((50, 3, 224, 224))
   │ net, batch = net.float(), batch.float()
   │ 
   │ t = time.time()
   │ net(batch)
   │ print('CPU version:', time.time() - t)
   │ 
   │ net, batch = net.cuda(), batch.cuda()
   │ t = time.time()
   │ net(batch)
   │ print('CUDA version:', time.time() - t)

   My CPU is Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz, running with MKL
   My GPU is Nvidia TITAN X (Pascal), running with CUDA9.1 and cuDNN
   CPU version: 30.960017681121826
   CUDA version: 0.17571759223937988
   The CUDA version is 176x faster than the CPU version.

   Nobody will train a neural network on a big dataset on a CPU.
   A CPU takes a ridiculously long time compared to a GPU.
   (The framework used is PyTorch[5]; the vgg19 net comes from [4])


[1] 
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[2] https://image-net.org/
[3] oops, I think I lost the link to the mentioned paper
[4] http://www.robots.ox.ac.uk/%7Evgg/research/very_deep/
[5] https://pytorch.org/



Concerns about software freedom when packaging deep-learning based applications.

2018-07-12 Thread Lumin
Hi folks,

I just noticed that one of us is trying to package a deep-learning based
application[1]; specifically, it is AlphaGo-Zero[2] based. However, this
raised my concern about software freedom. Since mankind relies on artificial
intelligence more and more, I think I should raise this topic on -devel.


However, before pointing out the problem I am concerned about, I should
first explain some terms in this field:

 (1) For those who don't know what "deep learning" is, please think of it
 as a field which aims to solve problems such as "what object
 does this photo present to us? a cat or a bike?"[5], which cannot
 be solved by traditional algorithms.

 (2) For those who don't know what a "pretrained (deep/convolutional) neural
 network" is, please think of it as a pile of matrices, or simply a
 large, pre-computed array of floating-point numbers (see the tiny
 sketch below).

 (3) The CUDA Deep Neural Network library (cuDNN)[4] is NVIDIA's
 **PROPRIETARY** library, stacked on CUDA, and requires an NVIDIA GPU
 exclusively.
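
A hedged illustration of point (2) (the file name is hypothetical;
numpy is used purely for demonstration):

    import numpy as np

    # a "pretrained network" on disk is, to a first approximation,
    # nothing more than a bag of named arrays of floats
    weights = np.load('pretrained.npz')
    print({k: weights[k].shape for k in weights.files})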


My core concern is:

  Even if upstream releases their pretrained model under a GPL license,
  the freedom to modify, research, and reproduce the neural networks,
  especially "very deep" neural networks, is de facto controlled by
  PROPRIETARY software.

Justification to the concern:

  1. CUDA/cuDNN is used by nearly all deep-learning researchers and
 service providers.

  2. Deep neural networks are extremely hard to train on CPUs due to
 the time cost. By leveraging cuDNN and powerful graphics cards,
 the training process can be made more than 100x
 faster. That means, for example, that if a neural network can be
 trained on a GPU in 1 day, the same thing would take a few months
 on CPUs. (Google's TPU and FPGAs are not the point here)

  3. A meaningful "modification" of a neural network often means
 "fine-tuning", which is a process similar to "training".
 A meaningful "reproduction" of a neural network often means
 "training from random initialization".

  4. Due to 1., 2. and 3., the software freedom is not complete.
 In a pure free software environment, such a work cannot practically
 be reproduced, modified, or even researched. A CPU can indeed finish
 the same work in several months or several years, but that is way
 too much to ask.

 In this sense, the pretrained neural network is not totally "free"
 even if it is licensed under GPL-*. None of the clauses of the
 GPL is violated, but the software freedom is limited.


I'd like to ask:

 1. Is a GPL-licensed pretrained neural network REALLY FREE? Is it really
    DFSG-compatible?

 2. How should we deal with pretrained neural networks when some of us
want to package similar things? Should it go contrib?



[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=903634
[2] https://en.wikipedia.org/wiki/AlphaGo_Zero
[3] https://www.debian.org/social_contract
[4] https://developer.nvidia.com/cudnn
[5] More examples of what deep learning can do:
(2) tell you what objects a picture presents, and where they are.
(3) describe what's in a picture in a complete English sentence.
(4) translate one language into another, such as zh_CN -> en_US
(5) fix your English grammar errors https://arxiv.org/pdf/1807.01270.pdf
(6) ...




More expressive Multi-Arch field

2018-04-18 Thread Lumin
Hello folks,

I find myself prone to forgetting what "same" or "foreign" means
in the Multi-Arch field. Again and again I have to look up the
docs to figure out what they stand for. These two words don't
explicitly convey their meaning. Based on this, I'm writing to
put forward an idea for improving this field.

I have read Osamu's doc[1], and I think there could be three cases
of Multi-Arch:

1. Multi-Arch: co-installable

e.g. we have libfoo1, which puts a .so file under /usr/lib/<triplet>.
In this case libfoo1:amd64 is co-installable with, for instance,
libfoo1:i386.

2. Multi-Arch: exclusive

e.g. we have a package libfoo-tools which puts ELFs into /usr/bin.
In this case this binary is not co-installable with libfoo-tools
compiled for another arch. Hence it is "exclusive".

3. Multi-Arch: universal / indep

e.g. we have a pure Python package python-bar. This package
is usable on all archs, or say independent of arch.
Such packages could be marked "indep" or e.g. "universal".

Compared to "same"/"foreign", the idea above provides a more
expressive and self-documenting Multi-Arch field.
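
For concreteness, case 1 is what today's semantics spell as
"Multi-Arch: same". A minimal, hypothetical debian/control stanza:

    Package: libfoo1
    Architecture: any
    Multi-Arch: same

i.e. the reader has to know that "same" means "co-installable with
itself built for other architectures" -- exactly the indirection the
proposal above tries to remove.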

Your opinion?

[1] https://www.debian.org/doc/manuals/debmake-doc/debmake-doc.en.txt
  policy and dev-ref don't explain the Multi-Arch field.
-- 
Best,


Re: Urging for a solution to the slow NEW queue process

2018-04-12 Thread Lumin
Hi Holger,

> you didn't mention which package of yours is stuck in NEW, could you
> please elaborate? Else this seems like a rant out of the blue, without
> much checking of facts, like Phil (Hands) thankfully provided.

I was just afraid that things were going wrong, seeing the large median
waiting time for packages, without knowing about the problem in the node
ecosystem, since I don't write code in languages whose names start with "J".

> I also share wookey's observation that NEW is being processed more
> quickly than ever currently (and actually has been for quite some time now.
> Which reminds me of the situation that some people *still* think Debian
> releases are unpredictable and infrequent, while in reality for the last
> 14 years we've released every 22 months or so, with a variation of 2
> months.)

I'm glad to know nothing is going wrong except for the node upstream.
Thank you everyone for letting me know the actual situation of the NEW
queue :-)

-- 
Best,



Re: Fw:Re: Urging for a solution to the slow NEW queue process

2018-04-12 Thread Lumin
Hi Andreas,

> The fact that the NEW queue is continually growing is a sign that DDs are
> continuously motivated to fill it up. ;-)  As Mattia said in his
> response, patience is a feature you learn as a DD, and it is not a bad
> feature.

Thank you and Mattia for pointing that out.

And it would be better if the ftp-master site provided graphs indicating
"how many packages are processed". With such graphs one would be
less likely to misinterpret the situation of the NEW queue.

> I have not seen any applause in this thread for your offer to help.  I
> hereby do this and thank you explicitly.  I have learned that e-mails to
> ftp-master work way worse than IRC via #debian-ftp.  Maybe you can
> repeat your offer there.
>
> I personally admit that getting no response to e-mails is more draining
> on the patience than waiting to get a package accepted.  Thus
> knowing this alternative channel helped me a lot.

Sounds like a good way to look for help the next time I encounter
a similar problem. I just prefer email because conversations are archived,
while those on IRC are not.

-- 
Best,



Urging for a solution to the slow NEW queue process

2018-04-11 Thread Lumin
Hi folks,

I'm sorry for repeating this topic on -devel without reading all the
follow-ups in this thread [1], which seems to be dying. Is there
any conclusion in that thread [1]?

Briefly speaking, if a DD is told "Thank you for your contribution
to Debian, but please wait for at least 2 months so that your package
can enter the archive.", will the DD still be motivated to work on NEW
packages??? Please convince me if you think that doesn't matter.

Let's have a look at this chart[2]. Obviously the NEW queue has become
weirdly long since about a year ago. We can also look at
the middle part of this page[3], where we can estimate the median
time a package waits in the NEW queue. The median is
**2 months**. Things have been going in the BAD direction compared
to the past.

I'm only a DM, and I tried to apply to become an FTP assistant but got
no reply from ftp-master. Now all I can do is
repeat this topic again and urge for a solution.

Sorry for any inconvenience.

Best,
lumin

[1] https://lists.debian.org/debian-devel/2018/03/msg00064.html
[2] https://ftp-master.debian.org/stat/new-5years.png
[3] https://ftp-master.debian.org/new.html

-- 
Best,



Re: Conditions for testing migration

2018-02-22 Thread Lumin
Hi Adam,

On 11 February 2018 at 13:51, Adam D. Barratt  wrote:

> The answer was "yes", in fact.
>
> I'm unsure how you've deduced "i386 wasn't a problem" when the above
> clearly shows that the lack of a lua-torch-torch7 package on several
> architectures is a current blocker for the migration.
>
> The lack of lua-torch-torch7 on i386 would also have been the cause of
> your original issue - arch:all packages must be installable on both
> amd64 and i386 currently; if they are not, you need to ask the Release
> Team to consider adding a hint to force the testing migration scripts
> to ignore the issue.

Thank you, the package migrated after adding an i386 build of lua-torch-torch7.
:-)



-- 
Best,



Conditions for testing migration

2018-02-11 Thread Lumin
Hello guys,

I encountered a weird situation where a package
doesn't migrate to testing:

  Assume source package "sA" yields binary
  package "bA" with Architecture=any. "sA"
  turns out to be a valid migration candidate.

  Source package "sB" yields binary package
  "bB" with Architecture=all, Depends=bA.
  Package "sB" is not considered for migration
  (sketched in debian/control terms below).
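
  In debian/control terms the two sources
  look roughly like this (hypothetical
  stanzas, trimmed to the relevant fields):

    Source: sA
    Package: bA
    Architecture: any

    Source: sB
    Package: bB
    Architecture: all
    Depends: bA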

For instance, package lua-torch-xlua[3]
(= 0~20160719-g41308fe-4) is an Arch=all
package which depends on lua-torch-torch7
(arch=any). lua-torch-torch7 is shipped by
testing, but lua-torch-xlua doesn't migrate[4].
I tried changing Arch to any in the
0~20160719-g41308fe-5 upload to figure out
whether a missing dependency on some
release architectures such as i386 caused
this situation, but the answer was no.

I cannot find useful information about testing
migration in policy[1]. Devref[2] listed some
conditions for a valid migration candidate.
However that doesn't explain what I encountered.

Does anyone know how to handle such a situation?
Does devref need an update about migration?

A similar topic was sent to d-mentors by @olebole
a month ago, but there was no response[5].
So I'm sending this to -devel.

Thank you.

[1] https://www.debian.org/doc/debian-policy/
[2] https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#testing
[3] https://tracker.debian.org/pkg/lua-torch-xlua
[4] https://qa.debian.org/excuses.php?package=lua-torch-xlua
[5] https://lists.debian.org/debian-mentors/2018/01/msg00355.html

-- 
Best,



Re: Re: When do we update the homepage to a modern design? (was Re: Moving away from (unsupportable) FusionForge on Alioth)

2017-05-15 Thread lumin
> I'll take any day a short animation that explains things rather than
> going through a forest of information to figure out what it is, but I
> guess these are all personal opinions.

A tiny bit of animation should be enough for our homepage. The style
of lxde.org does not fit Debian, and I think the style of the
old LXDE homepage is a better fit at this point.

Too many animations and loud web page elements are fancy but
actually somewhat annoying, and lack solemnity.

> >> I believe that what we are actually looking for is a bit of
> >> improvement in the marketing side.
> >> Modern and fancy things.
> >>
> >> The LXDE example is good on that.
> > http://lxde.org/ seems to be the site in question. I agree with
> Paul,
> > I don't like it, and when I encounter pages in that style, I tend
> to
> > close the window.
> 
> Then let's forget about getting newcomers (fresh blood) to Debian, as
> you're so close-minded to modern/new things - the same way they probably
> close the window when they see a '90s style with a lot of text that
> actually says nothing. We are strange with our talks at the last few
> debconfs - we want new people, but we don't want to break our precious
> habits, nor do we want to give freedom to others to express themselves
> if they don't fit into our circle of thinking, which must be the best
> one.

LXDE is a desktop environment, so it's fine to craft a fancy homepage
to attract people. However, that style does not fit Debian.

Most modern business websites are fancy. New blood may like them.
However, if we craft a similarly fancy page, they will forget it
immediately after closing the window. And many of you don't want
that to happen, do you?

What actually scares newbies away is the feeling of rigidity,
not solemnity and simplicity. We value our common values,
we appreciate the hard work done via bugs.d.o and
ftp-master and many other services alike, but what a newbie can
see of Debian is its face. I know that new users who value
only a "pretty face" are less likely to catch the common values
of Debian, and that people with love for this community can bear
any "ugly face" of it. But no one dislikes a proper and better design.

IMHO there are two good examples: the Gentoo homepage and the kernel
homepage https://www.kernel.org/ (remember the old kernel page?).
These pages are pretty but not annoying. An ideal homepage for Debian
should be 1. solemn and quiet (as few loud elements/animations
as possible) 2. informative (dense, but not exhausting one's eyes)
3. well-designed (e.g. https://www.kernel.org/ is visually simple,
but not too simple. Visitors sense a well-designed style.)

On the other hand, I think a CD image link for Sid should be added
to the Debian image download page, maybe with a tag saying
"for experts".



When do we update the homepage to a modern design? (was Re: Moving away from (unsupportable) FusionForge on Alioth)

2017-05-15 Thread lumin
On Mon, 2017-05-15 at 11:19 +0200, Arturo Borrero Gonzalez wrote:
> On 14 May 2017 at 11:58, lumin <cdlumin...@gmail.com> wrote:
> > On the other hand, I fancy modern platforms such
> > as Gitlab, as a user. And wondering when Debian
> > will update its homepage (www.d.o) to a modern
> > design[1].
> > 
> > [1] This is off-thread, but some of my young
> > friends just gave up trying Debian at the
> > first glance at our homepage.
> 
> off-thread, yes. But please spawn another thread to talk about this
> real issue.

> Our users are really complaining about our look in the web and
> we
> should address it.

I'm looking forward to a new design of our homepage, but I'm not
able to help, since I'm not familiar with this field.

Take a look at the homepages of major distros:

https://www.archlinux.org/
https://www.centos.org/
https://www.opensuse.org/
https://manjaro.org/
https://www.ubuntu.com/
https://getfedora.org/
https://linuxmint.com/
https://gentoo.org/

Especially look at the homepage of Gentoo. Some of you must
remember the old Gentoo homepage, but now Gentoo has a much
prettier face. Then look at ours:

https://www.debian.org/

We were the last major distro to move to systemd as the
default init system, and now we are the last major distro
that keeps an old homepage design.

Debian is a community driven by volunteers. I believe
volunteers work hard for the community on the things
they are interested in. I guess there are possibly too few
volunteers able or willing to update the design, so the homepage is
just kept as is.

If none of the volunteers is willing to contribute a new
design, what about spending some money to hire several people
to work on this? neilm pointed out at DebConf16 that we don't
know how to spend our money. Making the community better is a
good reason for doing so, since a modern design may
attract more users/contributors, and is less likely to scare
newbies away ...

Even though Debian is ranked number 2 on DistroWatch.



Re: Moving away from (unsupportable) FusionForge on Alioth

2017-05-14 Thread lumin

AFAIK there is a GitLab instance ready for use:
  http://gitlab.debian.net/

Several months ago, when people were talking about
replacing git.d.o with a GitLab instance, many
Git services, including GitLab and Gogs, were
compared. However, they are Git-only and hence
not so appropriate as an Alioth replacement.

A list of other forges here:
  https://wiki.debian.org/Alioth/OtherForges

On the other hand, as a user I fancy modern
platforms such as GitLab, and I wonder when Debian
will update its homepage (www.d.o) to a modern
design[1].

[1] This is off-thread, but some of my young
friends just gave up trying Debian at the
first glance at our homepage.

Best,
lumin



Bug#846915: Fixing OpenRC halt/reboot behavior by updating initscripts

2016-12-04 Thread lumin
Package: initscripts
Version: 2.88dsf-59.8
Severity: important
Tags: patch
X-Debbugs-CC: 844...@bugs.debian.org, 
pkg-sysvinit-de...@lists.alioth.debian.org, 
openrc-de...@lists.alioth.debian.org, debian-devel@lists.debian.org

Hello guys,

I found a simple way to fix an OpenRC bug [1] by updating the initscripts
package, so both the sysvinit list and the openrc list are in the CC list.

The OpenRC bug [1] is introduced by this commit:

https://anonscm.debian.org/cgit/openrc/openrc.git/commit/?id=f1ec5852df7db55e7bb170b26997c75676430c5b

It removed the "transit" script, and the "reboot" and "halt"
behavior got broken in my kfreebsd-amd64 virtual machine.
From the openrc maintainer's comments and logs, it seems that
keeping the "transit" script is a pain for them.

However, I fixed this problem in my kfreebsd virtual machine
by adding merely two lines of code to initscripts. Maybe this
is better than bringing the "pain" back to the openrc maintainers?

I'm not quite familiar with openrc and initscripts, so I'm not
sure whether my solution is correct.

P.S. It is really annoying that I cannot power off my kfreebsd
virtual machine with any of "poweroff", "halt", or "shutdown",
except via "/etc/init.d/halt stop".

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=844685

From 1b32fc20368f008ab1f5f9197ef8b294efdb76f9 Mon Sep 17 00:00:00 2001
From: Zhou Mo 
Date: Sun, 4 Dec 2016 09:22:59 +
Subject: [PATCH] fix openrc halt and reboot behavior from initscripts side

Make sure that the "halt" and "reboot" services are both added into
runlevel "off":
 # rc-update add halt off
 # rc-update add reboot off
---
 debian/changelog | 8 
 debian/src/initscripts/etc/init.d/halt   | 1 +
 debian/src/initscripts/etc/init.d/reboot | 1 +
 3 files changed, 10 insertions(+)

diff --git a/debian/changelog b/debian/changelog
index 1b4d1b9..ab7a61d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,11 @@
+sysvinit (2.88dsf-59.9) UNRELEASED; urgency=medium
+
+  * Non-maintainer upload.
+  * Add OpenRC support to /etc/init.d/{halt,reboot}, which fixes OpenRC
+halt and reboot behavior since openrc version 0.21-4 . (Closes: #844685)
+
+ -- Zhou Mo   Sun, 04 Dec 2016 09:18:29 +
+
 sysvinit (2.88dsf-59.8) unstable; urgency=medium
 
   * Non-maintainer upload.
diff --git a/debian/src/initscripts/etc/init.d/halt b/debian/src/initscripts/etc/init.d/halt
index c179a25..45b82de 100755
--- a/debian/src/initscripts/etc/init.d/halt
+++ b/debian/src/initscripts/etc/init.d/halt
@@ -72,6 +72,7 @@ case "$1" in
 	exit 3
 	;;
   stop)
+if [ "$RC_REBOOT" = "YES" ]; then exit 0; fi
 	do_stop
 	;;
   *)
diff --git a/debian/src/initscripts/etc/init.d/reboot b/debian/src/initscripts/etc/init.d/reboot
index e1dcb1c..1b61ee5 100755
--- a/debian/src/initscripts/etc/init.d/reboot
+++ b/debian/src/initscripts/etc/init.d/reboot
@@ -29,6 +29,7 @@ case "$1" in
 	exit 3
 	;;
   stop)
+if [ "$RC_REBOOT" != "YES" ]; then exit 0; fi
 	do_stop
 	;;
   status)
-- 
2.10.2



[buildd] unexpected FTBFS on amd64 buildd «binet»

2016-10-16 Thread lumin
Hi there,

I encountered an unexpected FTBFS on amd64 that I can't reproduce.[1]
And I'd like to ask the list before fixing it by e.g. a binary-only
upload.

My package lua-torch-torch7/experimental fails[2] to build from
source because of an "illegal instruction" error at the debhelper
auto-test stage. However, from that build log I can't tell which
program to blame -- luajit or lua-torch-torch7.

The upstream code indeed contains instruction-specific stuff, but
I have never encountered such a failure on the amd64 architecture.
Besides, I tested this package with debomatic-amd64[3] and the
result is quite healthy.

So I have trouble locating the source of the problem.

Cosmic ray?

My questions:

 * should we suspect the health of that amd64 buildd
   machine «binet»?
 * what should I do next? do a binary-only amd64 upload or
   request a rebuild of that package (and how)?

[1] 
https://buildd.debian.org/status/package.php?p=lua-torch-torch7=experimental
[2] 
https://buildd.debian.org/status/fetch.php?pkg=lua-torch-torch7=amd64=0%7E20161013-g4f7843e-1=1476616893
[3] 
http://debomatic-amd64.debian.net/distribution#experimental/lua-torch-torch7/0~20161013-g4f7843e-1/buildlog



Re: Bad news for CUDA applications (was: Re: GCC 6 & binutils for the Debian stretch release)

2016-07-01 Thread lumin

Releated bug on ArchLinux:
https://bugs.archlinux.org/task/49272?project=5=12602

There are some hacks but none of them seems to be "an actual solution
to packaging".

On Fri, 2016-07-01 at 06:07 +0000, lumin wrote:
> Hi all,
> (please keep me in CC list)
> 
> I'm pointing out a BIG problem introduced by stretch's GCC-6-only plan.
> 
> In brief, CUDA 8.0~RC fails to work with GCC-6; this conclusion
> comes from my local Caffe build log, as attached. 
> That is to say, after the GCC-6 transition *ALL* packages depending
> on CUDA will be removed from stretch due to FTBFS.
> 
> I don't expect Nvidia to release CUDA 8.5 before the stretch
> freeze date (Q1 2017), i.e. even a freeze exception for
> cuda might not save the situation. So all maintainers of
> CUDA application packages have to face this harsh condition.
> Do you have any solution to this problem?
> 
> Besides, I CC'ed 2 Nvidia guys with the hope that they can provide
> some helpful information.




Bad news for CUDA applications (was: Re: GCC 6 & binutils for the Debian stretch release)

2016-07-01 Thread lumin
Hi all,
(please keep me in CC list)

I'm pointing out a BIG problem introduced by stretch's GCC-6-only plan.

In brief, CUDA 8.0~RC fails to work with GCC-6; this conclusion
comes from my local Caffe build log, as attached. 
That is to say, after the GCC-6 transition *ALL* packages depending
on CUDA will be removed from stretch due to FTBFS.

I don't expect Nvidia to release CUDA 8.5 before the stretch
freeze date (Q1 2017), i.e. even a freeze exception for
cuda might not save the situation. So all maintainers of
CUDA application packages have to face this harsh condition.
Do you have any solution to this problem?

Besides, I CC'ed 2 Nvidia guys with the hope that they can provide
some helpful information.


caffe_buildlog_nvcc8_gcc6_failure.txt.gz
Description: application/gzip




Bug#823140: RFS: caffe/1.0.0~rc3-1 -- a deep learning framework [ITP]

2016-05-01 Thread lumin
Package: sponsorship-requests
Severity: wishlist
X-Debbugs-CC: a...@debian.org, deb...@danielstender.com, deb...@onerussian.com, 
debian-devel@lists.debian.org, debian-scie...@lists.debian.org

Dear mentors,

  I am looking for a sponsor for my package "caffe"

 * Package name: caffe
   Version : 1.0.0~rc3-1
   Upstream Author : Berkeley Vision and Learning Center
 * URL : caffe.berkeleyvision.org
 * License : BSD-2-Clause
   Section : science

  It builds those binary packages:

 caffe-cpu  - Fast, open framework for Deep Learning (CPU_ONLY)
 caffe-cuda - Fast, open framework for Deep Learning (CUDA)
 libcaffe-cpu-dev - development files for Caffe (CPU_ONLY)
 libcaffe-cpu1 - library of Caffe, a deep learning framework (CPU_ONLY)
 libcaffe-cuda-dev - development files for Caffe (CUDA)
 libcaffe-cuda1 - library of Caffe, a deep learning framework (CUDA)
 python-caffe-cpu - Python2 interface of Caffe (CPU_ONLY)
 python-caffe-cuda - Python2 interface of Caffe (CUDA)

  To access further information about this package, please visit the following 
URL:

  https://mentors.debian.net/package/caffe

  Alternatively, one can download the package with dget using this command:

dget -x 
https://mentors.debian.net/debian/pool/contrib/c/caffe/caffe_1.0.0~rc3-1.dsc

  Debomatic-amd64 build log can be obtained at

  
http://debomatic-amd64.debian.net/distribution#experimental/caffe/1.0.0~rc3-1/buildlog
  Note: the source uploaded to debomatic-amd64 differs from the one at
  mentors -- the time stamp in d/changelog differs; that is the only
  difference.

  Changes since the last upload:

caffe (1.0.0~rc3-1) experimental; urgency=low

  * Initial release. (Closes: #788539)
  * Fix spelling error in src/caffe/layers/memory_data_layer.cpp.

Thanks :-)




Bug#818900: [Lua Policy] integrate debian's lua modules into Debian's Luarocks

2016-03-21 Thread lumin
Package: lua5.1-policy
Version: 33
Severity: wishlist
X-Debbugs-CC: debian-devel@lists.debian.org, h...@hisham.hm

Hi,
(Talking about policy, hence CC'ing -devel)
(CC'ing luarocks upstream)

While dealing with one of my ITPs I found that this is
a noticeable problem for Debian's lua packages. And I think
this may require some changes to our lua policy, or to the dh-lua
scripts.

Luarocks is a lua module manager, just like pip for python.
However, Debian's luarocks is blind to Debian's lua modules,
i.e. `luarocks list` won't list lua modules installed by dpkg;
besides, lua modules installed by dpkg won't participate in
lua module dependency resolution. That's bad.

When pulling new lua modules from the internet with `luarocks`,
it resolves lua module dependencies and automatically pulls and
installs missing modules. For example, suppose I need to install a lua
module that is not packaged by us, say lua-xxx, and it depends
on lua-cjson. Then `luarocks install xxx` will cause luarocks
to install a new lua-cjson module, ignoring the lua-cjson package
installed by dpkg. Why do we provide the lua-cjson package then?

*** What to do to make improvement? ***

IMHO following changes should be considered:

1. update default configuration file of luarocks
   /etc/luarocks/config-5.1.lua

```patch
  rocks_trees = { 
 home..[[/.luarocks]],
 [[/usr/local]],
+[[/usr]],
  }
+ deps_mode = 'all'
```

2. let luarocks package install this directory

   /usr/lib/luarocks/rocks/

3. update lua-* packages with luarocks integration,
   e.g. update their postinst and prerm scripts.

   On this point I have a solution that works but is not good enough
   (patch parts copied from my locally modified lua-cjson package):

```patch
--- /dev/null
+++ b/debian/lua-cjson.postinst
@@ -0,0 +1,31 @@
+#!/bin/sh
+set -e
+
+prepare_luarocks ()
+{
+  local rockdir
+  rockdir='/usr/lib/luarocks/rocks/lua-cjson/2.1.0-1/'
+  mkdir -p $rockdir
+  echo 'rock_manifest = {}' > $rockdir/rock_manifest
+  cp /usr/share/doc/lua-cjson/lua-cjson-2.1.0-1.rockspec $rockdir
+  if [ -x /usr/bin/luarocks-admin ]; then
+luarocks-admin make-manifest --local-tree --tree=/usr
+  fi
+}
[...]
```

and this one

```patch
--- /dev/null
+++ b/debian/lua-cjson.prerm
@@ -0,0 +1,27 @@
+#!/bin/sh
+set -e
+
+remove_luarocks ()
+{
+  if [ -x /usr/bin/luarocks ]; then
+   luarocks remove lua-cjson --local-tree --tree=/usr
+  fi
+}
+
```

Thanks! :-)
-- 
 .''`.
: :' :
`. `' 
  `-  



Bug#807579: [patch] CUDA 7.5 works well with GCC5, while CUDA 7.0 fails to.

2015-12-10 Thread lumin
Package: nvidia-cuda-toolkit
Version: 7.0.28-1
Severity: normal
tag: patch
x-debbugs-cc: debian-devel@lists.debian.org, a...@debian.org, d...@debian.org 

Hi,

After the default compiler changed to GCC 5, many cuda-related packages
FTBFS because cuda << 7.5 can't work with GCC 5.

The CUDA 7.0 present in experimental still does not work with GCC 5,
even with the gcc compatibility hack that ArchLinux applied[1].

I'm working on packaging CUDA 7.5, and my local tests show that
CUDA 7.5 works with GCC 5.3.1 from sid, with the same hack as [1]
applied.

My package debian-science/caffe finally gets rid of its FTBFS;
it passed the build without any nvcc/gcc complaints.
I also noticed that there are many packages affected by
CUDA's failure to work with GCC 5, hence CC'ing -devel.

My work is still in progress, and I'll attach patches in follow-up
mails to this bug.

Thanks.

[1] 
https://projects.archlinux.org/svntogit/community.git/tree/trunk/PKGBUILD?h=packages/cuda




Re: Fwd: [Patch] Shall we update CUDA to 7.0.28 ? (#783770)

2015-11-13 Thread lumin
Hi,

On Fri, 2015-11-13 at 15:21 +0100, Samuel Thibault wrote:
> Thanks for your contribution that will hopefully help getting cuda 7.0
> or 7.5 packages out faster.

I do hope so. :-)

> > FYI: As far as I know, in my experiments,
> > a Tesla K20C GPU works 5+ times faster than an Intel Xeon W3690.
> 
> That's not a reason for rushing to CUDA 7.0 or later.  A K20C will work
> fine with CUDA 6.5.  Also, helping free software to support NVIDIA GPUs
> would help avoiding the issue :)

Well, I mean Caffe can work without CUDA, but it is slow. So in order
to speed Caffe up I need a set of working build-deps, i.e.
we should import CUDA 7.5, which should work with GCC-5.

> I guess there is no way to backport the CUDA 7.5 fixes to support GCC-5?
> (damn the non-free software...)

I think there is nearly no way to do so, even if we knew what to change
to backport GCC-5 support into CUDA 7.0. That's because CUDA's EULA
FORBIDS ANY KIND OF MODIFICATION, e.g. strip'ing the huge ELFs...
So that path is closed.

> Urgl, so CUDA 7.0 is not even enough?
> 
> I guess we may want to skip 7.0 and directly upload 7.5.

Yes, NVCC 7.0.27 (shipped in CUDA 7.0.28; the version numbers are correct)
still complains about G{CC,++} >> 4.10. That means CUDA 7.0 refuses to
work with GCC-5 in my case.

So can we skip CUDA 7.0?

If that's OK, then I'll be glad to fetch CUDA 7.5 and
do the import work again.

> So yes, the answer is: "please be patient, we are working on it. Help is
> welcome".

I plan to take care of this package for some time.

Thank you for comment :-)
-- 
 .''`.   Lumin
: :' : 
`. `'   
  `-638B C75E C1E5 C589 067E  35DE 6264 5EB3 5F68 6A8A




Fwd: [Patch] Shall we update CUDA to 7.0.28 ? (#783770)

2015-11-13 Thread lumin
Hi all,

(d-devel blocks my mail, which contains many patches,
for the original mail please see
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=783770
)


Summary
===

In short I'm trying to update the CUDA package for Debian,
and patches are provided as attachments. My work is based
on the source of nvidia-cuda-toolkit/experimental, i.e.

nvidia-cuda-toolkit_6.5.19-1.debian.tar.xz


Story
=

I'm the packager of "Caffe", a deep learning framework,
whose packaging work currently lives under debian-science.
However, I have suffered a lot from Debian's CUDA version.

Debian's CUDA in experimental is QUITE OUT OF DATE, so much so
that it fails to work with GCC-5.
So my Caffe packages just FTBFS, due to the ancient
version of CUDA.


However, ArchLinux's CUDA version (7.5) is OK for my package,
and I don't understand why Debian experimental holds such an
old... 6.5 version, compared with the one ArchLinux ships.

I know some people don't care about packages in the non-free section,
or even hate them. But having a new and working CUDA package
is really good news for, at least, deep learning researchers,
as a GPU can speed training up by 5+ times compared to a CPU.

FYI: As far as I know, in my experiments,
a Tesla K20C GPU works 5+ times faster than an Intel Xeon W3690.

What I want to say is that having an updated CUDA is wonderful
for Debian, for productivity use, e.g. deep learning research.

I like Debian, I like stable systems; however, Arch beats Debian
on this point.


Packaging
=

The CUDA package works fine with my chroot'ed sid, with
sources.list = sid + experimental.

What makes me happy is that compiling Caffe's *.cu files with gcc-4.9
runs well. CUDA 7.0.28 supports up to g{cc,++}-4.9, but
still does not work with g{cc,++}-5.
The caffe build failed when linking ELFs against v5 libs with g{cc,++}-4.9,
so it matches my expectation.

I'm afraid that I have to bump CUDA to 7.5 to solve the FTBFS of Caffe,
as 7.5 really fixes CUDA's attitude towards GCC-5.

Following is my dch; you can see there are still some items remaining
to be done:

nvidia-cuda-toolkit (7.0.28-1) experimental; urgency=medium

  * New upstream release 7.0 (March 2015). (Closes: #783770)
+ SOVERSION Bump from 6.5 => 7.0
  * Drop i386 support for all CUDA packages.
  * Add two new packages, which are introduced by CUDA 7.0:
+ libnvrtc
+ libcusolver
  * Bump libnvvm2 to libnvvm3.
  * Update *.symbols.in
  * Update *.install.in
+ nvidia-cuda-dev: Not installing stuff from open64 directory
  (e.g. nvopencc), because they are not provided in CUDA 7.0.
  * [temporary] comment manpage patches out.
+ [todo] solve a pile of lintianW on manpages
  * [todo] update watch / watch.in
  * [todo] update CUDA_VERSION_DRIVER in rules.def
  * [todo] remove nvopencc-related stuff
  * [todo] remove libcuinj32-related stuff
  * [todo] update copyright with new EULA
  * [todo] update get-orig-source in d/rules
  * [todo] check the compatible gcc version 
- d/rules: 's/__GNUC_MINOR__ > 8/__GNUC_MINOR__ > 9/'
  * [todo] check --list-missing for missing docs.


If the current patches are OK, then I'd be willing to solve the remaining
[todo] items.
I don't mind importing CUDA 7.5 immediately after uploading CUDA 7.0
to experimental.

:-)

Thanks.


-- 
 .''`.   Lumin
: :' : 
`. `'   
  `-638B C75E C1E5 C589 067E  35DE 6264 5EB3 5F68 6A8A




Who is the most energetic DD

2015-10-03 Thread lumin
s.alioth.debian.org>
 8  549 <debian-med-packag...@lists.alioth.debian.org>
 7  620 <jaw...@cpan.org>
 6  739 <pkg-haskell-maintain...@lists.alioth.debian.org>
 5  773 <python-modules-t...@lists.alioth.debian.org>
 4  838 <pkg-ruby-extras-maintain...@lists.alioth.debian.org>
 3  902 <pkg-java-maintain...@lists.alioth.debian.org>
 2 1200 <gre...@debian.org>
 1 3105 <pkg-perl-maintain...@lists.alioth.debian.org>

This list is generated from 
  debian/dists/sid/main/source/Sources.gz
by my code rather than by UDD.

My code is located at
https://github.com/CDLuminate/DebArchive
and in order to replay the result, you need to first download a
Sources.gz from a Debian mirror.
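
The core of it boils down to something like this (a minimal sketch,
not the actual DebArchive code; it counts how many source packages
list each maintainer):

    import gzip
    from collections import Counter

    counter = Counter()
    with gzip.open('Sources.gz', 'rt', errors='replace') as f:
        for line in f:
            if line.startswith('Maintainer:'):
                counter[line.split(':', 1)[1].strip()] += 1

    # print the top 10, most prolific last, like the list above
    for rank, (maint, n) in enumerate(counter.most_common(10), 1):
        print(rank, n, maint)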

Happy hacking !  :-)
-- 
 .''`.   Lumin
: :' : 
`. `'   
  `-638B C75E C1E5 C589 067E  35DE 6264 5EB3 5F68 6A8A




Bug#797898: RFS: caffe/0.9999~rc2+git20150902+e8e660d3-1 [ITP]

2015-09-03 Thread lumin
Package: sponsorship-requests
Severity: wishlist
X-Debbugs-CC: debian-devel@lists.debian.org, debian-ment...@lists.debian.org, 
788...@bugs.debian.org

Dear mentors,

  I am looking for a sponsor for my package "caffe"

 * Package name: caffe
   Version : 0.9999~rc2+git20150902+e8e660d3-1
   Upstream Author : BVLC (BERKELEY VISION AND LEARNING CENTER)
 * URL : http://caffe.berkeleyvision.org/
 * License : BSD-2-Clause
   Section : science

  It builds those binary packages:

 caffe-cpu  - Fast open framework for Deep Learning (Tools, CPU_ONLY)
 caffe-cuda - Fast open framework for Deep Learning (Tools, CUDA)
 libcaffe-cpu-dev - Fast open framework for Deep Learning (LibDev, CPU_ONLY)
 libcaffe-cpu0 - Fast open framework for Deep Learning (Lib, CPU_ONLY)
 libcaffe-cuda-dev - Fast open framework for Deep Learning (LibDev, CUDA)
 libcaffe-cuda0 - Fast open framework for Deep Learning (Lib, CUDA)
 python-caffe-cpu - Fast open framework for Deep Learning (Python2, CPU_ONLY)
 python-caffe-cuda - Fast open framework for Deep Learning (Python2, CUDA)

  To access further information about this package, please visit the following 
URL:

  http://mentors.debian.net/package/caffe

  Alternatively, one can download the package with dget using this command:

dget -x 
http://mentors.debian.net/debian/pool/contrib/c/caffe/caffe_0.9999~rc2+git20150902+e8e660d3-1.dsc


  Changes since the last upload:

caffe (0.9999~rc2+git20150902+e8e660d3-1) experimental; urgency=low

  * Initial release. (Closes: #788539)
  * Modify upstream Makefile:
- Add SONAME: libcaffe.so.0 for libcaffe.so
- Fix rpath issue
  * Modify upstream CMake files:
- Add SONAME: libcaffe.so.0 for libcaffe.so
  * Flush content of ".gitignore" from upstream tarball


  Regards,
   Zhou Mo




Wrapper package of Linuxbrew for Debian

2015-08-18 Thread lumin
Hi folks,

I've prepared a wrapper package for linuxbrew [1]
and had some discussion on debian-mentors [2].
Now the package is in nice shape, so I'm writing
to -devel for comments on this wrapper package.

linuxbrew-wrapper [1]
=

Package linuxbrew-wrapper provides only

  * a shell wrapper script written by myself
  * the upstream install script

and no linuxbrew itself. The main purpose of
creating this package is to

  * Eliminate the gap for users using linuxbrew on Debian/its variants.

  After installing linuxbrew-wrapper, the only thing users
  need to do is set their ENVs properly as they want (see the
  sketch below).
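
For example, a profile snippet along these lines (a hypothetical
sketch; the example profile shipped in the package may differ):

  # make brew'ed software visible in the user's environment
  export PATH="$HOME/.linuxbrew/bin:$PATH"
  export MANPATH="$HOME/.linuxbrew/share/man:$MANPATH"
  export INFOPATH="$HOME/.linuxbrew/share/info:$INFOPATH"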

Linuxbrew [0]
=

Linuxbrew is a fork/port of Homebrew, the missing package
manager for OS X. Its advantages are:

 * Home-based package management, which means you can install
   brew'ed software in your home directory without ROOT permission.

 This is also useful when a group of people works on a
 workstation and root permission is limited.
 I provided an example profile in the package, which 
 contains some ENVs that may be useful.

 * Installing missing software.

 You know, some users can't wait for some software to be packaged
 for Debian ... And some users will mess their home directory up after
 compiling bunches of software tarballs ... Homebrew/Linuxbrew provides
 a relatively elegant way of managing home-made software (certainly 
 within the scope of brew'ed software).

 * [...]

However, its disadvantages are a little tricky:

 * Security issues.

 Security issues are handled entirely by upstream, and I can't
 patch any linuxbrew Formula (the linuxbrew packaging scripts)
 at all (linuxbrew's updates are managed by git; patches from third
 parties would render the linuxbrew git repo dirty and make it
 unpredictable).

 Hence nothing from upstream except the linuxbrew install script is
 left in the package. Linuxbrew packaging bugs and security bugs are
 then upstream bugs ... after all, in that case the buggy files are not
 provided by Debian.

 * Copyright issues.

 I didn't investigate all the software that linuxbrew ships, so I'm
 not sure whether linuxbrew provides only *free software*.
 Linuxbrew itself is BSD-2-Clause licensed.

 * Package management issues.

 I'm not bringing trouble to apt/dpkg. However, having 2 package
 management systems may ... require users to be aware of what they
 are doing.


Looking forward to comments :-)


[0] https://github.com/Homebrew/linuxbrew
[1] http://mentors.debian.net/package/linuxbrew-wrapper

http://mentors.debian.net/debian/pool/main/l/linuxbrew-wrapper/linuxbrew-wrapper_20150804-1.dsc
[2] https://lists.debian.org/debian-mentors/2015/08/msg00157.html
-- 
 .''`.   Lumin
: :' : 
`. `'   
  `-638B C75E C1E5 C589 067E  35DE 6264 5EB3 5F68 6A8A




The Spirit of Free Software, or The Reality

2015-07-04 Thread lumin
Hello Debian community,

I have always longed to become a Debian member. However, I am now
troubled by the tension between the spirit of free software and
reality, and I wonder how Debian interprets its spirit of free
software. (Certainly the Social Contract and the DFSG don't go into
much detail.)

As we know, reaching the same stage as Richard M. Stallman
(i.e. resisting any non-free stuff, thoroughly) is very hard for an
ordinary person, including me. Even so, many people are trying their
best to protect their software freedom, with a few careful
compromises to non-free blobs.

Several years ago I was influenced by Debian's insistence on free
software, and I tried to gradually block non-free matters away. I was
very happy doing that, because I kept my computer away from those
terrible non-free programs and stayed in a clean, pure computing
environment.

But does blocking non-free blobs away mean partially blinding a
teenager's eyes? To stay in touch with the world outside of free
software, sometimes we do need to compromise with non-free blobs, at
least temporarily. After all, the free software and open source
communities make up only a tiny proportion of humanity.

Hence my strategy changed. I compromised with more and more
non-free blobs when I wanted to experience what I hadn't experienced,
to gain what I didn't possess, to explore the outer world that I
hadn't seen.

Then I got to a stage where I strongly insist on free software,
but sometimes accept using non-free blobs.

I'm aware that
 * insisting on free software != totally the RMS way,
and that is how the weird way of insisting on my so-called 'free
software' developed.

I have no trouble making my personal choices. What I want to know
is: what would you do to protect your software freedom when reality
requires you to touch non-free blobs?

Keep the free software spirit and faith in mind, while actually
touching non-free blobs with your hands? How do you resolve this
tough situation?

--

I see many people fighting for software freedom,
e.g. #786909 and [...]
Sincerely, thank you, all the free software fighters!
Fighting for what one believes in is noble and respectable.

Thank you, fighters, from the bottom of my heart.

--
Best,
lumin





Re: The Spirit of Free Software, or The Reality

2015-07-04 Thread lumin
Hi Jan Gloser and debian-devel,

First I'd like to restate a point of my view:

 * Free software != software that can legally be used without charge

Besides, some free software licenses don't prevent people from
selling the software for profit, and neither does Debian GNU/Linux
itself.

The key point of free software is not only whether it costs money,
but the software freedom it gives to users (which may include being
free of charge). Indeed everyone can use non-free software, but the
more we compromise, the more the non-free software producers will
bite.

For example, Chromium:
 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786909
What if we constantly feel free to use non-free blobs, and get
used to those suspicious, weird binary blobs and odd software
behaviours? Maybe some producers will put more unconditional
downloads and the like into their software. Then trust becomes
a problem.

At this point, we who compromised need fighters like RMS. But if
everyone thinks the same way, who will speak for software freedom,
who will fight for their own software freedom? The best candidate is
oneself; but I have compromised!

Generally, it's hard to find an optimal equilibrium between belief
and survival, between software freedom and reality.

On Sat, 2015-07-04 at 19:40 +0200, Jan Gloser wrote:
 Hello Lumin,
 
 
 I am not an active member of the debian community, just a listener on
 this thread, but you got my attention. I also admire free software
 makers although I think one must always keep in mind the reality of
 the world and the rules of the game called 'trade'.
 
 
 
 Software is a product like any other. It requires care, time and
 considerable effort to develop. With the advent of cheap, affordable
 computers people somehow started to think that everything in this
 domain should be free. Well, I don't really think so. If you go to the
 market and want to get some apples, it's only fair that you pay for
 the apples. It's your way to say to the apple-seller: Hey, I
 appreciate what you're doing. Take the money and continue growing and
 delivering apples so that me and people like me can buy them when we
 want. I think non-free software is not inherently bad. Every
 programmer likes to get paid (or at least I do). Programmers usually
 get paid a lot and that gives them some room - that allows them to
 give something back for free. But you must carefully decide where the
 line is - what you can give for free and what you must charge others
 for. Because the reality is there. If you give everything for free you
 won't be able to survive in this global 'game of monopoly' that we are
 all playing - and that also means you won't be able to give ANYTHING
 back.
 
 
 I think the free software movement is partly an outgrowth of the times
 when just a few people really had the software-making know-how, or a
 few companies. And these companies charged ridiculous prices. It's
 very good that these companies have competition today in the form of
 free software so that users can ask: Hey, this software I can get for
 free. What extra can you give me? Why do you charge so much? I am
 definitely against over-pricing. But I am also definitely not against
 charging a reasonable price.
 
 
 It would be really nice if we didn't have to care about money at all.
 Let's say you would make software and give it for free. If you needed
 a house, you would go to someone who specializes in that and he would
 build the house for you, for free. If you needed shoes ...  you get my
 point, right? Then we could live like a huge happy tribe, sharing
 everything we have. This is a very nice philosophy. It has a history
 though. It also has a name. Communism. And history has shown us that
 communism on a large scale does not work.
 
 
 So from my perspective - feel free to use non-free software, but
 remember to pay for it, at least if the price is reasonable ;-). And
 if it is not - make a better alternative and either charge for it or
 give it away for free. All depends on how much money you need for your
 own survival.
 
 
 Cheers,
 
 Jan
 
 
 On Sat, Jul 4, 2015 at 6:55 PM, lumin cdlumin...@gmail.com wrote:
 Hello Debian community,
 
 I long for becoming a Debian member, always. However now I get
 into
 trouble with the problem of Spirit of Free software or
 Reality.
 I wonder how Debian interprets it's Spirit of Free Software.
 (Certainly Social Contract and DFSG don't refer much detail)
 
 As we know, getting into the stage where as the same as
 Richard.M.Stallman (i.e. Resists any non-free stuff,
 thoroughly )
 is very hard for an ordinary person, as well as me. Even
 though,
 many people are trying their best to protect their software
 freedom,
 with several careful compromises to non-free blobs.
 
 Several years ago I was influenced by Debian's insist on
 Freesoftware,
 and then trying

Bug#788539: ITP: caffe -- a deep learning framework

2015-06-12 Thread lumin
Package: wnpp
Severity: wishlist
Owner: lumin cdlumin...@gmail.com
X-Debbugs-CC: debian-devel@lists.debian.org

Hi,

* Package name: caffe
  Version : rc2
  Upstream Author : Berkeley Vision and Learning Center
* URL : https://github.com/BVLC/caffe
* License : BSD 2-Clause license
  Programming Lang: C++, Python
  Description : a deep learning framework
 .
 Caffe is a deep learning framework made with expression,
 speed, and modularity in mind. It is developed by the Berkeley
 Vision and Learning Center (BVLC) and community contributors.

Caffe is well-known software to many (computer vision) researchers,
and it helps them do related research.

Its project home page is:

http://caffe.berkeleyvision.org/





what's the difference between [s/i/m/l/k/n] ?

2015-02-28 Thread lumin
Hi guys,

While learning how to package software,
I am confused about the essential difference between
[s/i/m/l/k/n]
(single binary, indep binary, multiple binary, ...).

The man page of dh_make(8) briefly explains the differences
between them __without_revealing_the_essence__.

The maint-guide (Debian New Maintainers' Guide) says:
There are several choices here: s for Single binary package, i
for
***
outside the scope of this document.
and maint-guide doesn't hint where the answer lies.

After some experiments with dh_make, I found that
[s/i/m/l/k/n] affects at least the file debian/control
(an illustration follows below).

For lack of documentation, I wonder: does [s/i/m/l/k/n]
only affect debian/control, and nothing except control?
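
To illustrate my observation, a sketch for a hypothetical source
package "foo" (not an authoritative list) of how the class changes
the binary stanzas dh_make writes into debian/control:

  # s (single binary): one stanza of compiled code
  Package: foo
  Architecture: any

  # i (indep binary): one architecture-independent stanza
  Package: foo
  Architecture: all

  # l (library): stanzas following the shared-library naming convention
  Package: libfoo0
  Architecture: any

  Package: libfoo-dev
  Architecture: any

  # m (multiple binary): several binary stanzas in one source package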

P.S. If the essential difference between s/i/m/l/k/n needs to be
explained clearly, I think it's good to file a bug against maint-guide?

Thanks. :-)
-- 
Regards,
  C.D.Luminate





Thank you for the remarkable work on Jessie

2014-11-28 Thread lumin
Hi debian-devel,

I'm glad to say I just upgraded my machine[1] from wheezy (7.4~5)
directly to current Jessie, and I found the process completely
painless; no dependency problems remained after the upgrade.
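
(For the record, the upgrade itself was roughly the standard procedure
from the release notes; a sketch, run as root:)

  # point APT at jessie instead of wheezy
  sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
  apt-get update
  # minimal upgrade first, then the full distribution upgrade
  apt-get upgrade
  apt-get dist-upgrade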

There were indeed some configuration issues, which I resolved
easily, and I don't think those issues are bugs.
Besides, I noticed that the init system has been changed from sysvinit
to systemd, and it works very well.

However, there is one issue:
once Xorg starts, it segfaults right away.
It seems this is caused by xserver-xorg-video-siliconmotion[3].

For wheezy, I downloaded a tarball[4] from a forum,
and Xorg no longer segfaulted after I applied its contents.
I don't know how to resolve this issue on Jessie to make Xorg work
again. (Please let me know which logs/files I should send if someone
is interested in looking into this problem.)

Nevertheless, Thank you for your hard work,
so that Debian users can enjoy those smooth dist-upgrade.

Hurra Debian!

[1] loongson-2f, mipsel architecture
[2] I'm not sure
[3] This problem exists on wheezy too
[4] content: xorg.conf and modified xserver-xorg-video-siliconmotion.deb
-- 
Regards,
  C.D.Luminate





Re: Thank you for the remarkable work on Jessie

2014-11-28 Thread lumin

On Sat, 2014-11-29 at 00:08 +0800, Paul Wise wrote:
 Please file a bug (severity serious):

Yes, filed this bug at:
#771387: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771387

-- 
Regards,
  C.D.Luminate

