Loz said:
> On Wed, Apr 1, 2009 at 7:11 AM, Aymeric Mansoux <[email protected]> wrote:
> > But now, if someone uses the live version running off a stick, or an HD,
> > everything is fine until this person decides to update the liveUSB/HD
> > via aptitude. Then, instead of pulling just some files related to the
> > audio/video/multimedia software, it might pull a whole new system that
> > will have to be cached in the persistent mode, and might also just
> > break the system. Erasing the stick or the HD with a recent liveUSB/HD
> > snapshot would then be easier, which breaks a bit the concept of
> > a live system that can be maintained with minimal effort.
> 
> Well, I should state that I do have a conflict of interest, as the
> Live release doesn't hold much interest for me. I'm currently running
> an install of Debian testing, with the p:d and multimedia sources, so
> I have access to supercollider and suchlike. I'm not using the p:d
> kernel, as this broke xorg on my laptop. So obviously I'd prefer p:d
> to stick with testing.
> 
> > In your opinion, which components, if they became outdated, would be a
> > real problem?
> 
> Well, we don't really know what new hardware will come out in the next
> 18 months (when Squeeze is likely to go stable). Of course, any new
> hardware could be sorted out in kernel upgrades, but then aren't you
> just duplicating work done by the main Debian branch?

Not at all.
Debian's stock i386 kernel is compiled for the most basic Intel
architecture and uses no preemption at all.

Our kernel has RT patches and is compiled for generic i686 + various
changes. This makes a *huge* difference in terms of performance. For a
future milestone we will also provide kernels compiled specifically for
amd64, Pentium 4, etc.

Of course, Debian already provides more specific kernels for some
sub-architectures (an amd64 kernel for the i386 distro), but the config is
very similar to the 686 kernel for i386: no preemption, etc.

So we are not duplicating efforts; after all, those kernels are really
meant for uber-stability because they have to run on servers, while ours
are meant to run on desktops with high-performance, low-latency needs.
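
To make the difference concrete, you can compare the kernel configs
directly (a rough sketch; the exact option names depend on the kernel
version and on the RT patch set used):

    # stock Debian kernel: no (or only voluntary) preemption
    grep -i 'CONFIG_PREEMPT' /boot/config-$(uname -r)
    # typically shows CONFIG_PREEMPT_NONE=y or CONFIG_PREEMPT_VOLUNTARY=y

    # an RT-patched kernel like ours would instead show something like
    # CONFIG_PREEMPT=y together with CONFIG_PREEMPT_RT=y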

The side effect of this is that we cannot always pick the most recent
kernel. For example, leek and potato was released with 2.6.24 and not
2.6.26 because the latter had really poor RT performance and hard
lockups in some cases, so we were waiting for 2.6.29, which was
released recently, to work on a new RT kernel for pure:dyne. In the end,
it's also our choice to provide something that will work perfectly for
fewer people, instead of something that will work poorly for everyone.
And maybe the overall distribution should reflect that as well...


> As I said before, for most people, testing is as stable as they'll
> ever need. It's certainly more stable than Windows, and probably more
> stable than OSX. Debian's naming of a release as 'stable' lures people
> into thinking that that is the one they need for a non-flakey OS, when
> this simply isn't the case. I wouldn't want to have a machine running
> on unstable, but very few people would ever have any major issues with
> testing, if any issues at all. The only downside I can think of with
> testing, is that if an error did pass through to testing, you'd have to
> wait 2 or 3 weeks for the fix to filter back through.

Sure. I think "stable" in Debian mostly means that the features are
frozen; that's why it's easier to build and experiment on top of it.


> Also, never underestimate a user's wish for something new and shiny.
> Planning your OS around a bulletproof system will make some people
> very pleased indeed, knowing that their music system is very unlikely
> to crash (bar any software issues), but for most users, I think they'd
> see another release with a few extra bells and whistles, and look back
> at p:d as being outdated.

I also tend to fall for the update/new-toys cycle, so I understand your
point... but...

... maybe there is a misunderstanding here...
If we used Lenny, we would not wait for the next stable Debian release
to keep updating our packages. For example, if a new Pd is released, we
would build it against Lenny and put it in our repos. So you'd always
get the latest Pd, sc, fluxus, etc.

On the other hand, do we really need the very latest version of
"xyzshell" or "libwhatever" when it has little or no influence on the
system? And if "supersoftware2.x" is not maintained by us and is only in
the testing repository, someone could still get access to it via
pinning. This could be easy even for a newbie: if we provide a default
apt/preferences and apt/sources.list that cover the whole
stable/testing/unstable spectrum, then we could default to stable, but
with any tool (apt-get, aptitude, synaptic...) it would be easy to pull
any package. From my experience, aptitude and apt-get (and probably
synaptic too) are usually quite smart at solving dependencies or
proposing solutions. As a matter of fact, I believe most Debian "power
users" (sorry, could not find a better term ;) regularly work like
this: a stable base system, pulling only the stuff they're interested
in testing or want a more updated version of.
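
As a rough sketch of what such a default could look like (the mirror,
suite names and priorities here are just an illustration, nothing is
decided yet):

    # /etc/apt/sources.list (excerpt)
    deb http://ftp.debian.org/debian stable main
    deb http://ftp.debian.org/debian testing main
    deb http://ftp.debian.org/debian unstable main

    # /etc/apt/preferences
    Package: *
    Pin: release a=stable
    Pin-Priority: 700

    Package: *
    Pin: release a=testing
    Pin-Priority: 650

    Package: *
    Pin: release a=unstable
    Pin-Priority: 600

With something like this the system tracks stable by default, but
pulling a single package from testing is just:

    apt-get install -t testing supersoftware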

Could you name or detail one situation where you would need a *full*
testing system? Similarly, can you think of any software that would be a
problem not to have at the latest version all the time? Or is it just
the fact that using testing gives you updates on everything at once,
without having to do individual updates?

a.

PS: just to be clear, these are all open discussions; we haven't decided
yet what the right thing to do will be in the end, and any help or
comment on this thread really helps the project's future a lot :)
Also, it is very likely that our next sprint will happen sometime in
July, maybe with a mini sprint before; it would be good to have resolved
this issue by then!


 
> Loz
> 
> ---
> [email protected]
> irc.goto10.org #pure:dyne
> 

---
[email protected]
irc.goto10.org #pure:dyne
