Re: Downgrading packages after removing a repository

2009-08-05 Thread Andrew Sayers
Michael Bienia wrote:
snip
 Independent of how hard you make it to break an installation, there will
 be someone who managed to break it nonetheless and expects you to
 unbreak it. And at the same time you will annoy experienced users who
 know what they are doing.

I don't follow this part.  Could you explain how the proposed solution 
(warnings on upgrade, warnings on downgrade) increases the number of 
people who expect Ubuntu to unbreak their system, or the number of 
annoyed experienced users?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Downgrading packages after removing a repository

2009-08-05 Thread Andrew Sayers
Michael Bienia wrote:
 On 2009-08-05 12:55:04 +0100, Andrew Sayers wrote:
 Michael Bienia wrote:
 snip
 Independent of how hard you make it to break an installation, there will
 be someone who managed to break it nonetheless and expects you to
 unbreak it. And at the same time you will annoy experienced users who
 know what they are doing.
 I don't follow this part.  Could you explain how the proposed solution 
 (warnings on upgrade, warnings on downgrade) increases the number of 
 people who expect Ubuntu to unbreak their system, or the number of 
 annoyed experienced users?
 
 It was related to the part where you commented that if Ubuntu makes it
 possible to break installations (adding 3rd party repositories) it
 should also provide tools to unbreak it again.
 But perhaps I misunderstood it or read too much into it that if Ubuntu
 doesn't provide these tools, it shouldn't provide options to shoot
 oneself in the foot (possibly break an installation).
 
 Adding additional warnings might help if done right, but I currently see
 some problems I don't see a solution for:
 - you certainly won't warn on every upgrade, else the warning is useless
   (when you warn about every possible upgrade the warning gets ignored:
   the last 20 upgrades went fine, why should this one break things?)
 - how to identify repositories to warn about updates from?
 - you can't rely on apt (or dpkg) for the warning (how should it know
   which updates are safe and which not) and you can't rely on the
   packages themselves either (which 3rd party package will contain a warning
   that it might break user data on downgrades?)
 
 Michael
 

These are very good points - how about warning when upgrading to a 
version with a different Origin, Label, Suite, Version, or Codename line 
in the Release file, or when enabling downgrades?

For example:

 These packages will be upgraded from repositories that are
 incompatible with their current repositories:

  * libfoo
    * old origin: Ubuntu
    * new origin: LP-PPA-andrew-bugs-launchpad-net
  * gbar
    * old origin: LP-PPA-andrew-bugs-launchpad-net
    * new origin: Evilsoft Inc.
    * old label: Ubuntu
    * new label: EvilWare
  * kqux
    * old suite: jaunty
    * new suite: karmic
    * old codename: jaunty
    * new codename: karmic
    * old version: 9.04
    * new version: 9.10

 WARNING: Incompatible repositories are not supported.
 Incompatible repositories can cause data loss, can make programs
 unable to run, and can even make your computer unable to boot.

And:

 WARNING: downgrades are not supported.

 Downgrading packages can cause data loss, can make programs
 unable to run, and can even make your computer unable to boot.

On the command-line, I would suggest printing the upgrade warning 
whenever relevant packages are installed or upgraded with apt-get, and 
putting the downgrade warning next to the relevant command-line option 
in the man page.

In a GUI, I would suggest printing the upgrade warning whenever packages 
are installed or upgraded in any way, and printing the downgrade warning 
when the user clicks on some type of "allow downgrades for this session" 
button.
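
For what it's worth, much of the raw data needed for such a warning is 
already visible from the command line.  The script below is only a 
rough sketch of the idea (not the proposed warning itself): it leans on 
standard apt-get/apt-cache output, and the parsing is best-effort:

   #!/bin/sh
   # list the packages an upgrade would touch, then show apt's version
   # table for each, so a human can spot a change of repository between
   # the installed and candidate versions
   apt-get --simulate upgrade 2>/dev/null |
   awk '/^Inst / { print $2 }' |
   while read -r pkg; do
       printf '== %s ==\n' "$pkg"
       apt-cache policy "$pkg"
   done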

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Downgrading packages after removing a repository

2009-08-04 Thread Andrew Sayers
You make a good point about breakage when packages are downgraded.  But 
it seems a little disingenuous for us to bend over backwards to make 
unsupported upgrades possible (adding a software sources menu item, 
putting PPAs in Launchpad, creating /etc/apt/sources.list.d/ and so on), 
then to walk away when those upgrades make systems unusable.

I also take your point that pain is an important way of communicating 
danger to users.  But making a system unusable seems like pushing a man 
off a cliff to warn him about the dangers of falling.

I would expect the message to be at least as effective if we had a GUI 
to say "warning: may cause breakage on upgrade", "warning: breakage 
caused on downgrade", and breakage evident from the loss of 
configuration data.  If that's acceptable to you, and if C's blueprint 
idea seems okay, then I'll include the GUI suggestions in the blueprint.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Downgrading packages after removing a repository

2009-08-01 Thread Andrew Sayers
I've found a bug (or maybe it's a feature request) in apt (or maybe it's 
in software-properties-gtk).  I'd like to get people's opinions about 
where this is best reported, and what the report should say.

When you add a repository to your computer, then remove that repository, 
it's not obvious how to downgrade packages that are no longer available.

Normally this is a minor irritant, but it can be a security issue, or 
can even make recovery very hard indeed.  Here are three user stories to 
illustrate the issue:


Anna added a PPA through Synaptic > Settings > Repositories, which 
upgraded emacs.  She didn't like the upgraded version, so she removed 
the repository.  She scrambled around for a while, before realising she 
could get her old emacs back by removing it then reinstalling.

Tim added a repository from a random website through System > Admin > 
Software Sources, then updated and was notified that a new version of 
debconf was available.  He installed the upgrade, then realised that the 
upgrade had been downloaded from the new repository.  Realising he'd 
been tricked, he removed the new repository and assumed that debconf had 
been uninstalled as well.

Bob, thinking that a Debian-based distribution should be okay with 
Debian packages, followed command-line instructions to create 
/etc/apt/sources.list.d/debian-unstable.  Once his Ubuntu/Debian hybrid 
was installed, he rang his technical friend to clear up the mess.  The 
friend tried every apt-get command he knew, before gradually realising 
that he had to run `apt-cache showpkg <name>`, find the package version, 
do `apt-get install <name>=<ubuntu version>`, and repeat many, many times.
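
As an aside, the manual loop Bob's friend ended up running looks roughly 
like this - a sketch with a placeholder package name and version, 
assuming the Ubuntu version is still listed in apt's version table:

   apt-cache showpkg libfoo                   # list every version apt knows about
   apt-cache policy libfoo                    # see which repository provides which
   sudo apt-get install libfoo=1.0-1ubuntu1   # force the Ubuntu version back in

...and then the same again for every other affected package.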


Ideally, I would like well-advertised command-line and GUI options that 
can downgrade packages to the latest downloadable version.  Something 
like this for example:

1) Add a --ignore-status option to apt-get, which forces it to ignore 
package versions listed in /var/lib/dpkg/status.  This would let `sudo 
apt-get --ignore-status install ubuntu-desktop` clear up almost any problem.

2) When apt-get update deletes a file in /var/lib/apt/lists/, print a 
warning for every installed package that's just become non-downloadable, 
something like "the latest version of <package> is no longer 
downloadable.  You may want to run `apt-get --ignore-status install 
<package>`".

3) Provide similar functionality to (1) and (2) through synaptic

4) Provide similar functionality through AppCenter

Would you find this too intrusive?  Not intrusive enough?  Should I 
forget about Synaptic now that AppCenter is coming along, or should I 
focus on getting functionality into APT that can later be made available 
through the GUI?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: The google custom search - perhaps it went there by itself?

2009-07-27 Thread Andrew Sayers
Vincenzo Ciancia wrote:
 The fact that there is nobody willing to reply (I posted a similar 
 message one year ago, so this is certainly not a matter of time) can 
 mean only two things:
 
 1) on this list, nobody knows the answer. I think this is likely. Then, 
 the custom search should be removed from firefox. Nobody knows why it is 
 there.
 
 2) someone knows, but they are ashamed to tell the truth. This is likely 
 too. If you just want to adopt a marketing strategy, damnit, just admit 
 it. The majority of users including myself will continue to use ubuntu. 
 But why silence? This is really worrying.

Hi Vincenzo,

With all due respect, I think you're missing a more likely possibility:

Developers look at the level of bitterness in this thread, and at the 
ratio of people that have been attacked vs. people that have been 
treated respectfully, do a quick cost:benefit analysis of posting, and 
decide that they're better off keeping quiet.

I understand that you're annoyed because you feel an issue that's 
important to you has been ignored for a long time, but venting your 
feelings to the list is never productive.  You are more likely to get 
useful responses if you focus on constructive areas of debate, and 
ignore irrelevant sub-threads wherever possible.  Doing so will make 
your thread look more attractive to potential posters.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Reporting usability problems: please be more tolerant when you triage bugs!

2009-07-23 Thread Andrew Sayers
Hi Vincenzo,

You might have more luck if you describe your changes as feature 
requests.  Whether or not you personally think they're bugs, calling 
them new features should avoid the "it's always been that way" reaction from 
developers.

You might also want to try helping out with the improved "me too" 
blueprint: https://blueprints.launchpad.net/malone/+spec/improved-me-too 
- useful "me too" data would let you argue that behaviour is non-obvious.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Standing in the street trying to hear yourself think

2009-07-09 Thread Andrew Sayers
I think the FAQ/flowchart/chatroom model could work very well in other 
places, but the Signpost is just about pointing people in the right 
direction - providing solutions is outside our modest scope.

You've already got a chatroom in #ubuntu, so the next thing is to start 
writing answers.  I would suggest that interested people clear their 
weekend to trawl through the forums and/or the #ubuntu logs at 
irclogs.ubuntu.com, then write them up on the wiki somewhere.  With 
luck, a hierarchy of questions will become apparent before all that 
answer-writing drives you completely insane :)

I plan to be in #ubuntu-signpost most of the time this week and over the 
weekend, so pop in if you want to talk about any of this.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Standing in the street trying to hear yourself think

2009-07-08 Thread Andrew Sayers
I think the model we're heading towards with the signpost is that the 
wiki page contains questions that have been asked before, while IRC and 
the wiki discussion page are for new questions.

If it works, I think #ubuntu might want to look at the signpost model. 
Being able to click "I have a problem with my hardware - video card - 
NVidia card - unsupported NVidia card" would satisfy a bunch of users 
without needing direct support, and would make it easier to direct 
people towards the level 2 tech support channels.

Done right, a signpost-like model could also ensure that level 2 support 
requests are well formulated.  Leaf nodes for unknown problems might 
look like this:

BEGIN WIKITEXT

=== Modern NVidia card with no known issues ===

Your problem is not covered by this guide. Go to #ubuntu-video and say 
"I have a problem with my modern NVidia card (TYPE).  This card has no 
known issues.  My problem is: PROBLEM."  Make sure to replace TYPE and 
PROBLEM with the type of card you have and the problem you're having 
with it.

END WIKITEXT



About people asking already-answered questions - As I half-suggested in 
another post, I think the second-order problem here is that many 
approaches make it easier to post than to search.  I would recommend 
forums drowning in déjà vu to try putting roadblocks between the user 
and the "send message" button.  Preferably, these roadblocks should be 
in the form of search buttons :)

I also think there's a third-order problem here: developers don't have 
to-the-eyeball strategies for delivering their content.  Expecting users 
to trawl through old posts seems intuitively reasonable, but the 
evidence is that it doesn't work that way.

Here's a nice demonstration that convinced me of the need for software 
to deliver information right into the user's eyeball.  It doesn't work 
unless you actually do it, so please have a go - I promise it's not a trick.

BEGIN DEMONSTRATION

For this demonstration, you'll need a thumb and a digital watch.

Hold your thumb at arm's length and stare at your thumbnail for a 
moment.  Then place your watch over your thumb, such that the seconds 
counter is over your thumbnail.  With your thumb still at arm's length, 
stare at your thumbnail and count the seconds going by.  You should be 
able to count the seconds easily.

Now place your watch to the left of your thumb, such that the seconds 
counter is jammed against the side of your thumbnail.  With your thumb 
at arm's length, stare at your thumb and try to count the seconds going by. 
  You might be able to detect when there's a change, but will be 
completely unable to read the numbers.

90% of the rod and cone cells in your entire eye are pointed at an area 
about the size of your thumbnail.  So information just one thumbwidth 
away from the point you're focussing on is almost impossible to take in.

END DEMONSTRATION

To solve the third-order problem, I recommend putting the above 
demonstration in front of developers' eyeballs :)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Standing in the street trying to hear yourself think

2009-07-03 Thread Andrew Sayers
Evan wrote:
 Launchpad is for bug reporting and tracking, beyond that I have no idea 
 where the actual division of responsibilities lies. Perhaps clarifying 
 that (ex: Wiki is for Howtos only, forums are only for ...) and then 
 providing a meta-support page for each topic would help. So somebody 
 looking for help from Ubuntu gets directed to
 http://support.ubuntu.com/Audio. This would have links such as:
 
 I want to know how to do something (wiki)
 I want to report a bug on the topic (lp)
 I think I have a bug but I'm not sure (lp answers?)
 I have a question or suggestion concerning the future of the topic 
 (devel-discuss)
 None of the above (#ubuntu-signpost)

I'm coming to a similar conclusion from the opposite direction.  A quick 
chat in #ubuntu-signpost led me to start writing this:

https://help.ubuntu.com/community/Signpost

Hopefully people will go to the wiki signpost first, and 
#ubuntu-signpost if they fall off the bottom of the flowchart.  If that 
happens, we can update the signpost with answers from every individual 
question.

It'll probably take me a few days to add all the signage I can think of, 
and it would be great if you could add signs to resources you know about.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Standing in the street trying to hear yourself think

2009-07-02 Thread Andrew Sayers
The Ubuntu community is growing, and as Evan mentioned, our current 
channels of communication can only support a finite rate of messages. 
So there are only two possible solutions: increase the supply of 
meat-bandwidth, or decrease the demand.  Other posts have interesting 
ideas about increasing supply, so I'd like to suggest a way of 
decreasing the demand.  This would involve trying to find higher-order 
issues when someone asks a question - the trail of logic that led them 
(and therefore others) to demand bandwidth in the first place.

As some people might remember, I wrote a survey about noise on the list 
a few months ago.  The response was fairly clear, but didn't demonstrate 
any overwhelming pressure, and my impression was that the list 
coincidentally became less noisy at about the same time.  So I did 
something I rarely do - wrote up my conclusions and shelved them for 
later reference[1].  Those conclusions might be interesting in this 
debate, although they're fairly specific to this list.

Thinking back on the survey, one of the meta learning points is that 
most Ubuntu people get far more exercised by specific cases than by 
statistics.  Another example is the way there's always a clamour to 
improve apport, but popcon's significant opportunities seem to 
provoke a more relaxed attitude amongst would-be beneficiaries.  Though 
we few evidence-zealots keep chipping away, the fact is that getting the 
majority of Ubuntu folk interested in a problem means presenting them 
with an individual they can help.

This suggests a solution I've pulled together from a couple of Joel on 
Software articles[2][3]: when someone comes to you with a problem, first 
fix the presenting problem, then fix the second-order problem that 
caused it, then the third-order problem, and so on back to the original 
source.  Although this significantly increases the amount of work per 
issue, it's more than offset by the reduction in the number of issues.

Here's an example:

An e-mail recently came to the list[4] asking about adding a feature to 
the panel.  This seems to me like an upstream issue, not something for 
the list.

I e-mailed the author suggesting he take the issue up upstream (fixing 
the first-order bug), and politely inquired what led him here.  He said 
that he'd read the text at [5], which suggested feature requests go to 
ubuntu-devel.  Realising that a non-developer probably shouldn't be 
posting to -devel, he decided to post here.  This suggests two 
second-order bug fixes to me:

* The page at [5] should suggest -devel-discuss rather than -devel
* The page at [5] should talk about filing feature requests upstream

Fixing these second-order issues would shut down a whole category of 
misdirected posts, but we can still look at a third-order bug: people 
don't know where to post messages.  A possible fix would be:

* Create #ubuntu-signpost, a place for people to be pointed in the right 
direction

That would be a fairly easy channel to support, and would reduce the 
amount of noise created by a much wider category of issues.

Fixing the first-, second- and third-order bugs would probably reduce 
traffic to the list by about one post every few months, so I would 
expect it to pay for itself in time spent within a year or so.  That's 
not much on its own, but a community-wide effort would soon add up.

So to conclude, my suggestion is that we politely ask people about the 
higher-order issues that lead them to send messages.  These might be bad 
documentation, not enough search, actual program bugs, or any number of 
other things.  Taking extra time to address higher-order issues will 
stop similar issues from occurring again in future, significantly 
reducing demand for bandwidth in the long-run.

Also, I'll be on #ubuntu-signpost later if anyone wants to join me, but 
right now it's way past my bedtime :)

- Andrew

[1]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2009-February/007122.html

[2]http://www.joelonsoftware.com/articles/customerservice.html
(section 1, "fix everything two ways")

[3]http://www.joelonsoftware.com/items/2008/01/22.html

[4]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2009-July/008969.html

[5]http://www.ubuntu.com/community/reportproblem

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Fallback plan if Empathy isn't ready for Karmic?

2009-06-24 Thread Andrew Sayers
Martin Pitt wrote:
 Andrew Sayers [2009-06-22 19:04 +0100]:
 There's currently a big push to make Empathy the default IM client in 
 Karmic, even though the version in Jaunty still has grave issues - for 
 example, MSN doesn't work at all for me[1].
 
 We don't propose to retroactively default to it in Jaunty. :-)

Hi Martin,

My point here wasn't clear - Empathy hasn't had a full round of testing 
in a stable version of Ubuntu, because the version in Jaunty isn't 
usable as a day-to-day IM client.  Discussions in this thread and the 
guidelines thread show that it's not even going to have a full round 
of testing in an alpha/beta, because the version in Karmic also isn't 
usable as a day-to-day IM client.

I'm glad to hear that upstream is so active, but a usable version will 
have to land very soon in order for the small band of Karmic testers to 
give it enough use to clear out all the bugs that will come from a large 
number of Ubuntu newbies.  I'll ask again nearer the time about how 
things are progressing.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Provide a GUI option in the installer to enable popcon

2009-06-24 Thread Andrew Sayers
Matthew Paul Thomas wrote:
 Andrew Sayers wrote on 23/06/09 17:55:
 
 We're constantly trying to make the Ubuntu installation process simpler.
 And explaining the Popularity Contest in an understandable way, in the
 installer, which is completely out of context, would be quite difficult.
 
 As part of the AppCenter design work, I hope to make the popcon option
 more prominent in context.

That's a good point, and AppCenter looks very interesting indeed.  I 
look forward to playing with it!

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Empathy is not in line with the much discussed guidelines

2009-06-23 Thread Andrew Sayers
Nicolò Chieffo wrote:
 you should use git master before giving points ;)

Could you give us some idea of when a testable version will land in 
Karmic?  We've got two months left until the final decision on whether 
this becomes as significant a part of Ubuntu as Firefox or OpenOffice, 
so it would be nice to have more than a few weeks of testing by people 
who don't feel like compiling their IM client from source every day.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Provide a GUI option in the installer to enable popcon

2009-06-23 Thread Andrew Sayers
This argument must have been had before, but recent events have prompted 
me to suggest it anyway:

The "submit statistical information" page in System >
Administration > Software Sources > Statistics should be
presented during the installation process.  The box should
be checked by default in pre-RC versions of Ubuntu, and
unchecked in stable versions.

This would enable or disable the Ubuntu Popularity Contest 
(popcon.ubuntu.com), which is currently installed but disabled by default.

This would allow us to make decisions based on far better information, 
such as:

* Calculating a precise trade-off between number of users and number
   of bytes, to help decide which programs go on the CD
* Checking whether applications aren't getting bug reports because they
   don't have bugs or because people aren't using them
* Spotting sudden spikes or drops in program usage that suggest a bug
   has been introduced (or give hints as to the seriousness of a bug)
* Testing whether ordinary users are going out of their way (not) to use
   a particular program

It's not currently possible to make strong claims based on popcon data, 
because only a small number of people bother to hunt it out and check 
the box.  This is a self-reinforcing problem: why should I bother if the 
data is worthless anyway?

It strikes me as reasonable to enable this by default in pre-RC versions 
(why are you running alpha/beta software if not to give feedback?), and 
reasonable to at least ask in final versions (even the laziest user is 
then able to make his or her individual contribution).

If just 1% of Ubuntu users tick the box, that gives us enough data to 
improve Ubuntu by justifying our decisions with evidence.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Provide a GUI option in the installer to enable popcon

2009-06-23 Thread Andrew Sayers
Mackenzie Morgan wrote:
 On Tuesday 23 June 2009 1:36:21 pm Siegfried-Angel wrote:
 If I remember correctly there is already such an option in the installer.
 
 I think he wants it to be more prominent, not hidden behind "Advanced", that 
 way it gets more use.

I wasn't aware of the feature, so I'll stand corrected on that point :)

I'll change my original suggestion: either the popularity contest 
should draw a statistically significant number of ordinary users, or it 
should be taken out of the default install.  Otherwise, it's just 
wasting space.

IMHO, the best solution would be to enable popcon by default in 
alpha/beta versions, and as Mackenzie says, make it more prominent for 
everyone else.

This could have a transformative effect on the development process with 
very little work, which I think is worth a single extra question for 
users.  Aside from anything else, the lack of flamewars would make this 
list far more productive.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Provide a GUI option in the installer to enable popcon

2009-06-23 Thread Andrew Sayers
Caroline Ford wrote:
 2009/6/23 Andrew Sayers andrew-ubuntu-de...@pileofstuff.org:

 IMHO, the best solution would be to enable popcon by default in
 alpha/beta versions, and as Mackenzie says, make it more prominent for
 everyone else.

 This could have a transformative effect on the development process with
 very little work, which I think is worth a single extra question for
 users.  Aside from anything else, the lack of flamewars would make this
 list far more productive.
 
 Only if you think that the alpha and betatesting community are
 representative of the wider user base, which they are unlikely to be.

That's a very good point - to get usable data, popcon.ubuntu.com would 
have to offer raw results by version.  For example, we would need to 
download the raw data for Karmic alpha/beta versions, and for the Karmic 
final version.  Then we could subtract one from the other to get 
approximate results for non-beta users.

That would pose some minor privacy issues, and would take a bit more 
work (which I'll happily volunteer to do), but I still think the 
benefits would far outweigh the costs.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Fallback plan if Empathy isn't ready for Karmic?

2009-06-22 Thread Andrew Sayers
There's currently a big push to make Empathy the default IM client in 
Karmic, even though the version in Jaunty still has grave issues - for 
example, MSN doesn't work at all for me[1].

The plan is to make sure that these bugs are all fixed in time for 
Karmic, but what's the backup plan if there are still showstoppers when 
the release starts to get closer?  More precisely, when, where and how 
should people speak up if Empathy still has showstopping bugs?

Obviously it would be undesirable to try retrofitting Pidgin back in 
after the feature freeze, but I could see arguments for requiring 
notification earlier (default apps need more time) or later (Pidgin's 
already been well tested).

- Andrew

[1]https://bugs.launchpad.net/ubuntu/+source/haze/+bug/338891

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: about empathy as the default IM application

2009-06-19 Thread Andrew Sayers
Peteris Krisjanis wrote:
 
 I like this idea. This could be not only limited to Pidgin, but other
 software, like Xchat, for example.
 
 Go ahead, create blueprint for this.
 

With apologies for the delay, please see:

https://blueprints.edge.launchpad.net/ubuntu/+spec/install-programs-from-migration-assistant

Comments and additional applications would be most welcome!

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: about empathy as the default IM application

2009-06-19 Thread Andrew Sayers
Thanks for these links.  Now that you remind me, I do remember a flurry of 
activity in August, which I'd put out of my mind when it didn't get any 
traction.

I'll try to listen in during the next UDS, but it looks like there 
aren't many archives kept around for those of us that want to go in and 
see what happened in the past.  Is it worth asking Canonical to archive 
the IRC logs next time, and to convert the streams to OGG format for 
later downloading?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: about empathy as the default IM application

2009-06-17 Thread Andrew Sayers
I guess my previous message wasn't clear - I'm not making an argument 
here from personal preference, I'm trying to file a bug in Ubuntu 
itself.  Specifically, that dropping Pidgin will cause a regression in 
the user experience for migraters.

I'm also not arguing that migraters are incapable of learning new 
things, just that they shouldn't be asked to learn a new IM program at 
the same time as they're learning where their start menu went.  I would 
have no problem, for example, with asking updaters whether they wanted 
to switch to Empathy.

This decision was made at UDS with no input from, or output to, the 
wider community.  Brainstorm has never heard of Empathy, and I've never 
seen it get more than lukewarm support on this list.  While I agree 
with UDS in general, saying "at UDS it was already decided that Empathy 
would ship with Karmic" (i.e. the decision has already been made for us) 
goes completely against the grain of open source development.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: about empathy as the default IM application

2009-06-17 Thread Andrew Sayers
Hi Dan,

About halfway through this reply, a compromise occurred to me: get 
migration-assistant to install Pidgin if it's detected.  If that works, 
it would get rid of many of the issues I've been complaining about, at 
least for migraters that plan to dual boot.  This post covers some 
underlying issues, as well as problems that might still apply to people 
that (e.g.) get Ubuntu with a new PC.

I'll send another reply to discuss the migration-assistant approach, so 
please hold off until then :)

dan wrote:
 I think we're missing some context.  At least I was.  Correct me if I'm 
 wrong, but what you're trying to say is:
 
 When migrating users from Windows to Ubuntu, you start by migrating them 
 to existing cross platform applications, like Pidgin.  If Pidgin is 
 removed as the default IM application, further training will be needed 
 for the new Ubuntu users.

That's almost what I'm saying, but misses a few crucial points:

On your first day in Linux, you're bombarded by new things.  How do I 
copy+paste?  Why doesn't weird hardware issue work?  Where's my C: 
drive?  What's the equivalent of all the programs I forgot about, but 
now realise I use all the time?  And so on.

IM is one of the programs people use on their first day, and it's one of 
the programs that needs to work before you can get help from a friend 
without paying a phone bill.

So my first point is not just that migraters will need extra training, 
but that they will need to work this out on their own, at a time when 
they're completely overloaded, and liable to fall back on old habits. 
"Old habits" usually means typing the name of a program into Google, 
then clicking the first link they see, or following the first set of 
instructions they can find.  When I've been the friend that migraters 
contact, this has always ended badly.

My second point is that changing in Karmic, rather than (for example) 
making a big fanfare about how we'll change after the next LTS, would be 
unfair on people who have taken the time to plan for this during the 
past year or two.

   ...  I think the 
 difficulty for you is that in Ubuntu we're looking for the best *Ubuntu* 
 experience, not necessarily the best migration experience.  It would be 
 nice if those two interests were 100% aligned, but sadly they are not, 
 in this specific case.

I agree with this, up to a point.  Unless you're planning to forcibly 
uninstall Pidgin when people upgrade, application defaults are only 
relevant to people doing a fresh install.  Migraters are an important 
subset of installers, so their needs should be carefully considered.

As always, this is a balancing act between the desire to create a system 
that people can get into, and the desire to create a system that people 
will like once they have got into it.  I wouldn't complain if the 
argument for Empathy were overwhelming, but all I've heard boils down to:

* it's a bit nicer
* it's a bit better integrated
* it has voice+video support
* it means we can stop dealing with the Pidgin guys
* all the bugs it has will definitely be fixed by release day
* users will file loads of new bugs after release day (!)
* if you don't like it, you can always install Pidgin

Maybe there's a stronger argument and I just haven't heard it, but this 
feels like it's putting the interests of developers ahead of users.  It 
certainly doesn't feel like the sort of urgent issue that's worth 
throwing away months of preparatory work by soon-to-be-migraters.

 Regarding UDS, and the decisions made there, I don't see how those 
 sessions could be any more inclusive.  There is news and blogs and 
 information flowing out of there almost 24x7 for a week.  The session 
 schedules are published.  And there are resources for how you can 
 participate, even if you are not able to attend.  UDS is probably the 
 most democratic, inclusive, open source (to misuse the phrase), 
 developer summit ever.

That's good to hear.  Is there a central point with a list of these 
blogs and news sites?  My experience has been the community going dark 
for a week, so I'd appreciate a bit of light :)

I've not been able to find any discussion of Empathy online before this 
week, and I can't find it in the schedules or the list of discussions. 
Could you point to somewhere that the arguments are laid out?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: about empathy as the default IM application

2009-06-17 Thread Andrew Sayers
As promised, this reply will concentrate on working around problems 
faced by migraters by patching migration-assistant.  I would be willing 
to put programming time into the ideas suggested here.

As I stated in another post, the best Linux migration strategy involves 
two stages: new apps/same OS, then new OS/same apps.  The migration 
assistant automatically configures a few applications based on your old 
settings, which can be extremely useful to migraters.  I propose we get 
migration-assistant to install (equivalents of) the user's old applications.

Installing the user's old apps would give them a personalised 
experience, and would help document the "what's the Linux equivalent of 
application X?" issue.  Because the experience would be tailored to the 
specific user, there's no concern about degrading everyone else's 
experience for the sake of some migraters.

This would mean that migraters don't get the standard Ubuntu 
experience out of the box.  But that strikes me as a valid choice for a 
user to make.

Obviously, Pidgin would be an application that should be installed. 
Other applications I've recently seen a need for include Skype, GMail 
notifier, and whatever the current equivalent of xmms is.

As I mentioned in another message, this wouldn't do anything for 
migraters that get Ubuntu with a new PC.  I personally doubt that many 
people move to Linux with no dual booting period in between, but I would 
be willing to look at providing a similar facility by adapting 
migration-assistant to zip up some files under Windows, then unzip them 
under Linux.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: My Suggestions on ISSUE, MOTD, lsb-base-logging.sh

2009-06-13 Thread Andrew Sayers
I got halfway through a project that would solve broadly the same 
problems as your /etc/issue proposal. It faltered when Launchpad kept 
attaching the blueprint to Drupal and the merge proposal didn't go anywhere.

The would-be blueprint is still available[1], the bug in Launchpad has 
been fixed[2], and the merge proposal is still awaiting review[3].  If 
anyone can suggest a plan to get the merge looked at, the rest should 
only be a few days of work.

- Andrew

[1]https://wiki.ubuntu.com/BlueScreenOfLife
[2]https://bugs.launchpad.net/blueprint/+bug/320889
[3]https://code.launchpad.net/~andrew-bugs-launchpad-net/friendly-recovery/ubuntu/+merge/3642

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: shameful censoring of mono opposition

2009-06-10 Thread Andrew Sayers
Hi Mark,

I think I understand now why you and the list have been butting heads so 
much.  I'd like to present my theory, then explain how you can be more 
productive in advocating to developers.

At a Fortune 500 company, I would expect that advocacy is very political 
- it's important to create (the perception of) a group of winners that 
do what you want, and a group of losers that don't.  Competition, and 
fear of losing out, are very strong incentives for people in those 
organisations - if they didn't want to win, they wouldn't be in the 
Fortune 500.

Among open source developers, advocacy needs to be much more logical - 
it's important to explain how doing what you want achieves the 
developer's goals.  "Scratching an itch" is widely recognised as the 
most common incentive for open source developers, and any talk that 
doesn't help them scratch their personal itch isn't productive.

Telling open source developers that they should want to scratch a 
different itch won't work.  It's like telling people they should be 
attracted to a different gender, or should have a different taste in 
music - you don't get to choose what your interests are.

Talking about winning and losing also won't work.  Open source is 
just coming out of a stage where you had to join the losing team in 
order to get in.  In a few years, you might start to see developers 
appear that wanted to join the winning team, but right now anyone that's 
been around long enough to be really effective is for OSS whether it 
wins or loses.

Finally, creating rifts between groups won't work.  Development is about 
sharing a bad idea around until it becomes good, so people that like to 
blacklist those with bad ideas generally don't become developers.

Put simply, Fortune 500 advocacy is like Fortune 500 business - 
confident, aggressive, and victorious.  OSS developer advocacy is like 
OSS development - methodical, inclusive, and accurate.

I discussed a specific model elsewhere[1] that could be used for 
advocacy.  It boils down to stating your premise, explaining your 
reasoning, then arriving at a conclusion.  I recommend you try it out, 
as it will work much better around here.

- Andrew

[1]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2009-June/008533.html

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: shameful censoring of mono opposition

2009-06-09 Thread Andrew Sayers
Christopher Chan wrote:
 These are about 'standards'. Can there really be a technical argument 
 between using say the metric system versus the foot/yard or the ounce/pound?

Yes:

1) state your technical requirements
2) state the relevant properties of each standard
3) argue about which properties best match which requirements
4) profit

For example:

1) I want a system of measurements that:

* minimises the amount of learning necessary in schools
* is controlled by an international body which represents my interests
* allows easy comparisons between different quantities

2)

Imperial units:
* Older people know Imperial, they can spend the time to teach kids
* Kids who have learnt from (grand)parents needn't learn in school
* Imperial uses multiple words per unit (inch, foot, yard)
* Imperial is controlled by the British government
* Imperial is widely used in places that won't go away (e.g. roads)
* Older people will never know anything but imperial
* You can't make comparisons unless you know what the quantities mean

Metric units:
* Metric only requires knowledge of base 10, except to count time
* Kids already need to learn base 10, and to count time
* Metric uses a single word per unit (metre, litre)
* Metric is controlled by SI
* Because it uses base 10, metric is very easy to compare

3) If we started over from scratch, the benefits of imperial would be 
moot.  But we're not, so moving away from imperial would cause 50+ years 
of difficulty comparing quantities.  But if the move to metric is 
successfully completed, we'll have hundreds or thousands of years of upside.

4) Being an optimist about how much time we've got on this planet, I 
vote metric.  Intelligent people may disagree, especially if they have 
different requirements.  I respect those that disagree, and accept that 
the requirements I've laid out do not necessarily reflect the 
requirements of the Ubuntu project.



Contentious issues can be argued civilly, you just need to be a bit 
careful about it.  When discussing topics that can get religious, please 
consider a premise-reasoning-conclusion model like the above.

 - Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-04 Thread Andrew Sayers
Mike Jones wrote:
 This discussion has gone on long enough that I'm no longer able to tell 
 what we are discussing.

Neal posted a GNOME bug report 
(http://bugzilla.gnome.org/show_bug.cgi?id=554172) which I think sums up 
the issues really well.

I agree with Evan that there are two issues:

a) What should be expressed as powers of 2 vs. powers of 10?
b) What names should we use for powers of 2 and 10?

As to (a):

There are a few places where there is a strong technical reason to 
prefer powers of 2.  For example, memory is designed in such a way that 
you will always have a round number of bytes in base 2, but never in 
base 10.  Everywhere else, there are strong arguments on both sides, 
including for instance:

* 30 years of precedent for base 2 in the computing community
* 300 years of precedent for base 10 in the scientific community
* Interoperability with other systems (e.g. Windows)
* Compliance to relevant standards (e.g. POSIX)

So far as I can tell, everyone has made up their mind about which of 
these issues outweigh which other issues.  Further debate is likely to 
produce lots of heat and little light.

As to (b):

I think the issue can be summarised like this:

As developers of the English language, we get words from the dictionary 
(our upstream provider), and hand them on to users (our downstream 
receiver).

We have agreement that the words are defined upstream as 
kilo=1000/kibi=1024, but do not have agreement on whether a valid bug 
report has been filed by downstream.

This is a serious issue because the English language has a long and 
proud tradition of being modified solely through patches working their 
way upstream.  Some people believe that any attempt to impose words from 
the top is an inappropriate attempt to grab power, which should be 
resisted on principle.  Some people believe this is an especially 
egregious example because the computing community was hardly consulted 
at all, and strongly objected where it was consulted.

My personal opinions:

My understanding is that GNOME shies away from configuration options 
where possible, whereas KDE quite likes them.  As such, I doubt that 
GNOME developers would be willing to make this configurable.  Even if 
they did, you still have to discuss which option is the default.

As I mentioned elsewhere[1], not all standards are equal.  It's 
important that we consider standards seriously, but that doesn't mean 
automatic adoption.

As far as I'm concerned, the most important thing is that the UI uses 
words consistently.  A close second-most important thing is that the UI 
uses words that users can understand and mentally manipulate.

I have no strong opinion right now about whether giga should mean 10^9 
or 2^30, but I do have a strong opinion that ordinary users can't define 
the word at all.

IMHO, the only words that are widely recognised by ordinary users are 
million, billion etc.  As Scott mentioned, the definitions for these 
words vary[2], but I believe this can be managed with localisations.

Since ordinary users don't have any words for large powers of 2, I would 
expect them to have difficulty thinking in base 2 no matter what words 
are used.  Here's a thought experiment: in base 6, try calculating 4 + 
4.  Even if you understand perfectly what I mean, I bet you have to use 
your fingers :)

- Andrew

[1]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2009-June/008376.html
[2]http://en.wikipedia.org/wiki/Long_and_short_scales

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: On apturls and repositories

2009-06-03 Thread Andrew Sayers
Assessment of PPAs sounds to me like peer review.  That would be a big 
job to implement, but IMHO benefits would go far beyond a web of trust. 
  Of course, I'm not volunteering to do it :)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-02 Thread Andrew Sayers
Scott James Remnant wrote:
 On Tue, 2009-06-02 at 08:57 +0200, Martin Pitt wrote:
 
 Max Bowsher [2009-06-01 23:41 +0100]:
 To my mind, the power-of-2 grouping is sufficiently intrinsic to the
 nature of bytes, whilst the kibi mebi gibi tebi stuff not only sounds
 and looks stupid, but loses a great deal of clarity by making all of the
 prefixes differ only in a single syllable.
 As far as I can see, the predominant opinion seems to be to fix 701.2
 MB to be 735.2 MB, not 701.2 MiB.

 Agree.
 
 That way we show a correct figure, and nobody needs learn a new unit.
 
 It also happens to match the standard for storage and bandwidth, and is
 what other operating systems are also tending to use (thus it's more
 likely this is what packaging will use).
 
 Scott
 

This is perhaps a bit heretical, but how about correcting 701.2 MB to 
735.2 million bytes?  Quite apart from the heat produced by the MB/MiB 
debate in the computing community, laypeople only seem to understand 
"mega" and "giga" by mapping them to "million" and "billion".

Using the longer term meets Scott's criteria, makes Ubuntu more 
accessible, and saves us all a bunch of time explaining what a terabyte 
is when our parents start getting them.  Even in a small dialogue box, a 
size of "735.2 mln" doesn't take many more pixels.

- Andrew



-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-02 Thread Andrew Sayers
(snip - using "million" and "mln" rather than "mega" and "MB")

Scott James Remnant wrote:
 ... this is yet another strange postfix or unit that users would
 have to learn.
 
 735.2 MB is not confusing if it means ~735,200,000 bytes.

That's a good point for the short form, so long as the UI spells out 
elsewhere what MB means.

How about using million bytes by default, and MB where there's a 
significant pixel constraint?  That explains things to the user of 
average curiosity, and doesn't require any terminology they haven't used 
before.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-02 Thread Andrew Sayers
Scott James Remnant wrote:
 What would you use for the thousand million bytes case? :)
 
 HINT: the meaning of billion differs between thousand million and
 million million depending on your location.
 
 Most people in the metric-speaking world know what Kilo means, and have
 proved themselves able to learn Mega, Giga, etc.

Billion increasingly means 10^9 - Wikipedia claims[1] that the long 
scale is mostly used in non-English-speaking regions, so it's safe to 
use billion unless the localisation in use says otherwise (at which 
point, locale-specific words are needed anyway).

I agree that people can learn what mega and giga mean, so long as you 
give them the opportunity to learn.  Using million bytes 
interchangeably with MB gives significantly more people that opportunity.

Mega is also a problem because accessibility isn't just about the 
ability to understand something, it's about the amount of mental effort 
required.  Grab a non-technical friend or family member and ask them how 
many million in a billion, then how many kilobytes in a megabyte. 
You'll find they have to think longer and harder to answer the second 
question, if they can do it at all.  I'm not clear what extra value 
mega provides that's worth so many wasted cycles.

- Andrew

[1] http://en.wikipedia.org/wiki/Long_and_short_scales

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-02 Thread Andrew Sayers
That "grab a friend" experiment is one of those posts where my inner 
science geek betrays me.  I chose "kilo" and "mega" instead of "mega" 
and "giga" so that I would be less likely to skew the experiment by 
asking the same exact question twice in a row with different phrasing. 
A more robust methodology would allow for valid comparison between 
"million" and "mega"... I don't suppose you know any identical twins 
with a penchant for answering simple maths questions? ;)

When I asked my father, he understood that a kilobyte was less than a 
megabyte, which was less than a gigabyte.  But he had no idea how much 
less - he would have believed me if I said a gigabyte was 10 or 10,000 
megabytes.

I actually like your MP3 player example by the way - if I told my dad 
that his MP3 player had a capacity of 4 billion bytes, and an average 
MP3 was 4 million bytes, he'd be able to do exactly the calculation you 
described.  With MB and GB, he'll need a pencil and paper no matter how 
many times I explain it.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Current situation of amarok, and of latex tools

2009-05-25 Thread Andrew Sayers
Jan Claeys wrote:
 A lot of people run unstable during alpha & beta, but many do it in a VM
 or on an old spare system.  That doesn't help find regressions that are
 hardware-related, of course, and in general those systems might not see
 the same sort of use that people's main computers see.
 
 And to be honest, I don't see how we can make more people use alpha
 versions on their "I need this for work" system...

Recycling my chroot idea from before, how about encouraging people to 
install Alpha versions in a chroot?  You could use localfs to graft your 
real /home in if you wanted.  A bit of grub trickery would even let you 
boot right into the chroot, with the alpha kernel, when you had enough 
free time to give it a go.
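
For the record, here's a minimal sketch of that setup, using debootstrap 
plus a bind mount to graft /home in; the target directory and mirror are 
placeholders, and the grub trickery is skipped entirely:

   # create a Karmic alpha chroot on the current stable release (sketch only)
   sudo debootstrap karmic /chroots/karmic http://archive.ubuntu.com/ubuntu
   # graft the real /home into the chroot
   sudo mount --bind /home /chroots/karmic/home
   # step inside and try the alpha
   sudo chroot /chroots/karmic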

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Put other releases in a chroot (was Re: Current situation of amarok, and of latex tools)

2009-05-14 Thread Andrew Sayers
Regressions occur in Ubuntu releases.  As mentioned elsewhere, this is 
to be expected, and may be for the best.  But if you've spent 6 months 
getting Intrepid just how you like it, starting over again with Jaunty 
can be a pain.

So how about we offer the user the opportunity to `cp -l /bin /etc /usr 
/lib* /sbin /var /chroots/jaunty` when they upgrade to Karmic? 
Then with a bit of shell trickery, they can run their old version of 
amarok just by running jaunty amarok.  It wouldn't take up that much 
space on a modern hard drive, especially using hard links, and would 
solve a lot of these headaches.

Similarly, how about we offer the user the opportunity to `debootstrap 
--include=ubuntu-desktop jaunty /chroots/jaunty` if they install 
Karmic from scratch?  That gives new users similar access to Jaunty.

Finally, how about upgrading debootstrap in Jaunty when Karmic is 
released, so that people can install it in a chroot, and try before they 
buy?
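
As a rough illustration of the "jaunty amarok" trick, the wrapper could 
be as small as the script below.  This is only a sketch: it assumes the 
old release lives in /chroots/jaunty with /proc, /dev/pts, /home and 
/tmp (for the X socket) bind-mounted inside it, and it glosses over the 
finer points of X authorisation:

   #!/bin/sh
   # hypothetical /usr/local/bin/jaunty - run a command inside the old release
   exec sudo chroot /chroots/jaunty \
        sudo -u "$USER" -H env DISPLAY="$DISPLAY" "$@"

Used as "jaunty amarok", this would start the Jaunty build of amarok 
against the user's normal home directory.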

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: What do you think about the signal:noise ratio? Survey results

2009-02-16 Thread Andrew Sayers
There are a few general points that I thought were worth highlighting:

There is a gulf between the way that developers and non-developers see
the world.  This is reflected in their interests, their speech, and
their approach to issues.  While Ubuntu has many ingenious
technologies to improve developer/user interaction, technological
solutions can only ever have a limited impact on this interpersonal
problem.

While the number of useless posts isn't so bad, we could definitely
stand to increase the number of useful posts to the list.  This list
is an important place for interaction between developers and
non-developers, which isn't currently being used to its full extent.

Few people are currently planning to leave.  Either everyone that's
going to leave has already left, people leave shortly after making
their mind up to leave, or people that complain about noise don't
respond to surveys.

ubuntu-devel-discuss is peanut butter, ubuntu-devel is Marmite:
everyone vaguely likes u-d-d, but either you love u-d or you hate it.
I personally see that as healthy, but it's important to be aware that
u-d isn't open to the public in any more than a technical sense.

Perceptions of signal and noise are more about style than
substance.  People don't really mind what topics you choose to discuss
on here, so long as you're clearly trying to improve the lives of your
fellow Ubuntu users.

I get the general impression that u-d-d provides a lowish quantity of
information to developers - high enough for them to subscribe, but low
enough that they can drift away when targeted by inappropriate
behaviour.




I'd also like to suggest two ways of making the list better:

First, we should write up the section 2 comments as guidance, and post
it somewhere useful.  Perhaps in the charter, on a web page somewhere,
or in an e-mail sent to new subscribers.  Because the guidance shows
that attitude is the key, this should encourage new people to
contribute even if they're not steeped in Unix knowledge, and should
encourage constructive behaviour in those who don't take naturally to
the collaborative approach.

Second, we should write up a page in FAQ format, with headings like "I
am having trouble setting up my computer"/"have found a bug in a
program"/"would like to help improve Ubuntu"/etc. and contents like
"post your message on such-and-such list/forum/IRC channel".  The
guidance currently available (e.g. at
http://www.ubuntu.com/support/CommunitySupport) focuses too much on
listing all the things *we* do, and not enough on answering the
questions *they* have.  A page of concrete examples will do a better
job of avoiding misdirected messages.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


What do you think about the signal:noise ratio? Survey results

2009-02-16 Thread Andrew Sayers
Hi all,

The results are now available for a survey looking at the ratio of
signal to noise on this mailing list.  I'd like to thank the people
that responded, and I hope it kicks off a productive debate.  The
results are available at http://www.pileofstuff.org/ubuntu-survey/

I generally find it's best to read the raw data before people's
opinions, so I'll put my comments in a reply to this post.
Different interpretations of the data are, of course, welcome :)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Fwd: Is disabling ctrl-alt-backspace really such a good idea? - no.

2009-02-13 Thread Andrew Sayers
Fergal Daly wrote:
 Anyway, I'm curious, is this really a developer list? I subscribed
 because it was the only way to _contact_ ubuntu developers and I've
 seen lots of people use it for that. So maybe it has more technical
 users than the average but that's not the same thing as being a
 developer list.

Since we're busy talking about C-A-B right now, I thought I'd delay
posting the results of the signal:noise survey until next week.  Without
wanting to pre-empt that discussion, I think it's fair to say there's a
range of voices around here.

Back on topic, how about this for a compromise solution:

Make tty1 run a simple ncurses application similar to friendly-recovery,
which would give you a set of options like "restart your computer", "log
out", and "go back to your graphical session".  Then people can do
c-a-f1 instead of c-a-b to get 99% of the value without the risk.
Ubuntu users that really want a shell wouldn't be that inconvenienced,
as they can still use c-a-f1 to c-a-f6.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Ctrl-alt-backspace - too late for a blueprint?

2009-02-13 Thread Andrew Sayers
I've written up a blueprint for a potential solution to the c-a-b
problem.  For some reason, Launchpad attaches this blueprint to
loco-drupal no matter what I tell it, so it's available here:

https://blueprints.launchpad.net/loco-drupal/+spec/blue-screen-of-life

If someone else would like to try moving it to somewhere more
appropriate, I can file a bug report saying whether it's just me that's
affected.

This blueprint suggests a fairly simple program.  With some help I could
get it done in time for the feature freeze, but as I'm not so much as a
MOTU, I don't know whether this is impossible for reasons of process.

Could an Ubuntu dev tell me whether it's worth rushing to try and get
this into Jaunty?  Assuming it is, I could do with some people to help
with writing/testing/etc. over the next few days.

The blueprint fleshes out an idea I proposed in another thread.  I've
tried to respond in the blueprint to suggestions made by people that
replied to my original idea.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Thoughts for assisting those with limited bandwidth

2009-02-02 Thread Andrew Sayers
If you just want to disable certain large packages, could you do some
sort of pinning arrangement on them?  You should be able to configure
apt so that it (for example) prefers an older version of OOo to an
updated one, but likes a security fix better still.  See
https://help.ubuntu.com/community/PinningHowto for one of the many
guides about this.  If this works, you might want to consider
auto-generating such a list with a bit of Perl that grabs large files in
/var/cache/apt/archives/.
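
As a rough, untested sketch of what I mean (the package name and suites
are just examples - adjust them for your release), something like this
in /etc/apt/preferences should hold a big package at its current version
while still letting security updates through:

Package: openoffice.org-core
Pin: release a=intrepid-security
Pin-Priority: 990

Package: openoffice.org-core
Pin: release a=intrepid-updates
Pin-Priority: 90

Priorities below 100 lose to the installed version, so ordinary updates
get skipped, while 990 beats everything else, so security uploads still
win.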

Oh, and fill in the survey: http://pileofstuff.org/ubuntu-survey/ :)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Internet-Teenagers and what Ubuntu can do.

2009-01-28 Thread Andrew Sayers
To be honest, I never really understood the focus on technological
solutions to this problem.  The user being monitored will always try to
fight their way out of the box, and will often succeed (e.g. by
downloading a live CD and using that).

When you start locking down every avenue for unauthorised use of a
computer, you very quickly find yourself disabling legitimate uses of
that computer - see the record industry for a classic example of where
these good intentions usually lead.

It seems to me that putting computers in shared spaces (e.g. the family
living room) encourages users to police themselves, as well as letting
everyone spend more time together.

 PS: a) I don't know if this is the right place to discuss this
- noise > signal

Thank you for this opportunity to shamelessly remind everyone to fill in
the signal:noise survey: http://pileofstuff.org/ubuntu-survey/

Everyone is invited to fill the survey in - it's just as important to
fill in the survey if you don't know what counts as signal and noise, or
if you find it a constant irritant.  The survey doesn't take long, is
completely anonymous, and is designed so that you don't need to worry
about whether your answers are right or wrong.  Go to:
http://pileofstuff.org/ubuntu-survey/

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


What do you think about the signal:noise ratio? A survey.

2009-01-23 Thread Andrew Sayers
I have created a survey looking at this list's signal:noise ratio at
http://pileofstuff.org/ubuntu-survey/ - please take a few minutes to
fill it in, so we can better decide how to tackle the issue.

Ubuntu developers tend to complain about the ratio of signal to noise on
the Ubuntu-devel-discuss mailing list - that is, the percentage of posts
that take up their time without helping them to improve Ubuntu. Many
developers have apparently unsubscribed from the list for that reason.
This survey assesses the degree to which that actually occurs, why it
occurs, and what we can do about it.

Thanks to all the people that replied yesterday - the survey should be
more usable and informative as a result of your ideas.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Doing something about signal:noise complaints

2009-01-22 Thread Andrew Sayers
Loïc Martin wrote:
 
 While I'm no Shakespeare, I'm confused with the formulation:
 
 I'm [somehow confident] that other people would consider these
 examples of noise.
 

Good point - I've now changed it to "... consider these to be examples
of noise".  Is that alright?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Doing something about signal:noise complaints

2009-01-22 Thread Andrew Sayers
Martin Owens wrote:
 
 Grumbling developers aren't good, no, but then I've also seen how
 developers treat and think of their users in the most distasteful ways
 on other lists.

I think this gets to an important issue - the survey didn't test the
degree to which people would hold back from posting good posts.  Further
to the change from Loïc's suggestion, I've now changed the
[confident..not confident] questions to [definitely..definitely not].
Do you think that's enough to let people express themselves?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Doing something about signal:noise complaints

2009-01-22 Thread Andrew Sayers
Markus Hitter wrote:
 
snip - developers don't want to hear people complaining

It's an interesting theory, and should be possible to test if enough
developers respond.  We could look at the correlation between people
identifying themselves as developers and identifying bugs, mistakes,
incompatibilities, and differing opinions as examples of noise.  It's an
anonymous survey, and one where good data should help to improve
developers' lives, so there'd be little reason for them to lie.

 You ask how likely it is for the participant to post to the -devel list.
 Isn't the -devel list closed to non-developers, making it not a choice
 for most people?

It's interesting you say that.  Apparently anyone can post, but it's
moderated for non-developers.  I don't subscribe to the list myself, but
I'm told that moderation is usually fairly rapid.  Would you mind
mentioning that you thought it was closed in your survey response
tomorrow?  If it's a popular misconception, better signposting might be
part of the solution.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Doing something about signal:noise complaints

2009-01-22 Thread Andrew Sayers
Thank you for the detailed review.  I've made some changes based on it,
which I'll explain below.

I've made the language in section 1 a bit more subjective.  I'm not sure
whether you meant that the [] boxes should be for numeric input,
but I've left them as drop-down boxes because numbers would make it
a harder question to answer and to analyse, without really helping to
answer any more questions.

In section 2, I appreciate the thought about time, but I have plenty of
free time to trawl the archives at the minute - that's why I'm posting
this now instead of a few months ago.  I've generally tried to reduce
the amount of work involved in filling the form out, in order to
increase the number of people that will bother.  Of course, if you feel
like linking directly to posts, I won't complain :)

I agree about "would read" vs. "would like to read", and have fixed that
now.

The first question in section 3 is kind of interesting, and serves two
purposes.

The first purpose of section 3 question 1 is as a check on my own
sanity: I have previously assumed that developers and non-developers
should mingle on this list, but I have no real proof - maybe people
think that developers shouldn't be on a list for gossiping behind their
backs, or that non-developers have no place in a list for discussing
development ideas.  It would significantly alter the debate if either of
those were the case.

The second purpose of section 3 question 1 is to look at what sort of
mandate we have to change things on the list.  For example, a discussion
based on the survey results might look at moderating just the first post
someone sends to the list.  IMHO, any such rule would drive some
non-developers away - a moot point if the group is happy with that
compromise, but an issue worthy of long discussion if it's seen as
unacceptable.

I've tried to clean up the remainder of section 3 based on your
feedback.  I'd appreciate any comments you have about what it's like now.

I agree with most of your points about section 4 - I must confess the
whole Ubuntu Member thing is a designation I'd forgotten all about.  The
only point where I'd disagree is with unsubscribing.  It's an assumption
to say that most subscribers have been around for a while, so I've added
a question to test that very issue.  I've also changed the wording to
ask whether people are likely to leave within 6 months, to test for
people that are gradually becoming unhappy, but aren't on the brink of
unsubscribing right at this minute.

Finally, I should point out that I've written a page describing some of
the ways I intend to analyse the data.  If you're interested, it's at
http://pileofstuff.org/ubuntu-survey/analyses.html

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Go-OO.org?

2008-12-31 Thread Andrew Sayers
Hi Chris,

Thanks for this - it's good to have some definitive information :)

Do you know why Sun asked for their branding on a product with a
different feature set to their own?  It seems to me like this would
cause confusion (as evidenced by this thread), and would give some
measure of support to patches that Sun don't want in their tree.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Go-OOO.org?

2008-12-30 Thread Andrew Sayers
As I mentioned at the start, my interest in this is rather indirect - I
don't expect to ever write OO.o code, and probably don't understand the
issues as well as people closer to the situation.  As such, I'm
evaluating the arguers more than the argument, and trying to work out
what sorts of things we should support as a community if and when those
in the know suggest them.

snip - lots of people say "boo Sun", few say "yay Sun"
 Political argument.
 
 On that field, are you suggesting +1 for sun's side "This has happened
 before, it's not a disaster, it'll iron itself out"; or -1 for sun's
 side "this happens, but they are handling it particularly bad and
 digging their own grave"?

I'm not really taking a position on any specific argument here, I'm just
pointing out that whatever Sun says about specific complaints against
them, the lack of community members willing to give Sun more than
begrudging support speaks for itself.  To put it another way - although
I can't evaluate the many arguments about bad process, I can tell that
the system is producing bad output, so I know there exists a bug
/somewhere/ that needs fixing.

snip - OO.o as the new XFree86

It's worth remembering that XFree86 had to stagnate for years before the
majority of its core developers were fed up enough to jump, and even
then it took a sizeable straw (license change) to break that particular
camel's back.  Unless Sun drastically cuts the number of devs it has
working on OO.o, I don't see this as a viable option for many years.

I'm personally more taken by the Mozilla analogy.  The Mozilla project
never went away, it just got rebooted.  Spinning OO.o off into a
non-profit organisation would be one way that Sun could reboot OO.o
without forking, although I'm sure there are others.

Snip - ask Sun to pull from Go-oo

I wasn't clear about the details of what I meant by getting Sun to pull
changes from Go-oo.  Asking Sun to pull wouldn't necessarily mean
refusing to sign the JCA, just requiring Sun to convince each individual
developer to sign, and to get Sun to do the (apparently significant)
paperwork necessary to get patches accepted.  As a developer, I'd feel
much more enthused about the process if I got a letter from Sun with a
copy of the JCA, a plain English explanation, and a pre-paid return
envelope, rather than being told to print out a copy and fax it to Santa
Clara.  It might even benefit Sun, as they wouldn't be criticised so
much for failing to put developers on a new patch for months, when
they're all busy ironing bugs out for a new release.

I think we already agree on the most important point though - that this
is an example of a drastic action that can only be taken by a strong
Go-oo project.

snip - applying gentle pressure on Sun to improve their process
 It's kind of all-or-nothing; pressure comes in the form of arguing
 with someone, or publicly criticizing them.  Hints don't work.
 Stepping into that arena makes things rough, because you can't
 maintain good faith; and once you put your foot down on the "it's time
 to fork" or "this fork is simply better than the source" line, you
 can't backpedal, because the whole atmosphere changes.

You may be right, but I'm not yet convinced in an open source setting.
Kernel developers tend to be downright rude in public, laying out why
each others' ideas are stupid, why various distros have terrible
policies, and so on.  But things seem to work out fine there, so I don't
see why OO.o can't be the same.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Go-OOO.org?

2008-12-29 Thread Andrew Sayers
Speaking as someone with a strictly armchair interest in this topic, I'd
like to make a few observations here -

The way (non-Sun) people talk about OO.o reminds me of the way people
used to talk about the pre-Firefox Mozilla project - worthy and
important, but with low developer morale due to an ugly, hostile
codebase.  A certain amount of mud will always get slung at a project of
OO.o's size, and Sun often have valid excuses for the mud that gets
thrown their way, but I've never heard a community member stand up and
defend Sun's behaviour, or give examples of how Sun went the extra mile
to help them out.  That silence speaks more to me than the noise on the
other side.

The evidence seems to be that when Sun's OO.o team makes its mind up,
only action can force them to change it - you can't debate them into a
better solution.  As such, it's important that other players in the OO.o
game have a good set of actions available to them.  Go-oo is one such
action, giving community developers an easier target for adoption of
their code - somewhat analogous to Andrew Morton's branch of Linux.
Go-oo also makes further actions possible - some subtle, some drastic.
Developing a good vocabulary of actions will be important in order to
improve the development process without suffering the upheaval that
would come from an x.org-style fork.

Towards the subtle end of the scale, Go-oo makes it possible to start
referring to the Sun codebase as "Sun's tree" rather than "upstream",
forcing Sun to earn their reputation as the true version of OO.o.
Towards the drastic end of the scale, Go-oo could request that Sun pull
the patches they're interested in, rather than getting patches pushed at
them with whatever extra paperwork they request, putting the cost of
Sun's bureaucracy back on Sun's balance sheet.

So what does this mean for Ubuntu?  Mainly that we need to weigh our
actions not only in terms of what produces the best short-term results
for users, but also whether the message it sends will improve the
process in the long-term.  As Joe said, publicly ditching the upstream
OO.o would send far too negative a message right now.  If Sun continues
to drag its heels, the next move might be to start talking about how
Ubuntu adds value over the basic OO.o, putting some gentle corporate
pressure on Sun to get their act together.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are file permissions in files on external devices silly?

2008-11-22 Thread Andrew Sayers
tchomby wrote:
 On Fri, Nov 21, 2008 at 09:43:16PM +, Sam Tygier wrote:
 The mac os solution is to have a 'enforce permissions on this device' option 
 (in the info/properties of the device). maybe this could be implemented in a 
 similar way to the .is_audio_player file [0].
 
 A checkbox in the properties dialog of the drive. Maybe unchecked by default. 
 Sounds like a good solution to me.
 

Can I suggest that you have a go at implementing this with scripts on
your machine, then write up a blueprint for how to include it?  It
sounds like it shouldn't be too hard with bindfs doing the gruntwork,
and you're more likely to get a positive response if all you're asking
is "please put buttons in these locations that perform these actions".
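
For what it's worth, the gruntwork might look something like this -
untested, and the mount point is just an example:

$ sudo apt-get install bindfs
$ mkdir ~/usb-as-me
$ bindfs -u $(id -u) -g $(id -g) /media/disk ~/usb-as-me

i.e. re-present the device somewhere else with everything owned by the
current user, which is more or less what the proposed checkbox would
toggle.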

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are file permissions in files on external devices silly?

2008-11-21 Thread Andrew Sayers
I wouldn't normally comment, but I think it's important to put some
perspective to the earlier argument about users creating noise on the list.

When someone with an @ubuntu.com address responds to a reasonable
question with a flame like this, without actually suggesting a solution
to the important use case presented, then signal-generating users get
the message that they're not welcome here.  In the end, you wind up with
a list full of noise-generators and curmudgeons like me.

To address the actual point, security of files on removable media can
only be handled at the hardware level, by making sure bad people don't
steal your disks.  Bad guys can be assumed to have root access to at
least one box that they can plug a drive into, so complex permissions
systems on removable media serve only to frustrate ordinary users.

The computing world has historically used FAT(32) as the standard
cross-platform filesystem, which has the advantage of not supporting
user/group permissions, thereby requiring FS developers to set the
UID/GID to some sensible default.  However, FAT has the disadvantage of
being generally inappropriate to the modern age, so this sort of issue
is likely to become more common in future.  Why not request that uid=blah
and gid=blah be supported by ext2/3, as they are for vfat?
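
For comparison, this is roughly what already works for vfat today; the
second line is purely hypothetical and shown only to illustrate what
I'm requesting - ext3 currently rejects these options:

$ sudo mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/sdb1 /media/usbstick
$ sudo mount -t ext3 -o uid=1000,gid=1000 /dev/sdb1 /media/usbstick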

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Do you really want developers to be on this list was (Re: Very bad status of hardware (especially wifi) support in ubuntu, due to the too many accumulated regressions)

2008-11-13 Thread Andrew Sayers
Sarah - this should make sense on its own, but it builds on an idea I
suggested in
https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2008-November/006250.html

which might provide a little background to this post.

 3) There are plenty of other hardware regressions by which I am affected
 and I feel like these should be a bit more acknowledged by developers.
 Because I can't be the only one.
 
 What I'd like to raise - how does one write such a database, when there
 is no clear-cut answer on whether this card, with this driver, works?

Since we're talking about regressions here, one solution would be to
make downgrading as easy as upgrading, and to request an optional
hardware profile immediately before a user up/downgrades.  Spotting
problematic hardware then becomes a relatively simple statistical
problem: when a user gives their hardware profile ready for an upgrade,
they can be informed "you have device X; users with device X were n%
more likely than average to downgrade.  Are you sure you want to continue?".

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Do you really want developers to be on this list was (Re: Very bad status of hardware (especially wifi) support in ubuntu, due to the too many accumulated regressions)

2008-11-13 Thread Andrew Sayers
Stephan Hermann wrote:
 
 On Thu, 2008-11-13 at 11:56 +0100, Markus Hitter wrote:

   - Allow downgrades. This should help narrowing potential causes of  
 the trouble.
 
 This is something I don't understand.
 When I upgrade to a new release, I always think (or is it knowing): Ok,
 for the next 4 hours I'll sit in front of this computer, and I expect
 something to break...because it's software made by people. If nothing
 breaks, then I'm really surprised and happy. But when something breaks,
 I already expected that. And when I find the cause for the breakage,
 I'll try to fix it, AND/OR file a bug report about that issue. 

That's commendable practice, but the problem in Vincenzo's case was a
hardware regression that would require upstream developer time in order
to write a fix.  An easy downgrade path would give users in that
situation the opportunity to use a system that works while they're
waiting.  It also gives a communication channel to users that aren't
technical enough to describe hardware problems - if we log hardware
profiles when users up/downgrade, we can see which profiles correlate
most strongly with downgrades, and use that to help guess which bug
reports are one guy with a dodgy graphics card, and which are something
more general.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Confusing namespace for git in Intrepid repository

2008-10-22 Thread Andrew Sayers
Scott Kitterman wrote:
 On Wednesday 22 October 2008 09:07, Ethan Baldridge wrote:
snip - package git should install Git, not GIT
 
 You would think that, but the package git predates the existence of Git the 
 DVCS, so until it's removed/renamed, no.
 
 Scott K

How about changing the package's description so that the acronym for GNU
Interactive Tools is always GIT (all caps), and adding a paragraph
description like:

Note: the GNU Interactive Tools (GIT) are not related to the Git version
control system, which is available in the git-core package.

That would at least give people a fighting chance to work out what's
going on.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


E-mail file formats [was Re: Configuration masquerading Data]

2008-09-18 Thread Andrew Sayers
This is a bit OT, but you can get e-mails out of just about anything by
apt-getting an IMAP server and uploading your old mail from your client
of anti-choice.  It's not exactly a newbie-friendly solution, but it's
how I rescued my e-mails from Outlook back in the day.
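
Roughly speaking (untested, and any IMAP server will do):

$ sudo apt-get install dovecot-imapd

then point the old client at localhost over IMAP, drag the folders
across, and the messages end up on disk where any new client can pick
them up.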

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Configuration Validation

2008-09-13 Thread Andrew Sayers
I like the idea of a FUSE interface to GConf, and I could see extending
the idea to some sort of configFS - I seem to remember the ReiserFS guys
talking about a similar idea years ago, before recent events overtook
them.  I think an interface that involves opening
~/.configfs/myproject/version1/number_of_frobnitzem would be very
attractive to developers of small projects, eager to avoid the pain of
maintaining a parser.  It also has the advantage of degrading gracefully
- if configFS isn't installed, it just creates a directory hierarchy to
store a program's data.

A FUSE-based solution for small projects strikes me as the most
effective use of time in the short-term, because you can get some actual
evidence about the problem domain as you build the program.
Volunteering to convert sendmail.cf to XML isn't something you want to
do right away :)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Configuration Validation

2008-09-12 Thread Andrew Sayers
Could you spell out some specific issues that this would solve?  For
example, are you looking to avoid two packages overwriting each other's
files in ~/?  If so, can you give an example of that happening?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Configuration Validation

2008-09-12 Thread Andrew Sayers
I think we've all had that idea at one time or another, but sadly it's
based on a misunderstanding of how the community works.

Steve Jobs stood atop the mountain and commanded that Mac developers
jump to plists, and everyone jumped because that's how Apple development
works.  If Mark Shuttleworth stood atop the mountain and commanded Linux
devs do the same, people would listen politely then carry on as before.
 Linux developers are very individualistic in that way, which has
benefits and drawbacks.  You've identified some of the drawbacks, and I
should imagine others will reply stating some benefits soon enough :)

Projects like GConf tackle this issue in a more Linuxy way: write a
tool, then convince people that they'll get more value for less effort
by using it.  If you're really motivated to work on this problem, I
suggest you talk to them about what you can do to help out.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: feedback on new wiki theme

2008-09-07 Thread Andrew Sayers
I don't like fixed width pages personally (my computer should do what I
want, not what someone else wants me to want), but you can appease more
reasonable people by changing "width: NNNpx" statements to "width: NNem"
or "max-width: NNem".

Using "em" rather than "px" fixes the width at a certain number of times
the height of the element's font, so you should always get the same
number of characters on a line, no matter the user's font size.

Using "max-width" instead of "width" makes life easier for people who
have particularly narrow screens (or large fonts), but will give you
problems with IE.  Then again everything gives you problems with IE, so
I generally code to the standard then add IE-specific CSS by adding
something like:

<!--[if lt IE 7]>
<link href="ie6.css" type="text/css" rel="stylesheet">
<![endif]-->
<!--[if lte IE 7]>
<link href="ie7.css" type="text/css" rel="stylesheet"/>
<![endif]-->

This gives you an ie6.css file that's rendered in all IEs before IE7,
and an ie7.css that's rendered in all IEs before IE8.  IE8 allegedly
won't need its own file of special cases, but it should be obvious how
to extend the technique if you find that's not the case.  This technique
is based on that recommended by the IE team for including IE-specific CSS:

http://blogs.msdn.com/ie/archive/2005/10/12/480242.aspx

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Minutes from the Technical Board, 2008-07-15

2008-08-19 Thread Andrew Sayers
I think there's an elephant in this room - why are we running fsck at all?

a) If it's to detect corruption due to software errors, fsck should be
linked up to apport, and reported (semi-)automatically.
b) If it's to check for dying hardware[1], it can be disabled for all
but the oldest hard drives[2], and even then is better replaced with a
badblocks check run while booting continues
c) If it's to guard against bit-flipping caused by cosmic rays and other
weirdness[3], snapshot-based solutions discussed earlier would be more
appropriate, because the most vulnerable drives are huge/highly active
ones that live on servers that never get rebooted.

The nearest to a definitive statement that I've been able to find is
from the tune2fs man page.  The following is from the text for the -c
option:

Bad disk drives, cables, memory, and kernel bugs could all
corrupt a filesystem without marking the filesystem dirty or
in error.

(A similar message is included in the text of the -i option)

This seems to cover all the above alternatives.  Given that any solution
that wants to make it into Intrepid has to be feature-complete by the
28th, how about doing 'fsck ... | tee /var/tmp/fsck.log || mv
/var/tmp/fsck.log /var/cache/apport.log' in checkfs, then getting apport to
pick up any logs and ask to report them in the usual way?  Then we'll
have better data to make a decision with for Intrepid+1.
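
In slightly more detail, I'm imagining something along these lines in
checkfs (the paths are illustrative, and the apport hand-off would be
new work rather than an existing interface):

LOG=/var/tmp/fsck.log
if ! fsck -A -a > "$LOG" 2>&1
then
    # leave the log somewhere apport could be taught to pick it up
    mv "$LOG" /var/cache/fsck-failed.log
fi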

- Andrew

[1]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2007-October/001843.html
[2]https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2007-October/001856.html
[3]http://kerneltrap.org/Linux/Data_Errors_During_Drive_Communication

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Automatic fsck

2008-08-17 Thread Andrew Sayers
I've been trying out a script (attached) for the last few days, that
does something similar to the idea in my previous comment.  It's a shell
script that can be put in cron.daily and/or called from an @reboot cron
job.  The script checks each of your LVM-based filesystems in turn, and
won't start a new check if it's been going for more than 10 minutes.

The short version of the story is that fsck'ing a snapshot of a live
filesystem is possible, but we might want to get at least a little input
from LVM or FS developers first.

The main problem with this script is that it trips over on temporary
files.  It's common for programs (via mkstemp(), I think) to create a
temporary file, open it, then delete it.  The inode that was previously
associated with the file continues to exist so long as a file descriptor
to it remains open, but when a snapshot of the filesystem is created,
the inodes are never removed, so they become orphans.  fsck notices this
minor problem in the snapshot and flags the filesystem as needing to be
checked.

Steps to repeat this problem:

$ sudo /etc/init.d/mysql start # creates temporary files on my system
$ sudo lvcreate -L1024M -s /dev/your-volgroup/your-root-device
$ sudo fsck -v -n -f /dev/your-volgroup/lvol0
$ sudo lvremove /dev/your-volgroup/lvol0

fsck should complain about orphaned files.  I get this:

$ sudo fsck -v -n -f /dev/nautilus/lvol0
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 180229 has zero dtime.  Fix? no

Inodes that were part of a corrupted orphan linked list found.  Fix? no

Inode 180230 was part of the orphaned inode list.  IGNORED.
Inode 180231 was part of the orphaned inode list.  IGNORED.
Inode 180232 was part of the orphaned inode list.  IGNORED.
Inode 180233 was part of the orphaned inode list.  IGNORED.
Inode 180251 was part of the orphaned inode list.  IGNORED.
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Inode bitmap differences:  -(180229--180233) -180251
Fix? no


root: ** WARNING: Filesystem still has errors **


   23381 inodes used (8.92%)
 518 non-contiguous inodes (2.2%)
 # of inodes with ind/dind/tind blocks: 2563/15/0
  211424 blocks used (40.33%)
   0 bad blocks
   1 large file

   13390 regular files
2902 directories
1258 character device files
4553 block device files
   1 fifo
  16 links
1216 symbolic links (1137 fast symbolic links)
  46 sockets

   23382 files


To my untrained eye, it looks like this could be argued to be a bug in
ext2 or LVM (because they're not deleting inodes properly), or a bug in
fsck (because it doesn't have an "errors remain, but who cares?" return
code).  Alternatively, it could be argued that the fsck script I've
written should parse the output of fsck and decide which filesystem
errors are really important.

I've gone as far as I can go with this idea - if someone with more of a
clue is interested, could you suggest the best way of solving this issue?

- Andrew
#!/bin/sh
# Check filesystems without rebooting, using LVM
# Andrew Sayers, 14 August 2008
# [EMAIL PROTECTED]
#
# This script aims to be FS-agnostic, although it currently calls tune2fs in
# two places, to reset the mount-count and check-time.

# What to tell the user if an error occurs
TITLE="Filesystem problem detected"
MESSAGE="Your hard disk has a problem,
Please reboot your system to fix it"


check_filesystem() {
    # (I think) LVM escapes dashes in volume names by doubling them (--)
    # The following gets the volume group, even if it has --s in it
    export VOLDEV=$1
    export VOLGROUP=$(echo $VOLDEV | sed -e 's/^\(\(\(\([^-]*\)--\)*\)[^-]*\)-\([^-].*\)/\1/' -e 's/--/-/g')
    export VOLUME=$(  echo $VOLDEV | sed -e 's/^\(\(\(\([^-]*\)--\)*\)[^-]*\)-\([^-].*\)/\5/' -e 's/--/-/g')

    export BACKUP=$(lvcreate -L1024M -s /dev/$VOLGROUP/$VOLUME | cut -d\" -f2)
    if ERRORS=$(fsck -v -n -f /dev/$VOLGROUP/$BACKUP 2>&1)
    then
        tune2fs -T now -C 0 /dev/mapper/$VOLDEV > /dev/null
        lvremove -f /dev/$VOLGROUP/$BACKUP > /dev/null

        # Note: in the success case, success isn't reported until after tune2fs has completed
        # (in case tune2fs fails)
        touch /var/cache/fsck/$VOLDEV
        logger -p cron.info "snapshot fsck of \"/dev/$VOLGROUP/$VOLUME\" reported a healthy filesystem"
    else
        RETURN_VALUE=$?

        # TODO: check whether $BACKUP has gone away (due to too much FS activity), and handle that somehow
        # TODO: write a co-operating GUI app to handle messages something like:
        # notify-send -u critical -t 6000 --category=device.error "$TITLE" "$MESSAGE"
        # TODO: automatically remove $BACKUP after reboot

Re: Automatic fsck

2008-08-13 Thread Andrew Sayers
Phillip Susi wrote:
 The snapshot was never mounted in the first place, so there is no need
 to unmount it.
 
 As you mentioned before however, any files changed since the snapshot
 was made will be lost when you reboot and merge the snapshot back to the
 main volume.
 

Either I'm not making myself clear or my lack of kernel mojo is showing.

The test I did previously was on a running filesystem, while it was in
use - not during boot-up as we'd previously discussed.  It seems to me
that a literal snapshot, capturing a random moment in the life of a
filesystem, would be treated by e2fsck as not having been cleanly
unmounted.  Since e2fsck didn't complain about anything like that, some
part of the snapshotting process must cleanly unmount the filesystem.
This means that one can take a snapshot while the system is running,
fsck the snapshot, and (if the snapshot is error-free) conclude that the
main volume doesn't need an automatic check for another few months/reboots.

So building on Lars' idea, I propose:

At boot-time:

1. kernel mounts root filesystem read-only
2. init scripts check which filesystems have passed their max mount
   count/interval
   2a. init scripts snapshot those filesystems
   2b. the main volumes for those systems are mounted
   (without being checked)
3. init scripts continue as normal
4. fsck starts on each snapshot, preferably with ionice -c3
5. if a snapshot is found to be clean,
   5a. the main volume has its mount-count/check-time reset
   5b. the snapshot is destroyed
   5c. the user is /not/ informed
6. if a snapshot is found to have problems,
   6a. the main volume is marked to be fsck'd
   6b. the snapshot is destroyed
   6c. the user is asked to reboot

In /etc/cron.weekly/fsck:

1. pick a device in /dev/mapper (i.e. only check one device per week)
2. snapshot that volume
3. continue from (4), above
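
The cron.weekly half might look roughly like this - untested, and it
assumes the check_filesystem function from the script I posted earlier
is available; it also just picks a volume at random rather than doing
anything clever about rotation:

#!/bin/sh
# /etc/cron.weekly/fsck - check one LVM volume per week
DEV=$(ls /dev/mapper | grep -v '^control$' | sort -R | head -n1)
[ -n "$DEV" ] && check_filesystem "$DEV"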

Since merging is currently in beta, and probably a daft idea in this
context, it's better to roll out something more practical, and think
about being more audacious next time.

Remembering my aforementioned lack of kernel mojo, the biggest problems
I can see with this approach are that it requires Ubiquity to do LVM by
default and to keep a significant chunk of the drive free for snapshots
(off the top of my head, I'd say 1-5% of total disk space).

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Automatic fsck

2008-08-13 Thread Andrew Sayers
Alexander Jones wrote:
 PLEASE redirect your efforts towards online fscking. This whole idea
 is absolutely horrible.

How so?

- Andrew


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Call for testing empathy

2008-08-12 Thread Andrew Sayers
Xavier Claessens wrote:
 What do you mean by merging? The code is totally different, it's
 impossible to merge together.
 
 Xavier Claessens

I'm talking more about merging the projects than the codebases - finding
a way that you can all work on a single project that would satisfy users
and developers of both.  That would mean working out the unique selling
points from both projects, and finding a way of developing an
application that has the best of both.  For example, the ability to use
(or just import) configuration files from older versions of Ekiga and
Empathy, the best user interface elements of both,  automatic creation
of an ekiga.net account, and so on.  I accept that you'd need to throw
away a significant chunk of code from both projects, but to be honest,
my experience has been that rewriting code isn't too time-consuming once
you've made all the little decisions about how the program needs to work.

Ego-wise, it would probably also mean picking a new name for the joint
project, because otherwise one project's members feel like they've been
gobbled up by another project.  You could use that to your advantage
though - calling the joint project something like GNOME instant
messaging would give the impression of an official part of GNOME,
integrated with the wider project.  That would help you sell other GNOME
folk on using the library in their own apps.

I assume that you're planning to replace Ekiga as the default GNOME
voice/video client in the long-run anyway, so it seems worthwhile to go
through the pain of merging the projects now, rather than duplicate each
other's work until you're ready to have a bitter fight on a mailing list
somewhere about which project lives and which dies.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Automatic fsck

2008-08-12 Thread Andrew Sayers
Matt Zimmerman wrote:
 The LVM solution isn't viable anyway; there's no guarantee that the metadata
 on disk is in any way consistent while the filesystem is mounted.  The
 problem in your test isn't only that the filesystem is changing from
 underneath it, it's also that it may not have been consistent in the first
 place.

How about making a snapshot when the system boots up (before it's
mounted), then fsck'ing the snapshot?  I don't know enough about LVM to
say whether snapshots can be written back to the main partition, but if
so, it would be possible to fix the broken partition from the GUI, then
reboot (or just remount the filesystem).  The downside of that is that
any recent changes would be lost, but the upside is that you'll never
need to sit and watch fsck run.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Call for testing empathy

2008-08-10 Thread Andrew Sayers
Alexander Jones wrote:
 2008/8/10 Luke L [EMAIL PROTECTED]:
 Pidgin works terrificly, and is stable. Ekiga covers the rest. This would be
 a pretty big switch in terms of volume of users, and Intrepid is only 2.5
 months away. I believe this should be put on hold.
 
 Saying that Ekiga meets the VoVi needs of Pidgin users is pretty
 ridiculous. It's SIP for a start, nobody uses SIP, and for the few
 that do, TELEPATHY SUPPORTS SIP TOO!

According to the popularity contest[1], Ekiga is the 464th
most-frequently-used package in the whole of Ubuntu, while Pidgin is the
628th.  That's comparable to Evolution (448th) or Wine (647th).
Although popcon data is only representative of people that have
installed the popcon package, it's reasonable to say that both Ekiga and
Pidgin are meeting the needs of a large group of users.
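
For anyone who wants to check the numbers, something along these lines
should reproduce them (the file is sorted by vote, so a package's
position in it gives its rank):

$ wget -qO- http://popcon.ubuntu.com/by_vote.gz | zcat | grep -wE 'ekiga|pidgin|evolution|wine'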

I'm not arguing that these packages must never be touched, just that we
need to think long and hard before ripping them out.  I'd be much more
swayed by the argument when Empathy starts to realise some of its
game-changing potential.

- Andrew

[1] http://popcon.ubuntu.com/by_vote.gz

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Call for testing empathy

2008-08-10 Thread Andrew Sayers
Xavier Claessens wrote:
 I doubt the contest is meaningful for packages installed by default...
 Having pidgin/ekiga installed does not mean actually using it. Or am I
 wrong?
 
 Xavier Claessens.

You'd be right if I referenced the list sorted by the number of people
who installed the package (http://popcon.ubuntu.com/by_inst.gz), but the
list sorted by number of people who use the package regularly
(http://popcon.ubuntu.com/by_vote.gz) should indeed measure actual use.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Call for testing empathy

2008-08-10 Thread Andrew Sayers
I'm a bit confused about the desired outcome of this proposal.  From the
discussion, it seems to be an attempt to get more developers looking at
a new messaging framework with the potential to do all sorts of weird
and wonderful things.  If so, then replacing Pidgin as the default IM
client seems like a bad plan - I would expect devs to already have a
favourite IM client, which they'll install unless they have a compelling
reason to switch.  On the other hand, I'd expect newbies to use the
default because they don't know they have a choice.  That would lead to
a small developer base supporting a lot of users submitting bug reports
about making Empathy a better IM client, which would take developer time
away from being creative.

Could you give some examples of existing Ubuntu applications that would
benefit from integrating (lib)empathy?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Disappointed with Ubuntu Server, could be used by such a wider audience

2008-07-31 Thread Andrew Sayers
George Farris wrote:
 Lets start again.  Yes, contrary to popular geek culture, there are
 people that would like to:
 
 A) Install a home server from CD
 B) Login and be presented with a list of options for configuring that
 server
 C) Not have to understand how to run the server at the guts level.
 
 Do you want to share your music on your network?
 Yes
 Where is your music located?
 Done
 
 Do you want to share files with others on your network?
 Yes
 Fine - Proceed to share definitions.
 Define file locations and security
 
 Would you like to run a web server?
 Yes
 Fine this is now set up and you can connect here:

It sounds like you're asking for gnome-app-install (the Add/Remove
application in the main menu) to include Apache in its application list,
and to add whatever bits and pieces are necessary for Samba and related
packages to be counted as supported applications.

If someone from the gnome-app-install team is listening, they might be
able to tell you how much technical know-how is needed to make the above
happen.  Otherwise, you could e-mail one of them or post a question
asking how you'd get started on it:

https://answers.launchpad.net/gnome-app-install

Either way, I think this is a good idea, and I'm glad you volunteered to
 do something about it :p

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Debugging help

2008-07-17 Thread Andrew Sayers
Could this be related to the infamous bug #196277?  To my untrained eye,
it looks rather like the issues described by Frantisek Fuka in
https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/196277

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Feature Request: Better partitioning wizard

2008-07-08 Thread Andrew Sayers
Partitioning is one of those topics that you can argue round forever
without any danger of reaching agreement about the general case.  I'm
not sure what arguments you've read about the / + /home approach, but
I found a recent discussion on this list fairly interesting:

https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2008-May/004207.html

The short version of the argument is that two partitions can solve some
problems, but there are better solutions to the problems that ordinary
people have in practice.  The only way I would expect you to benefit
from a second partition, based on the usage you've described, is if you
wanted to dual boot between two Linux distributions (say, Ubuntu and Red
Hat), and maintain a shared /home partition.  My advice would be to
investigate how to repartition, but not to actually do anything
until/unless you need to.  Repartitioning now is no easier than
repartitioning later, and later you'll have more skill and perhaps
better tools to do the job.

If you really want to add a /home partition, which as Bryce suggests
isn't something to consider lightly, you probably want to use gparted
(available in all good repos).

I'm ambivalent about whether to include an intermediate option, but I
think a collection of "roles" would be a good choice if we do include
one.  Roles could include "standard" (as automatic), "multiple Linux
distributions" (/ and /home), "mail server" (/ and /var), and so on.
As well as helping people with less common use cases, it would give us
an excuse to add documentation that's only visible to those that care.
Example descriptions could include:

Standard: this partitions your disk in a way that has been found to be
the most appropriate for ordinary use, based on experience from a
variety of operating systems

Multiple Linux distributions: this partitions your disk in a way that
allows you to share your home directory with other Linux distributions
installed on your computer

Mail server: this partitions your disk in a way that ensures mail will
still be delivered if your root partition fills up.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: No run menu item?

2008-07-04 Thread Andrew Sayers
Przemysław Kulczycki wrote:
 Recently I've noticed that Ubuntu (or just Gnome) doesn't have a run
 program menu entry like in Windows' Start menu.
 I don't think every user will magically know the alt+f2 shortcut.
 I think this menu item should be added to the Applications menu, above
 or below Add/Remove menu item.
 It's a pretty trivial task so I think it could be easily added in Intrepid.
 

Is this an issue that would be better handled in Brainstorm?  In the
absence of a clever compromise, it's largely about how common the use
case is - an empirical question best answered by the crowd rather than
by developers.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Pidgin VNC integration

2008-06-11 Thread Andrew Sayers
There are several people looking into this problem in different ways -
mostly building on existing frameworks such as telepathy.  From a
technological point-of-view, the problem is that securely sharing a
desktop when both people are potentially behind NAT firewalls and
haven't shared public keys is a seriously non-trivial problem.

My approach has been to develop a stand-alone assistant that would guide
you through the necessary steps, but without knowing how to build up a
userbase to help with development, I'm not sure how to proceed.  If
you're interested, the program is available here:

http://bazaar.launchpad.net/~andrew-bugs-launchpad-net/+junk/remote_help_assistant/files

It's a one-file Python script, so you just download it and run the
assistant on both computers at once.  It's designed for use while
talking over a phone line, which is more secure than an Internet
connection, and more in line with the use case I envisioned (a ringing
phone followed by a panicked friend).  If you'd prefer to write a Pidgin
plugin, I think this program would be a good bit of inspiration,
although you'd have difficulty convincing users to phone each other when
an IM session is right there.

- Andrew

* not to say that my idea would be the One True Solution if only I
advertised it better, just that I don't know how to find out

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Remote recovery suggestion, now with initial code

2008-06-01 Thread Andrew Sayers
Thanks - I've updated that whole page now, including this section.  I've
removed the implementation section, because it's just out-of-date
documentation now there's an actual implementation to refer to.  Is
there anything else you'd like me to add/remove/change?

Because popularity contest data is fairly biased, and because I'm likely
to use other packages as well, I've changed the page to talk about using
only packages that are installed in more than 99% of popcon instances.
I've also tried to use more conciliatory language about Perl, so as not
to give the impression of being opposed to it on principle.  For what
it's worth, Perl was actually my first choice, and I begrudgingly learnt
Python after discovering that Perl didn't support IPv6 in the default
install.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Remote recovery suggestion, now with initial code

2008-05-31 Thread Andrew Sayers
We had a discussion in early May about creating a simple mechanism to
make over-the-phone tech support easier.  At the time, our (and
especially my) focus was on recovery from a situation where X wouldn't
start.  I've put some time into the project since we talked about it, so
if there's anyone out there with a little Python experience and an
interest in reducing stress levels in their tech support, I could really
do with your input right now.

I now have an initial Python implementation of a GTK interface at:

https://code.launchpad.net/~andrew-bugs-launchpad-net/+junk/remote_help_assistant

This is a single-file Python script that runs as a pair of GTK
assistants.  You run the assistant on your computer and the person
you're helping runs one on theirs, you pass some information to each
other over the phone (IP address, SSH keys etc.), and it guides you
through the process of setting up a VNC (or shell) session.

There should be a text interface once the program starts to mature, but
right now it would just be an unnecessary  development headache.  The
program is basically usable right now, but is ugly in places and
probably filled with implicit assumptions that I've made.

What I really need is for someone to do some peer review - either using
it and seeing where the bugs are, or if you have some Python experience,
looking over the code and asking pointed questions/suggesting patches.

I've also got a (somewhat out-of-date) blueprint discussing what I'd
like to see at:

https://wiki.ubuntu.com/Recovery/Remote

The blueprint is worth a read if you'd like to check the program out,
but some of it is still aspirations for the future while some of it is
based on old assumptions that turned out to be false.

At present, the only security concern I'm aware of with this program
(besides the fundamental issue of giving control of your computer to
someone else) is that it could put people with weak passwords at higher
risk of their system being compromised - I'll reply to this post with an
explanation, so people can discuss it separately.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Remote recovery suggestion, now with initial code

2008-05-31 Thread Andrew Sayers
There's one serious security concern I have about the remote help
assistant which I'm not sure how to work around: at present, it sends
the helper's username in plaintext over the Internet, and strongly hints
that they're running an SSH server.  That's not a problem if you have
proper security in place, but it puts helpers with weak passwords and
default security settings at increased risk.

The program handles security by using an SSH client on one end, and a
special-purpose SSH server on the other.  Because the helper can't be
assumed to have root access to the system they're running on, the SSH
server is run in the user's own account.  SSH servers running in
ordinary user accounts can only log in with that username, so the
username needs to be transmitted over the Internet before proper
security can be established.  This means that an eavesdropper can sniff
your username and IP address, then start trying to brute-force your
password.

The SSH server run by the assistant itself isn't at risk, because you
can only log in to it using a specified RSA key, and even if you could
break in, you wouldn't get anything useful like a shell.

Does anyone have any suggestions about this?

- Andrew



-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Strip incompatible characters from Windows partitions!

2008-05-16 Thread Andrew Sayers
This e-mail summarises a discussion in #ubuntu-motu between myself,
ScottK and persia.  I'll first explain the general problem, then suggest
a messy solution to a surprisingly messy problem.  Most of these ideas
are not my own, and in fact had to be explained to me at some length, so
please don't assume that I know what I'm talking about ;)

Since there wasn't an NTFS expert available during the conversation,
it's possible that the following is only true of FAT filesystems.

Problem characters like '/' are in fact just the tip of the iceberg -
see https://bugs.launchpad.net/ubuntu/+source/dosfstools/+bug/49217 for
another way that the same problem can bite you.

Because there are no proper standards for Windows filesystems, there's
no common agreement about how to turn the string of bytes that make up a
FAT filename into a string of characters.  For example, a Japanese
computer might look at a filesystem and assume that all the files are
encoded in SHIFT-JIS, while a Western European computer might look at
the same filesystem and assume that all the files are encoded in code
page 1252.

Most irritatingly, FAT filenames can use single-byte encodings (like
ASCII), multi-byte (like UTF-8), or double-byte (like UTF-16).  This
means that a filename might look like valid ASCII (perhaps including some
disallowed ASCII characters, perhaps not) and yet be garbled
nonsense if interpreted that way.

The above problems make automatically detecting the character encoding
of files in a FAT filesystem at best hard and sometimes impossible.
Therefore, there's no general way to tell which characters are
valid in a given filename on a FAT filesystem.  Even if there were a
way to work out which characters are allowed, ext2-on-Windows drivers
make it possible to have files with disallowed characters in a Windows
system.

Disallowed characters aren't so much a Windows kernel issue as a
pervasive Windows UI issue.  The exception that proves the rule is Emacs
on Windows.  Emacs being Emacs, it pays little attention to the
conventions of young upstarts like Microsoft, so it can handle files with
funny characters in their names just fine.

Given the above, my suggestion is that there ought to be a tool that
runs identically on Windows and Linux and interactively converts
filenames.  It would ask for an initial encoding, target encoding, and
target path, then recurse through all the directories rooted in that
path, renaming files as it goes.  Characters that are valid but tend to
cause headaches could be automatically converted, or the user could be
prompted for a better name.  Most of the actual work in this program can
be done by iconv, although it might be worth having a punycode mode that
minimises incompatibility at the expense of readability.  Finally, I
would suggest that the Windows version be run straight from the Ubuntu
CD, rather than being made available from some website somewhere.  As
well as making the program a little bit easier to find, it makes a great
advert for Linux - it solves the problems that Windows causes.
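
As a very rough sketch of the Linux half of that idea (the mount point
and encodings are placeholders, a real tool would prompt for them
interactively rather than hard-coding them, and filenames containing
newlines would need more care):

  #!/bin/sh
  # Rough sketch: re-encode every file and directory name under $TARGET.
  FROM=CP1252          # the "initial encoding" the tool would ask for
  TO=UTF-8             # the target encoding
  TARGET=/media/windows

  # -depth renames a directory's contents before the directory itself,
  # so the paths we're about to rename still exist when we get to them.
  find "$TARGET" -depth | while read -r OLD; do
      DIR=$(dirname "$OLD")
      BASE=$(basename "$OLD")
      NEW=$DIR/$(printf '%s' "$BASE" | iconv -f "$FROM" -t "$TO")
      if [ "$NEW" != "$OLD" ]; then
          # mv -i stands in for the "prompt the user for a better name" step.
          mv -i -- "$OLD" "$NEW"
      fi
  done

(For what it's worth, the existing convmv package already does most of
this on the Linux side.)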

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Making apt-get powercut-proof

2008-05-15 Thread Andrew Sayers
If you're amenable to extra scripts being suggested, I'll submit bug
reports as and when they're relevant.

You're right about requiring a user choice, but I'm a bit concerned that
users are going to be confronted with a collection of options that they
don't understand, where one of them is known to be the right choice, but
they have to poke about until they find it.  How would you feel about
adding a mechanism to do a quick check before showing the menu, then
putting (recommended) next to one of the choices?

In the case of the current choices, my suggestion would be that fsck be
recommended if `touch /` returns false (i.e. read-only root filesystem,
even if /etc/mtab denies it), else dpkg is recommended if apt or dpkg
lock files exist (I assume they use lock files?), else xfix is
recommended (it uses dpkg, so can't be run until dpkg is happy).  The
root shell would never be recommended, because people that want a shell
don't need to be told.
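
To make the check concrete, here's a rough sketch of the sort of test I
have in mind (the lock-file paths are the usual Debian/Ubuntu ones, but
treat the details as assumptions rather than a finished design):

  #!/bin/sh
  # Rough sketch of the "quick check before showing the menu".
  if ! touch / 2>/dev/null; then
      RECOMMEND=fsck      # read-only root, whatever /etc/mtab says
  elif [ -e /var/lib/dpkg/lock ] || [ -e /var/cache/apt/archives/lock ]; then
      # Note: these lock files exist even on a healthy system, so a real
      # check would have to be smarter (e.g. ask dpkg whether anything is
      # half-configured).
      RECOMMEND=dpkg
  else
      RECOMMEND=xfix      # the root shell is never recommended
  fi
  echo "Recommended option: $RECOMMEND"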

If you're happy with this idea, I can submit a skeleton implementation
if you'd like.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Extra hand-holding if `mount -a` fails

2008-05-08 Thread Andrew Sayers
When important filesystems (like /usr and /home) fail to mount, Ubuntu
currently tries to carry on regardless, leading to confusing
higher-level errors.  Ubuntu's /etc/fstab uses UUID=blah to make failed
mounts less likely, but it also means that it's impossible to mount
anything when udev fails to start.

I think that when /etc/init.d/mountall notices `mount -a` return an
error condition, it should provide a simple interface to manually mount
drives, and warn the user to fix the problem once booting is successful.

I've attached a (bash-specific, poorly commented and totally undebugged)
shell script to give a rough idea of what I'd like to see.
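
For anyone who can't see the attachment, the idea boils down to
something like this (a bare-bones sketch, not the attached script
itself):

  #!/bin/bash
  # Bare-bones sketch of the idea, not the attached script.
  if ! mount -a; then
      echo "Some filesystems in /etc/fstab could not be mounted."
      echo "You can try mounting them by hand now; press Ctrl-D to carry on booting."
      while read -p "device and mount point (e.g. /dev/sda3 /home): " DEV DIR; do
          [ -n "$DEV" ] && mount "$DEV" "$DIR" && echo "Mounted $DEV on $DIR"
      done
      echo "Remember to fix /etc/fstab once the system is up."
  fi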

Does this seem plausible?

- Andrew


mount-failure-hand-holding.sh
Description: application/shellscript
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
(starting a new sub-thread for a new proposal)

I'm currently swinging back towards remote recovery and remote help
being distinct problems that need different solutions.  There are three
reasons for that:

1) As I mentioned in a previous post, remote recovery needs to be done
   in an extremely defensive way.  Some of the things that could get
   users into a mess include:

   * Failure to mount /home
   * Deleting /root and/or the root account
   * A half-installed upgrade that leaves sshd_config messed up
   * /, /tmp/, /var etc. mounted read-only

   None of these are problems that you need to worry about in the remote
   help case.

2) While I definitely agree with people that say remote help should be
   an over the virtual shoulder exercise so that the user can learn
   some things, remote recovery is generally necessary when they've got
   themselves into a situation where they don't understand the problem,
   and wouldn't understand the solution.

3) From the point of view of remote recovery, the problem with public
   keys is that they're very long and difficult to type.  In a remote
   help situation, you post someone a floppy disk with your public key
   on, whereas with recovery, you'd have to spell it out over the phone.
   My public RSA key is 200 characters, while my DSA key is 580.
   Speaking 1 character per second, it would take more than 3 minutes to
   type out an RSA key.

Passwords strike me as the only practical solution for remote recovery,
but I've asked the SSH team whether they would disable password
authentication in the default configuration.  Their opinion is the one
that matters, so I'll work around them in this specific case if
necessary.  I'd say it's best to wait for the SSH team to reply before
making decisions about remote help.  The question was posted here:
https://answers.launchpad.net/ubuntu/+source/openssh/+question/32326

Given all of the above, here's a rough sketch for my new remote recovery
proposal:

The expert tells the friend a newly-generated one-time random password
that the friend can use to SSH into a chrooted jail.  The jail contains
two pipes: shell_in and shell_out.  A superuser shell is started on the
recovery machine, and all input/output to it is routed over the SSH
connection and through the pipes on the expert's machine.  The expert
can then access a root shell on the recovery machine by doing the
following on the expert's machine:

cat ~remote-recovery/shell_out & cat > ~remote-recovery/shell_in

This reverse login is definitely not great for the recovery machine's
security, so it should only be used in emergencies.  However, it seems
to me that it should be extremely robust in the face of breakage on the
recovery machine.
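
To make the plumbing concrete, here's a rough, untested sketch of the
recovery machine's side (the expert's hostname and account name are made
up, and it assumes the root shell can be started without sudo prompting
for a password):

  # Rough, untested sketch of the recovery machine's side.
  mkfifo /tmp/shell_in
  # The root shell's output is written into the jail's shell_out pipe on
  # the expert's machine; whatever the expert writes into shell_in comes
  # back over the same SSH connection and is fed to the shell's input.
  sudo sh -i < /tmp/shell_in 2>&1 | \
      ssh remote-recovery@expert.example.com \
          'cat > shell_out & cat shell_in' > /tmp/shell_in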

Going back to a point I made earlier, this isn't an everything-proof
solution.  For example, it's no use if the expert's ISP is running a NAT
that the expert can't control (as my ISP seems to be threatening to do).
 However, that sort of thing strikes me as a problem best left for
version 2, when we start to see what the actual problems are in the real
world.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
Justin,

I agree that a single solution would be best, but I can't see how to
make it work in the case of a system that's mostly broken.  However, it
looks like it's going to be an evidentiary question - either we can make
it work or we can't.  How would you feel about the following working
arrangement:

I'll rewrite my remote recovery blueprint based on ideas discussed
here, focussing solely on the worst case; you write up a remote help
blueprint focussing on the common case.  Then we'll liberally
critique/merge/steal ideas and see where that goes.  If we wind up with
a single blueprint, great!  If not, we'll have a good solution for the
general case and an ugly solution that's just usable enough to bootstrap
into the general case.

In the spirit of working towards the middle then...

I agree that VNC would be better in the common case.  In fact, using VNC
leaves the door open to someday expanding the solution to Windows-using
friends, although that's definitely a version 2 idea.

If we agree that passwords are best in the case of an emergency phone
call from someone with no prior relationship, I think we should use keys
everywhere else, but leave key exchange up to users.  I agree that
web/e-mail is better for people who aren't that paranoid, but those that
are more paranoid will want the freedom to use whatever mechanism they
trust.

Showing the SSH session to the user in the recovery case is a very good
idea, and should be fairly simple to implement.  Some sort of chat
session is also a good idea, so long as we can implement it so that two
people aren't trying to type on the same command line at the same time.
 That should just involve putting chat and commands in two separate ttys.

Another use case we need to think about is broken video
cards/monitors/etc. that make it impossible for the non-technical friend
to use their computer at all.  This suggests that the expert should be
able to log in by default, rather than having access only granted on
request.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
On the other hand, I'm wrong about that :)

I've just discovered a package called socat, which is an extremely
general command line tool for creating connections between things - more
so even than netcat.  It's in Universe, so it's presumably not that much
of an ask to have it promoted to main.  I think we can create a general
solution using socat.  In the catastrophic case, this would work if only
socat and a shell script were still working (instead of ssh and a shell
script).

Let's formulate the problem this way:

We need to create a bidirectional, secure method of communication
between two computers.  Some of the ways to set this up include:

1) Helpee connects to helper
2) Helper connects to helpee
3) Helpee and helper both connect to some shared proxy server

Each of these can be done over IPv4 or IPv6, over the public Internet or
a private connection (such as a modem).

Once the connection is made, we need to start up an arbitrary interface
using that channel.  Possible interfaces include:

1) A VNC-based GUI
2) An X-based GUI (for e.g. broken video cards)
3) A screen-based TUI (for those of us that love screen)
4) A pty-based TUI (so that editors like vi work)
5) A pipe-based TUI (for dire emergencies)
6) A non-interactive mechanism for swapping keys

We can implement this using a collection of interface modules that
request a particular type of connection from socat, and a collection of
socat modules that deliver that connection over whichever protocol has
been configured.

Users can then add extra socat modules to handle their own esoteric
situations.
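
To give a flavour of what those modules might boil down to, here are a
few illustrative socat invocations (the addresses and port are made up,
and the total lack of encryption is just to keep the example short):

  # Helpee listens, and hands a pty-wrapped shell to whoever connects:
  socat TCP-LISTEN:4000,reuseaddr EXEC:/bin/sh,pty,stderr

  # Helper connects to it:
  socat STDIO TCP:helpee.example.com:4000

  # Or the other way round, so the helpee dials out through their NAT:
  socat EXEC:/bin/sh,pty,stderr TCP:helper.example.com:4000

A real module would obviously wrap this in something secure (socat's
OPENSSL addresses, or a tunnel through ssh/cryptcat).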

Does this seem workable?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
Having looked quickly at cryptcat, it seems like some interfaces would
be best served by cryptcat+socat, so that you can get security and a
pseudo-terminal.  To generalise your idea even further, how about a
bidding system?  For example, say the expert asks for a forward remote
shell on the friend's computer:

First, the program asks for a shell connection to the specified IP address:

* SSH doesn't have that IP address in its known_hosts file, so it bids 90
* bash can create a login, but has to sub-contract creation of the PTY.
 It bids 99, on condition that someone else handle the PTY
* telnet can't do security, so bids -1 (i.e. get confirmation before
continuing)

Since a bid exists that has conditions, we do a second round of bidding.
 In the second round, the program asks for a PTY on the specified address:

* socat bids twice:
  - 99, on condition that someone else handle the connection
  - 49 if it has to handle the connection itself

Since there's still a bid with conditions, the third round asks for a
connection to the server:

* SSH still doesn't know the hostname, and isn't designed for that
particular purpose, so bids 80
* cryptcat doesn't know that host name, so bids 90
* netcat can't do security, so bids -1

SSH's bid is ignored, because there's already a higher bid with SSH in it

Since no more bids have conditions, the various options are ranked first
according to the lowest bid in the chain, then according to the number
of links in the chain:

1) SSH (90/1)
2) bash-socat-cryptcat (90/3)
3) bash-socat (49/2)
4) telnet (-1/1)
5) bash-socat-netcat (-1/3)

Each of these is then tried in order.  If the program gets all the way
down to (4) without success, it asks the expert whether he wants to try
insecure connection methods (there might be nothing wrong with telnet -
for example, if the computers are already connected over a modem).

- Andrew

Justin M. Wray wrote:
 Yes, this seems to be the robust sort of approach that seems to cover the most 
 use cases and works at a very low level.
 
 Thoughts on cryptcat versus socat?  It has the benefit of being more secure. 
  I think the solution should work as such:
 
 1). Try SSH, if fail then,
 2). Try cryptcat, if fail then,
 3). Try socat
 
 There will be times when cryptcat will be broken (lib issues etc), so having 
 socat as the last resort seems safe.  But for the sake of passwords and data I 
 do not think it should be the first option.  In addition SSH is far more 
 robust with added complexity; if it is available, I think it should be used.
 
 Thoughts?
 
 The ability to have:
 1) A VNC-based GUI
 2) An X-based GUI (for e.g. broken video cards)
 3) A screen-based TUI (for those of us that love screen)
 4) A pty-based TUI (so that editors like vi work)
 5) A pipe-based TUI (for dire emergencies)
 6) A non-interactive mechanism for swapping keys
 
 Adding only:
 
 7) Custom command
 
 These are exactly the sort of options I think should be available in such a 
 solution, allowing the helpless user to pick when running the recovery 
 command.
 
 The best connection solution is a reverse connection to the helpless user, 
 as this bypasses the NAT/firewall issues.  SSH allows tunnelling, so when 
 possible we should use this (see above).
 
 Else we may want to look into `rrs` (remote reverse shell).
 
 Thanks,
 Justin M. Wray
 
 Sent via BlackBerry by AT&T

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
We're certainly getting there!

I haven't yet given up hope of doing this with a shell script
(evidentiary question again).  The benefit of a shell script is that it
leaves open the possibility of packaging a lite version of the program
as a single architecture-neutral file, so that we can support users of
other unixoid systems that can download a script.

The reason I went for numbered rankings was that it makes it easy to
compare two modules that don't know anything about each other.  If every
module needs a greater than/less than function that knows about every
other module, that makes O(n^2) work for us every time we want to add a
new module.  Or more precisely, O(n^2) work for some poor expert we'll
never meet that happens to need a particular module for his special
case.  On the other hand, if you have a non-numeric solution with linear
complexity, I always like being proved wrong about these things :)

The choice of interface module needs to be made before the choice of
lower-level module, because not all interfaces have the same
requirements.  For example, VNC needs a TCP socket, which has to be
passed in the parameters to SSH.

Finally, I agree that a GUI would be a good default choice here,
although it needs to be written in such a way that the ncurses/plaintext
fallback looks quite natural to the user when everything goes wrong.
However, I don't really do GUI programming, so it's not something I
would be able to do myself.

I'm now going to download dash and see whether I can fight my way out of
that little box.  If not, C it is.

- Andrew

Justin M. Wray wrote:
 I agree with your generalization, and ordering.  It provides fault tolerance, 
 security, and usability, making the entire process easier (the main goal of 
 this project).
 
 I am unsure if I feel adding a numeric value to each option is needed, when 
 simple programming conditions can handle the tasks.  I think the numbers 
 (bids) add some unneeded complexity/confusion.
 
 The robustness and power of the solution we are now talking about exceeds the 
 power of simple shell scripts and code hacks, thus needing a higher level 
 language.  But I personally see this as a good thing (speed, security, etc).
 
 I think it would be best to go through each option as you have them listed, 
 and try them, once.  If it works use it, if not move on.  Only prompt the 
 user if we get to an insecure option.  But at the same time I think the 
 helpless user should have the ability to over-ride the option, or the 
 helper over-ride the option (with approval from the helpless) at start.
 
 A GUI front-end would let such choices be easily made.  And an expert can 
 easily tell the helpless to type --telnet at the end of the command.
 
 One more note, if we do use something like 'c' we could easily add a socket 
 into the app itself.  So we would have the following options, in said order:
 
 1) SSH
 2) bash-socat-cryptcat
 3) bash-socat-netcat
 4) bash-socat
 5) telnet
 6) bash-socket
 
 (I've changed the order around a bit, to what I think would be more secure and 
 sound.)
 
 And after connection, having the following recovery options:
 
 1) A VNC-based GUI
 2) An X-based GUI (for e.g. broken video cards)
 3) A screen-based TUI (for those of us that love screen)
 4) A pty-based TUI (so that editors like vi work)
 5) A pipe-based TUI (for dire emergencies)
 6) A non-interactive mechanism for swapping keys 
 7) Custom command
 
 (Which should have been selected when the recovery-command was first run.)
 
 It seems like we are on track, what do you think?
 
 Thanks,
 Justin M. Wray
 
 Sent via BlackBerry by AT&T

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread Andrew Sayers
Okay, I've got the auction part of the dash adventure completed.  In
principle, the rest should be relatively easy.  The code isn't vastly
useful or commented so far, it's just a proof of concept really.

The script doesn't prune unlikely matches (e.g. socat+ssh when ssh is
already provided), because that doesn't work in the general case: say
there are two pipelines, a-b-c and a-c-b.  If a-b-c fails, it
could be due to a problem in a, b, c, or some interaction between the
three.  Without knowing more about the error, we can't assume that
a-c-b will fail.  Here's a rough guide to the script:

* Right now, the script reads bids from remote_help.txt, but will
  eventually take bids by polling a separate set of module scripts

* A module script is run with a to-be-decided set of command line
  arguments.  I'm currently thinking it'll be something like:

  my-module.sh --want remote-shell --remoteuser andrew \
 --remotehost example.com

  this will have to be decided as modules are written - there'll
  doubtless be some rules, some precedents, and some totally
  protocol-specific things

* Modules that sub-contract part of the job will be assumed to handle
  subcontracts internally (it's just a matter of calling remote_help.sh
  again with the appropriate arguments)

* Every module is polled in every auction.  Inapplicable scripts will
  return no bids, bids with a variety of subcontractors will return
  multiple bids

* A bid is a line printed on standard output, of the form:

  integer command line

  The integer is the bid, the remainder of the line is a command to pass
  to /bin/sh

* The highest bidder is run first, then the next highest, and so on,
  until one returns successfully (note: currently, all bids are run) -
  see the sketch after this list

* This would have been a lot easier if I could rely on `sort` and `head`
  existing!
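
For anyone who doesn't want to open the attachment, here's a
stripped-down sketch of the auction loop - pure sh, no sort or head, and
definitely not the attached remote_help.sh itself:

  #!/bin/sh
  # Stripped-down sketch of the auction, not the attached remote_help.sh.
  BIDS=remote_help.txt

  while [ -s "$BIDS" ]; do
      # Find the highest remaining bid without relying on sort or head.
      BEST=-1000000
      BEST_CMD=
      while read -r BID CMD; do
          if [ "$BID" -gt "$BEST" ]; then
              BEST=$BID
              BEST_CMD=$CMD
          fi
      done < "$BIDS"
      [ -n "$BEST_CMD" ] || break

      # Negative bids mean "insecure - get confirmation before continuing".
      if [ "$BEST" -lt 0 ]; then
          printf 'Only insecure methods left.  Try "%s"? [y/N] ' "$BEST_CMD"
          read -r ANSWER
          [ "$ANSWER" = y ] || break
      fi

      # Run the winning bid; stop at the first success.
      sh -c "$BEST_CMD" && exit 0

      # Otherwise drop that bid and hold the auction again.
      while read -r BID CMD; do
          [ "$BID $CMD" = "$BEST $BEST_CMD" ] || printf '%s %s\n' "$BID" "$CMD"
      done < "$BIDS" > "$BIDS.tmp"
      mv "$BIDS.tmp" "$BIDS"
  done
  exit 1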

How should we proceed with this?  Set up some space on Sourceforge?  Do
you have any better ideas for names than "remote help"?

- Andrew


remote_help.sh
Description: application/shellscript
-1 echo problem
3 /bin/false
2 echo foo
2 echo '2'
3 echo bar
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-06 Thread Andrew Sayers
I've now updated the page that Pedro kindly started at
https://wiki.ubuntu.com/Recovery/Remote - this includes all the ideas
I've got so far.  This is my first Ubuntu development thing, so yes, any
help very much appreciated!

You're quite right that the people you have to worry about aren't the
ones that know nothing, but the ones that know just enough to be
dangerous.  In fact, given the desire for robustness (and the Robustness
Principle in general), I think it would be best to design this facility
based on the assumption that the user has damaged their system as
much as possible without actually disabling the remote-recovery script.
 That should encourage a sufficiently defensive design.

Help with managing a system is an interesting use case, but I'm not sure
if we want to be targeting it with this particular solution.  I agree
that sane defaults with powerful configuration is a good approach for
users that know what the configuration options mean, but newbies with a
broken system should be asked as few questions as possible (especially
when they're paying for a long-distance phone call).  Also, I think
you're talking about an ongoing remote help relationship, rather than an
emergency one shot thing.  How about we break this off into a separate
feature request:

The Add User dialogue in Users settings
(System->Administration->Users and Groups) should have the following
extra options:

* Disallow password logins
* generate an SSH key, using either no passphrase, a randomly generated
passphrase, the login password, or a specific passphrase.  When the user
account is added, the SSH public keys are given in a popup box
* Add specified SSH public keys to .ssh/authorized_keys
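
Behind the scenes, those options amount to little more than the usual
commands - roughly the following, with the user name, key type and the
$THEIR_PUBLIC_KEY variable as illustrative stand-ins for whatever the
dialogue would really use:

  NEWUSER=techfriend

  # "Disallow password logins": create the account with no usable password.
  sudo adduser --disabled-password --gecos "" "$NEWUSER"

  # "Generate an SSH key" (here, with no passphrase):
  sudo -u "$NEWUSER" mkdir -p "/home/$NEWUSER/.ssh"
  sudo -u "$NEWUSER" ssh-keygen -q -t rsa -N '' -f "/home/$NEWUSER/.ssh/id_rsa"
  # ...and this is the public key the popup box would show:
  sudo cat "/home/$NEWUSER/.ssh/id_rsa.pub"

  # "Add specified SSH public keys to .ssh/authorized_keys":
  echo "$THEIR_PUBLIC_KEY" | \
      sudo -u "$NEWUSER" tee -a "/home/$NEWUSER/.ssh/authorized_keys" >/dev/null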

Then we can add a page to the help wiki, describing how to create a user
for port-forwarding, how to create an SSH-only user, and how to make
that user an administrator.  That would give intermediate users all the
tools they need to set up a permanent remote help relationship that they
can tune to their particular needs.

Given the above, I've left the iptables things more-or-less intact on
the Remote Recovery page, since it's a good piece of robustness against
newbies.

Finally, two more ideas have occurred to me:

1) Rather than create a remote-recovery user on the recovery machine,
why not just let the expert log in as root?  Given all the other
security measures, it wouldn't be any less secure, and would avoid the
need to have a password kicking about.

2) Experts that have just finished a remote recovery session are
probably the best people there are for providing high quality bug
reports.  Ubuntu already asks for unstructured feedback when installing
a system, so it seems natural to give the same option to these people.
Presumably we need to ask someone at Canonical about whether they'd be
interested in this feedback?  If so, who?

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Making apt-get powercut-proof

2008-05-06 Thread Andrew Sayers
That's a pretty handy tool - would you be interested in an option to
start the remote recovery that's being discussed in a nearby thread?

Also, how would you feel if I suggested the options/dpkg script to the
APT development team as the basis for an init script?  I don't expect it
would add more than a few seconds to boot time (or less if there's a
lockfile whose existence they can check for), and it would tackle
the specific issue I had, where the problem presented as a missing home
directory, and only turned out to be a package installation issue after
much investigation.

- Andrew Sayers

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-06 Thread Andrew Sayers
At this point, I'm trying to walk the line between unrealistic "wouldn't
it be great if..." type ideas and overly-strict reliance on solving the
specific problem I have in my head, so I'd like to go back to first
principles for a moment.  Please tell me if any of these are false:

1) It's common for new Linux users to have a technical friend that deals
with their problems.  This is a healthy relationship that we should look
for ways to support
2) People generally don't formalise that sort of thing until it's too late
3) All Linux users can be behind arbitrarily complex sets of
firewalls/NAT, including multiple layers of NAT or firewalls, not all of
which are under either user's control
4) We can expect experts to do some considerable work (e.g. installing
packages and configuring routers), but non-technical users need simple
instructions from the default installation
5) There's some interest in making small changes to the default install
to cater to the above issues
6) Since the people in most need of help are more likely to stick to LTS
releases, we can afford to add this sort of feature gradually, and see
what public reaction is like

The basic solution we're looking at here is to make it possible for the
technical friend to set up an SSH connection to the non-technical
friend's computer, using an account that has some way to execute
superuser commands (with sudo or by actually being the root user).  This
breaks down into three sub-problems:

1) Creating or modifying an account that has the necessary permissions
2) Creating an SSH connection
3) Destroying or reverting an account to its original state

In (1) and (3), I had been concentrating on setting up a mechanism to
trust someone temporarily, but I now realise that's not a particularly
common use case, because if I trust you today, I will probably trust you
tomorrow too.  Getting people to jump through the same set of hoops
every time there's a problem makes life harder than necessary for
non-expert users, which I've been complaining about all thread.

Reliably doing (2) is a hard problem.  The solution I had come up with
strikes me as a good solution for a common use case, but there's no way
to deal with the general case without introducing more complexity.

Solving the three sub-problems individually allows for more flexibility,
because then intermediate users can mix-and-match the parts that they're
interested in.

Creating, modifying, and deleting accounts is a standard problem, and
I've already suggested ways to add the necessary bits into Ubuntu
(specifying an authorised key when creating an account, etc.).  Since I
used the alternate install CD, I don't know whether the default Ubuntu
installation gives you the option to set up extra user accounts after
installing.  If it does, I'd recommend adding a technical friend user
account creation option.

But since most people will click straight through the above option, it
would be good if this was offered explicitly somewhere in the System
menu, and if a program like friendly-recovery could offer the same
functionality from the command line.

If there's an interest in it, I would be happy to maintain some sort of
ssh-strategies script/page that walks people through an increasingly
complex decision tree, trying to set up an SSH connection.  In order to
work easily, there would probably have to be some sort of
ssh-strategies-minimal package in the default install.

I'd be even more happy if Canonical were willing to host a couple of
very simple scripts at ubuntu.com to confirm the user's world-visible IP
address and to reflect half a dozen SSH packets back to the address they
came from.  The former can't be done over HTTP because of the mess of
transparent proxies on the net nowadays, and the latter should be safe
so long as only enough packets are sent to appear in the SSH log, and
not enough to attempt a password.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-06 Thread Andrew Sayers
Based on this evidence, does anybody object to a bug report being filed
against openssh-server, saying that password authentication should be
disabled by default?  Of course, that leaves all my ideas in serious
trouble, but that's a secondary matter.

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Suggestion to make remote recovery easier

2008-05-05 Thread Andrew Sayers
I'm a Linux user of sufficient experience that friends are starting to
phone me up when there's a problem with their computer.  I guess most
people here know how long and painful those conversations can be, so I
think it would be better if Ubuntu had a mechanism to let me SSH into
people's computers using only instructions that I can describe to
newbies over the phone without confusing them.  Of course, the problem
is doing this in a way that's both secure and robust.  I've got an
approximate outline of how it would work, so could people tell me how
practical this idea is:

* There should be three ways to enable remote recovery:
  - In the GRUB menu, there should be a remote recovery option
  - From the command-line, there should be a remote-recovery command
  - From the GUI, there should be System Tools->Remote Recovery
* Experts should be able to run /usr/sbin/connect-to-remote-recovery to
  prepare their system for a remote recovery.

Running or connecting to a remote recovery should start by doing the
following:

1) Create a remote-recovery user whose home directory is
   /.remote-recovery, and who has no useful permissions
2) Set their home directory to be chmod 500
3) Create a ~remote-recovery/password file, chmod 400
4) Give the remote-recovery user a random password, and put the password
   in ~remote-recovery/password
5) If the SSH server isn't running, enable it.  If it won't enable, try
   various things:
   * If the package doesn't exist, ask if you can install it
   * If /usr or /usr/bin doesn't exist, check whether they're mentioned
 in /etc/fstab, and if so, whether they're mentioned in `mount`,
 then tell the user what's going on, and offer to print the contents
 of both.
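
As a rough sketch (run as root, with the exact commands as illustrative
guesses rather than a finished implementation), steps 1-4 amount to
something like:

  adduser --home /.remote-recovery --disabled-password --gecos "" remote-recovery
  chmod 500 /.remote-recovery

  # A random password, stored where only root and remote-recovery can read it.
  PASSWORD=$(dd if=/dev/urandom bs=9 count=1 2>/dev/null | base64)
  echo "$PASSWORD" > /.remote-recovery/password
  chown remote-recovery: /.remote-recovery/password
  chmod 400 /.remote-recovery/password

  # Give the account the same random password.
  echo "remote-recovery:$PASSWORD" | chpasswd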

Then, running remote recovery should:

1) pop up a warning about how doing this gives complete control of your
   system to a specified computer, and should only be done at the behest
   of someone you trust.
2) Add the remote-recovery user to /etc/sudoers
3) Ask for the IP address and remote-recovery password of the person
   you'll allow access to
4) `ssh [EMAIL PROTECTED] -L22:localhost:`
4a) if that fails, do various diagnostics:
* Does the computer have an IP address?  Does it have a gateway?
* Do a tracepath to that address and print the results
4b) If it succeeds, copy .ssh/id_dsa.pub on the remote host to
~remote-recovery/.ssh/authorized_keys on the local host, then
touch .ssh/id_dsa.pub to confirm that the copying is complete
5) Tell the user whether SSH succeeded or failed
6) Inform the user that they can press ctrl-c to quit remote recovery
7) Wait until `w` reports a remote-recovery user logged in
8) Read lines of text and `write` them to the remote-recovery user's tty
9) When the remote-recovery user logs out, ask whether they want to wait
   for the user to log in again.
9a) If no, go to 10
9b) else go to 7
10) Remove the remote-recovery user, remove them from sudoers, and
delete their home directory

Alternatively, connecting to a remote recovery should do:

1) Find the IP address(es) of the computer
1a) If any addresses are public (not e.g. 192.168.*.*), print them
1b) Otherwise, tell the user to find their public address (e.g. through
the settings page of their wireless router), and make sure that
connections on port 22 are forwarded to private IP address port
22.
2) touch ~remote-recovery/password
3) Create a ~/.ssh/id_dsa with no passphrase
4) Print the contents of ~remote-recovery/password, then print it again,
   using the NATO phonetic alphabet (so that it can be spoken over the
   phone - see the sketch after this list)
5) Make sure the SSH server is running
6) Wait until the ctime of ~remote-recovery/password is less than the
   ctime of ~remote-recovery/.ssh/id_dsa
7) `sudo -u remote-recovery ssh [EMAIL PROTECTED] -p `
8) The user now has a shell on the newbie's computer, as user
   remote-recovery.  They can then read the password in ~/password, and
   sudo whatever they need to sudo.
9) Remove the remote-recovery user and delete their home directory
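
Step 4's phonetic read-out is simple enough to sketch (the handling of
digits, capitals and punctuation here is a guess):

  # Spell a password out using the NATO phonetic alphabet (rough sketch).
  spell_nato() {
      PW=$1
      while [ -n "$PW" ]; do
          CH=${PW%"${PW#?}"}     # first character
          PW=${PW#?}             # the rest
          case "$CH" in
              a) W=alfa ;;     b) W=bravo ;;    c) W=charlie ;; d) W=delta ;;
              e) W=echo ;;     f) W=foxtrot ;;  g) W=golf ;;    h) W=hotel ;;
              i) W=india ;;    j) W=juliett ;;  k) W=kilo ;;    l) W=lima ;;
              m) W=mike ;;     n) W=november ;; o) W=oscar ;;   p) W=papa ;;
              q) W=quebec ;;   r) W=romeo ;;    s) W=sierra ;;  t) W=tango ;;
              u) W=uniform ;;  v) W=victor ;;   w) W=whiskey ;; x) W=xray ;;
              y) W=yankee ;;   z) W=zulu ;;
              [0-9]) W="digit $CH" ;;
              *) W="symbol '$CH'" ;;
          esac
          printf '%s ' "$W"
      done
      echo
  }

  # e.g. (run with enough privilege to read the password file):
  spell_nato "$(cat ~remote-recovery/password)"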

- Andrew Sayers

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Making apt-get powercut-proof

2008-05-05 Thread Andrew Sayers
A friend of mine was upgrading to Hardy, and (so far as we can tell)
there was a power cut while it was halfway through, which left his
system in a not-especially-useful state.  I think the best solution is
to have a /etc/init.d/{apt-get|dpkg} script that checks for
half-finished installs, and restarts them if necessary.  If so, which
(or both) would be better, and is there anyone here that knows enough
about the two to suggest a complete set of commands that need to be run?
 Also, is this something we should be doing in an Ubuntu-specific way
(e.g. from X), or should I take this idea to Debian?
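
For concreteness, a minimal version of such a script might be little
more than the following (a sketch of what I think is needed, not a
tested recipe - a real script would first check whether anything is
actually half-configured rather than doing work on every boot):

  #!/bin/sh
  # Sketch of the proposed /etc/init.d script, not an existing Ubuntu one.

  # Resume whatever dpkg was doing when the power went out...
  dpkg --configure -a
  # ...then let apt repair any remaining dependency problems.
  apt-get -y -f install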

- Andrew Sayers

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-05 Thread Andrew Sayers
Milosz Derezynski wrote:
 There is IMO no real need for a random password; instead, the user of
 the machine to be recovered should be allowed to enter a password which
 he then can tell to the user recovering the machine remotely. This
 doesn't necessarily have to be more insecure; a random alphanum
 password is probably better secured against brute force cracking but i
 don't like the fact that the computer hands out a password for the user
 automatically, even if he gets to see it.

I'm not sure if I follow.

If you're complaining about the password on the expert's machine, I'm
suggesting that the newbie SSH in to the expert first because I'm
happier to assume that an expert would know how to poke a hole through
their firewall/NAT router/etc. than I am to assume the same of a newbie.
 Once that first connection is made, everything gets tunnelled over it.

If you're complaining about the password on the newbie's machine,
getting them to make decisions and speak passwords is likely to add
stress and errors, because they might feel silly about being asked more
questions they don't know anything about, might feel silly about
spelling a password out phonetically, and might (as Justin pointed out)
choose a bad password.

Having said all that, how would you feel about these changes:

* The connect-to-remote-recovery script on the expert's machine suggests
a random password, and asks if you want to choose one manually.
* While waiting for an SSH connection, the connect-to-remote-recovery
script on the expert's computer keeps an eye on `w` and/or the ssh log,
waits for the user to press enter, then does `passwd -d remote-recovery`
as soon as they do.  That would make brute-forcing an SSH connection to
the expert's machine impractical
* The remote recovery script on the newbie's computer does `iptables -I
INPUT -i ! lo -m state --state NEW -j DROP` (and the same with
ip6tables) before creating the remote-recovery user, and removes those
rules after deleting the user.  That would make brute-forcing any
connection impossible, even if (for example) the newbie had set himself
up a publicly-available FTP server

Also, how would people feel about the following changes based on
unrelated problems:

* Instead of /.remote-recovery, set the user's home directory to
/tmp/rr, so it works even on some weird UMSDOS partition, and gets
auto-deleted if the computer gets unexpectedly rebooted
* Create an init script that deletes the remote-recovery user at boot
(again, in case of unexpected reboots)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Making apt-get powercut-proof

2008-05-05 Thread Andrew Sayers
Milosz Derezynski wrote:
 It could work if after the package is skipped apt recreates the
 dependency list; this might be bad to oversee though (especially without
 a GUI), however adding a printout a la These packages were originally
 meant to be installed: $PACKAGES Since package $PACKAGE was removed
 after the update began, they are NOT going to be installed [Y/n] where
 n would retry with the same package included again. One could even think
 of a skip-broken-packages option. Since non-installed packages remain in
 the dpkg/apt system as to-be-upgraded there is no real problem (if apt
 would additionally save the status then in update-manager they could be
 shown as unchecked with a hint that they failed to install).

I don't think I'd want the actual apt-get command line to be
second-guessing me, so how about this for a feature suggestion (to
update-manager and to synaptic):

If a package takes more than 60 seconds to install, force the details
window to open, and also present a dialogue box saying:

$PACKAGE has taken more than a minute to install.  This is normal for
some packages, but might be a sign of a problem.

The following packages depend on $PACKAGE: $DEPENDENCIES

[ Stop installing ] [ Skip this package ] [ Keep waiting ]

Clicking "Stop installing", obviously, stops the installation.  The
package that failed is then highlighted in the main window.

"Skip this package" kills the PID of the shell script that apt is
running.  I'd have to check, but I think that apt does the right thing
in that situation.

Clicking "Keep waiting" sends the dialogue box away for 5 minutes (then
for 10 minutes, then for 15 minutes...)

- Andrew

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss