Re: Possible inclusion of zram-config on default install

2012-12-08 Thread John McCabe-Dansted
On Sat, Dec 8, 2012 at 8:49 PM, Fabio Pedretti  wrote:
> 1. Traditional swap on hard drive is still used and suggested, zram is a
> better alternative and a complement to it, so it may for sure be useful to
> everyone who actually uses swap on a hard drive.

I am not sure that zram makes good use of a hard drive as backing swap.
My personal experience has been that either
1) zram is enough, and there is no need to use the HDD as swap; or
2) once zram is filled, the newer and more frequently used pages get
written to the HDD, and the memory used by the stale pages in zram
means that the system is slower than it would be without zram; or
3) zram is usually enough, but occasionally a misbehaving process will
attempt to allocate all memory. Disabling the large on-disk swap means
that the bad process is OOM-killed before it leaves the system
unusable due to swap death.

As I understand it, smart use of the hard disk as a backing store and
TRIM support are not currently implemented: the current focus is on
keeping compcache small, simple, and obviously correct so that it can
leave staging, and work on such features will not begin until after
3.8 [1]. After 3.8 these performance problems may be fixed, and zram
may make effective use of an on-disk backing store.

As it stands, I use compcache by itself without any on-disk swap.
Clearly, on-disk swap would be required for hibernation, though.
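
For reference, zram can be placed ahead of any disk swap using swap
priorities; a minimal sketch, assuming the staging zram module's sysfs
interface (the 256MB size is arbitrary):

  sudo modprobe zram                            # creates /dev/zram0
  echo $((256*1024*1024)) | sudo tee /sys/block/zram0/disksize
  sudo mkswap /dev/zram0
  sudo swapon -p 100 /dev/zram0                 # priority 100 beats disk swap
  swapon -s                                     # any disk swap (priority -1) is now a fallback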

[1] http://code.google.com/p/compcache/issues/detail?id=98

--
John C. McCabe-Dansted



Re: Ubuntu One needs cloud encryption like LastPass does it

2012-04-07 Thread John McCabe-Dansted
> LastPass may be secure today, but it is trivially easy for LastPass
> (or a hypothetical attacker who gains access to LastPass's
> infrastructure) to compromise that security simply by replacing the
> javascript code which does the client side encryption and decryption
> with some code that also passes the encryption key back up to the
> server (or wherever).

Hmm, in principle Firefox could support native encryption, where you
add the key to Firefox directly before even visiting the website.
Being a bit careful about frames and/or JavaScript should give you a
secure solution. The major issue then is: if security matters to you,
why do you want to access these files from the web? Are you sitting
down at an untrusted computer and just blindly entering your
encryption key?

Still, adding support for securely encrypted files as a cross browser
standard seems like a fundamentally cool thing to do.

-- 
John C. McCabe-Dansted



Re: Ubuntu should move all binaries to /usr/bin/

2011-12-05 Thread John McCabe-Dansted
On Mon, Nov 7, 2011 at 7:16 PM, Colin Watson  wrote:
> On Sat, Nov 05, 2011 at 02:40:31AM +0800, John McCabe-Dansted wrote:
>> We could even enhance which to look in obvious places off the path (perhaps
>> locatedb?)  and print the output on stderr if we really wanted to.
>
> Please don't - 'which' is used in scripts and needs to preserve its
> current behaviour.  Any extra behaviour should be added to a
> different/new program.

There are ways to detect a script versus an interactive shell, but
nobody seems to want this feature enough to justify adding it to which
anyway, so the point is somewhat moot.
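
For what it's worth, a sketch of the two usual heuristics (hints on
stderr keep stdout script-safe; only the -t test is usable from inside
a program like which):

  if [ -t 2 ]; then
      echo "hint: extra suggestions are probably safe here" >&2  # stderr is a tty
  fi
  case $- in *i*) echo interactive ;; esac  # works only in the invoking shell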

-- 
John C. McCabe-Dansted



Re: Ubuntu should move all binaries to /usr/bin/

2011-12-05 Thread John McCabe-Dansted
On Wed, Nov 2, 2011 at 4:02 AM, nick rundy  wrote:
> There are several situations where I need to find an executable. One that
> comes immediately to mind is when I need to specify what program to use to
> open an online stream and the program I want is not appearing in an offered

I find it quite annoying that Firefox doesn't automatically find the
file for me. I type in the executable name just as I do in bash, but
Firefox doesn't even search $PATH. In this case, needing to know
where the executable is seems like a bug.

-- 
John C. McCabe-Dansted



Ubuntu should move all binaries to /usr/bin/

2011-11-04 Thread John McCabe-Dansted
On Thursday, November 3, 2011, John Moser wrote:
>
> find a binary?  Here, I've solved this problem for you, completely.
> It's easy.  Do this:
>
> luser$ which ls
>
> luser$ which gnome-session
>
> luser$ which synaptic
>
> If it isn't in your path, then it's broken.  Something strange has
>

We could even enhance which to look in obvious places off the path
(perhaps locatedb?) and print the output on stderr if we really wanted
to; a sketch follows below. If users really want to do this sort of
thing we should add it to the GUI too, e.g. an "all
executables/applications" GNOME VFS metadirectory.


-- 
John C. McCabe-Dansted


Re: MSOffice usage comfort: Wine or VM+Rdesktop?

2011-09-03 Thread John McCabe-Dansted
On Thu, Sep 1, 2011 at 8:57 PM, Mihamina Rakotomandimby
 wrote:
> Hi all,
>
> My boss obliges me to use MSword for writing some core business documents.
>
> I have then the choice (duh!) either to:
> - Wine

Generally, it takes Wine some time to mature its support for a new
application. Until quite recently you couldn't run Office 2010 under
Wine at all, and looking at the links from:
  http://appdb.winehq.org/objectManager.php?sClass=application&iId=31
I'd say Office 2010 would still not be pleasant to use on Wine.

If you have to use MS Office, but have flexibility as to which version
you use, you could run an older version. Office 2007 has basic support
in Wine, and uses the same format as Office 2010. However, only Office
2000 has "Gold Support" in CrossOver Office. 2000 *should* be pleasant
to use in Wine, and you can use the official compatibility pack to
read 2007/2010 files. Using a 2003-or-older version has the added
advantage of avoiding the new and "improved" ribbon interface. I
recall 2000 as being quite mature. '97 was pretty good too, but it
isn't able to run Heroforge, IIRC.

I find the Word compatibility in LO a bit annoying: a form that is 3
pages in Word shows up as about 3.2 pages in LO.

-- 
John C. McCabe-Dansted



Re: Update NTFS-3G

2011-07-28 Thread John McCabe-Dansted
On Fri, Jul 29, 2011 at 2:29 AM, Nedas Pekorius  wrote:
> How about updating NTFS-3G to the newest upstream version 2011.4.12
> while the feature freeze hasn't started?

2011.4.12 is in sid, and would have propagated to Ubuntu if Ubuntu
didn't carry its own customizations (Oneiric has 2011.1.15-0.1ubuntu2),
so updating requires a manual merge. Is there something you consider
particularly important in 2011.4.12?

Looking at the changelogs, there have been some changes that seem
worthwhile:

[1] 
http://packages.debian.org/changelogs/pool/main/n/ntfs-3g/ntfs-3g_2011.4.12AR.4-2/changelog
  e.g. Configuring with --enable-crypto and --enable-extras.
[2] http://www.tuxera.com/community/release-history/
  e.g. ntfs-3g: fixed possible wrong hole size when overwriting compressed data.

--
John McCabe-Dansted
Not an Ubuntu Developer



Re: Update NTFS-3G

2011-04-25 Thread John McCabe-Dansted
On Mon, Apr 25, 2011 at 4:22 PM, Nedas Pekorius  wrote:
> I would like to ask developers to update 'ntfs-3g' from version
> '2010.8.8' to '2011.4.12'

While I agree that the latest ntfs-3g is really nice (IIRC it gave me
a 10x performance boost), I don't think we'll see 2011.4.12 in Natty,
as a feature freeze exception request would need an exceptionally good
reason at this late stage.

+1 for Natty+1, though.

-- 
John C. McCabe-Dansted



Re: IronPython and Mono are very old. How can we get an update?

2011-03-18 Thread John McCabe-Dansted
On Sat, Mar 19, 2011 at 7:36 AM, Vernon Cole  wrote:
> By another strange twist of fate, there is a PPA on launchpad which
> allegedly has a current version of mono, but it is only built for LTS
> versions of Ubuntu, so to get the latest version of mono, I have to
> unload Maverick and install an earlier version of Ubuntu. This is
> starting to sound like an episode of "The Twilight Zone."

BTW, here are some standard workarounds:
1) Attempt to install the version of Mono found in the PPA on your
Maverick install. Packages built against older versions of libraries
will often work when linked against newer versions.
2) Install debootstrap and install the later Mono in an LTS chroot (a
rough sketch follows this list).
3) Use a virtual machine.
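
A rough sketch of option 2, assuming a Lucid LTS chroot under
/srv/lucid (paths and the package name are illustrative):

  sudo apt-get install debootstrap
  sudo debootstrap lucid /srv/lucid http://archive.ubuntu.com/ubuntu
  # add the PPA to /srv/lucid/etc/apt/sources.list, then:
  sudo chroot /srv/lucid apt-get update
  sudo chroot /srv/lucid apt-get install mono-runtime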

> So, back to my original question: What can I do to help get the distro
> release up to the "latest stable version?" Should I be working on
> Natty?

It is possible the distro release is intentionally following the LTS
releases of Mono, in which case it may be better to find an easy
workaround, or to persuade the PPA maintainer to support non-LTS
releases of Ubuntu.

-- 
John C. McCabe-Dansted



Emit backtrace if memcpy is misused?

2010-11-11 Thread John McCabe-Dansted
Apparently there is a change coming in glibc which can trigger silent data
corruption bugs in existing software that misuses memcpy with overlapping
regions.
   http://lwn.net/Articles/414467/#Comments

Perhaps alpha versions of Natty or Natty+1 should test for this (using
LD_PRELOAD or a modified glibc), and emit a backtrace to apport when memcpy
is called with overlapping regions?

This would reduce performance somewhat, but alpha versions are for
finding bugs, not running fast. Even though testing for overlapping
regions could take a large fraction of the time spent in small memcpy
calls, apparently less than 1% of time is typically spent in memcpy,
so this shouldn't have a significant effect on system-wide
performance.

Valgrind also emits this warning, but it seems important that this gets
widespread testing, and even in alpha versions running everything under
Valgrind by default is not acceptable.
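
In the meantime, individual programs can already be spot-checked with
memcheck, which prints a "Source and destination overlap" warning; a
sketch (some-app is a placeholder):

  valgrind --quiet ./some-app 2>&1 | grep 'Source and destination overlap'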

-- 
John C. McCabe-Dansted


Re: LiveCD optimisations

2010-11-01 Thread John McCabe-Dansted
Hi Martin, I intended to send the following to lyx-devel, but my dog
ate it. Louis has mentioned AdvanceCOMP, but I thought you might find
the summary below useful. I think that the figures may be out by about
30%, possibly due to squashfs doing some form of intra-file
de-duplication, but I hope it gives a good feel as to where space can
be saved.

On Thu, Oct 14, 2010 at 5:45 PM, Scott Ritchie  wrote:
> On 10/03/2010 11:50 PM, Martin Pitt wrote:
>>  * Optimize images in packages, as proposed by Louis Simard in May.
>>
> We could do this in a very simple and automated fashion by just running
> pngcrush on every png image.  It could even be a part of the build
> daemon or built into debhelper.

Note that running advdef -z4 (after optipng) can further compress PNG
images, saving another 3MB. Advdef could also shrink the existing gz
files by about 5MB (less if there are fewer gz files because we remove
man pages etc.). Optimizing SVG files with Scour.py could save a
further 7MB on the LiveCD, but we should not run Scour on the card
images, as card games make use of non-visual tags in those SVG files
[1]. Other XML files can be optimized, but applications expect a
particular format; for a list of XML files safe to optimize see [2].

If we want to go crazy with lossless recompression, we could also run
jpegrescan on the jpegs, saving 0.5MB, and replace gz files with lzma,
saving up to 10MB. Optimizing HTML files with webpack and HTML::Clean
could save 100k on the LiveCD (2MB when uncompressed), but doing this
right is hard and probably not worth it.

By comparison, the changelogs take about 30MB, and manpages take about
15MB. If we keep the changelogs, they would be a good candidate for
lzma recompression since they shrink by 4.4MB.

For more information, see the thread:
  More LiveCD space optimizations
  http://osdir.com/ml/ubuntu-devel-discuss/2010-10/msg0.html

[1] 
http://www.mail-archive.com/ubuntu-devel-discuss@lists.ubuntu.com/msg11350.html
[2] 
http://www.mail-archive.com/ubuntu-devel-discuss@lists.ubuntu.com/msg11410.html

-- 
John C. McCabe-Dansted



Re: More LiveCD space optimizations

2010-10-11 Thread John McCabe-Dansted
On Fri, Oct 8, 2010 at 2:01 AM, Matthias Klose  wrote:
>> Also, there are 12MB of jar files, which are basically zip files. We
>> can also shrink those by 5MB or so with advzip, but that doesn't seem
>> to shrink a .tgz of them so it may not shrink the liveCD. Since zip
>> files compress file by file, we may be able to save space on the
>> liveCD by running "advzip -z -0" on them. That would expand them to
>> 24MB, but reduces the size of a .tgz of them to 4.6MB, possibly saving
>> space on the liveCD if squashfs is similarly efficient.
>
> how does OOo behave with the repacked zip file? is it faster, slower, does it

Neither; I made a script to open oowriter 100 times, and it didn't
find any consistent difference in performance.

> need more memory when it runs?  imo, changes like this should be integrated 
> into

gzip needs less than 1MB to decode (or even encode). The effect on
memory usage is likely to be minimal.
http://tukaani.org/lzma/benchmarks.html

> the package build process, and sent upstream. patches welcome.
>
> same for jar files. are these extracted as fast as without your changes by the
> jvm? if not, then these should be left alone (and afaik there shouldn't be any
> jar files on the live CD).

FYI, most of the jar files come from Firefox and OpenOffice. Firefox
refuses to start without these jar files. I doubt they are used by a
JVM.

Using the 7z deflate instead of gzip shouldn't harm decompression
time. In fact, it should improve speed slightly because there is less
compressed input to parse (if I tar up /etc and compress it with gzip
and advdef, the one compressed with advdef does in fact seem to gunzip
very slightly faster).

The other question is whether compressing decompressed jars (or vice
versa) affects performance. An Atom-based netbook with a rotational
disk seemed like a good machine to test on, as it is towards the low
end of performance. Repeatedly running oowriter and firefox ten times
did not lead to any consistent performance differences (see the
attached timeopen.ar). If there is any difference it is within a few
percent. This suggests that we can decompress them (to save liveCD
space), or compress them (to save installed space), without much
effect on performance.

It is also plausible that decompressing the jars saves installed space
on 'Btrfs -o compress' filesystems, but I have not tested whether
Btrfs compression heuristics automatically detect the jars as being
compressible.

-- 
John C. McCabe-Dansted


timeopen.ar
Description: Binary data


Re: More LiveCD space optimizations

2010-10-07 Thread John McCabe-Dansted
On Thu, Oct 7, 2010 at 10:05 AM, Louis Simard  wrote:
> Hey :)
>
> Thanks for the interest in this optimisation! Unfortunately I wasn't
> pushy enough in my thread from May-June and it wasn't included in the
> Maverick LiveCD. A pending question is what to do to include the
> recompressed files into the archive's packages [1].

I think this will be discussed at UDS-N, see:
http://archives.free.net.ph/message/20101004.065026.e553efd1.en.html

> 2010-10-06 16:08 GMT John McCabe-Dansted :
>> In May, Louis Simard proposed rencoding PNG files and SVG files to
>> reduce their size [Quoted 1]. I note that we can save further space by:
>>
>> 1) Using advdef on the png files in addition to optipng. This is what
>> optimizegraphics does, and this shrinks the pngs on the Maverick RC
>> liveCD from about 100.1MB to 85.3MB providing a saving of 14.8MB.
>
> So it does; I didn't know about that. Reading the man file for advpng,
> it gave a warning that it was only supported for AdvanceMAME-generated
> PNG files, so I was skeptical, but it does shave off about 4% more
> filesize on average with 'advpng -z4'.

We could test each file to ensure the image is identical, perhaps
using pngtopnm and md5sum; see the sketch below. This would be
especially important for jpegrescan/jpgcrush, which is at version
0.0.0-1.
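
A sketch of such a check (pngtopnm is in the netpbm package; note that
this compares the colour channels only, so checking the alpha channel
would need a second pass with pngtopnm -alpha):

  pngtopnm before.png | md5sum
  pngtopnm after.png  | md5sum    # the two digests should match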

>> 2) Recompressing gz files with advdef. Using advdef, we can shrink the
>> gz files from 89.5MB to 84.8MB, and provides a saving of 4.7MB.
>
> That's an interesting optimisation; I didn't really know about it
> either. However, I did use 7zip's Deflate compressor to recompress a
> .zip file of OpenOffice.org's from 5.9 MB to 5.4 MB. The method was
> rather crude, but it did the job:
>
> mkdir extracted
> cd extracted
> unzip ../file.zip
> cd ..
> 7z a -tzip -mx=9 -mfb=258 file.repack.zip extracted/*
> rm -r extracted

You mean images_human.zip? I have a hunch that compressing that file
wouldn't actually save space on the liveCD, as I can gzip it down to
3.9MB. It may be better to leave it as an uncompressed zip, and let
squashfs deal with it. Recompressing the PNGs contained in the zip
sounds worthwhile though. Strangely, even running advzip -z -0 on
images_human.zip shrinks it by 3%, and even shrinks the corresponding
images_human.zip.gz file.

Also, there are 12MB of jar files, which are basically zip files. We
can also shrink those by 5MB or so with advzip, but that doesn't seem
to shrink a .tgz of them so it may not shrink the liveCD. Since zip
files compress file by file, we may be able to save space on the
liveCD by running "advzip -z -0" on them. That would expand them to
24MB, but reduces the size of a .tgz of them to 4.6MB, possibly saving
space on the liveCD if squashfs is similarly efficient.
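
A sketch of the experiment on a single jar (advzip is in the
advancecomp package; foo.jar is a placeholder):

  cp foo.jar foo.jar.orig
  advzip -z -0 foo.jar        # -z recompress, -0 store entries uncompressed
  ls -l foo.jar.orig foo.jar  # the jar itself grows...
  tar czf - foo.jar | wc -c   # ...but compresses much better afterwards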

>> 3) Recompressing jpeg files with jpegrescan. This only saves 0.5MB,
>> but implementing this would add just a couple more lines of code, and
>> jpegrescan does not lose any picture quality [Quoted 2].
>
> jpegoptim indeed performs lossless optimisation of JPEG files by
> editing Huffman tables, and it's used as the basis of jpegrescan.
> However, jpegoptim doesn't make non-progressive files progressive, as
> I understand jpegrescan does. This may make jpegoptim's optimisations
> more transparent to applications that, for some reason, can't decode
> progressive JPEGs and thus have non-progressive JPEGs in their
> packages. However, most applications should be using libjpeg anyway,
> so perhaps this point is moot.
>
>>
>> Together these should shrink the liveCD by over 20MB. This is without
>> even considering the .xml and .svg optimizations Louis proposed.
>>
>> A further 10MB could be saved by recompressing the gz files as lzma.
>
> At what LZMA compression level? Default (7) or --best (9)?

--best

Also, if we want to take replacing deflate with lzma to extremes, we
could replace the deflate compression in the PNG files with lzma. A
command that does this is "advpng -z -0 $f && lzma --best $f". I found
that this could save 18.7MB. It may also degrade performance slightly,
but I doubt it would be too significant on modern CPUs. Running unlzma
on all 66MB of the .png.lzma files takes:
real  1m2.666s
user  0m6.540s
sys   0m5.610s

I think the user/sys figures are the relevant ones, and taking 12s to
read every PNG doesn't seem too bad. The main thing is that I doubt it
would work out of the box.

If we use lzma in the squashfs, just storing them all uncompressed
with advpng -z -0 could reduce the liveCD size. It probably wouldn't
help the installed size, though.

>> This seems reasonable as lzma has reasonable decompression times (e.g.
>> 7ms to decompress a largish manpage like lsof).
>
> 7 ms? What's your 

More LiveCD space optimizations

2010-10-06 Thread John McCabe-Dansted
In May, Louis Simard proposed re-encoding PNG files and SVG files to
reduce their size [1]. I note that we can save further space by:

1) Using advdef on the png files in addition to optipng. This is what
optimizegraphics does, and it shrinks the pngs on the Maverick RC
liveCD from about 100.1MB to 85.3MB, a saving of 14.8MB.
2) Recompressing gz files with advdef, which shrinks them from 89.5MB
to 84.8MB, a saving of 4.7MB. (A sketch of (1) and (2) follows this
list.)
3) Recompressing jpeg files with jpegrescan. This only saves 0.5MB,
but implementing it would add just a couple more lines of code, and
jpegrescan does not lose any picture quality [2].
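
Roughly how (1) and (2) can be reproduced on an unpacked squashfs
(optipng and advancecomp packages; the squashfs-root path is
illustrative):

  find squashfs-root -name '*.png' \
      -exec optipng -quiet {} \; -exec advdef -z4 {} \;
  find squashfs-root -name '*.gz' -exec advdef -z4 {} \;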

Together these should shrink the liveCD by over 20MB. This is without
even considering the .xml and .svg optimizations Louis proposed.

A further 10MB could be saved by recompressing the gz files as lzma.
This seems reasonable as lzma has acceptable decompression times (e.g.
7ms to decompress a largish manpage like lsof). Since the liveCD is
compressed anyway, it seems that if a file is worth compressing with
gzip, it is worth compressing with lzma. The command "man" already
seems to have lzma support, but we'd want to test each application to
ensure that it functions correctly when its .gz files are replaced
with lzma files. We could also selectively recompress the gz files, as
some .gz files are actually smaller (by about 40 bytes) than the
corresponding lzma file; a sketch of that follows.
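
A sketch of the selective recompression, keeping whichever encoding is
smaller (the manpage path is illustrative):

  for f in usr/share/man/man1/*.gz; do
      zcat "$f" | lzma --best > "${f%.gz}.lzma"
      if [ "$(stat -c%s "${f%.gz}.lzma")" -lt "$(stat -c%s "$f")" ]; then
          rm "$f"               # lzma won
      else
          rm "${f%.gz}.lzma"    # the .gz was already smaller
      fi
  done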

Given that recoding SVG files can save 7MB [1], simply recoding files
could free up 37MB for the Natty LiveCD (and presumably also reduce
the average size of debs in the repos by about 5%).

[1] 
http://www.mail-archive.com/ubuntu-devel-discuss@lists.ubuntu.com/msg11337.html
[2] http://news.ycombinator.com/item?id=803839

I attach the script I used to check how much space would be saved.
This is purely for reproduction of my results; it is not integrated
into Louis's script.

-- 
John C. McCabe-Dansted


lossless_recompression_iso.sh
Description: Bourne shell script


Idea #1242: Restoring the bootloader by Ubuntu installation CD

2010-07-25 Thread John McCabe-Dansted
This idea seems fairly easy to implement and has over 4000 votes. I
have attached a simplistic prototype script that I have used a few
times to successfully restore my MBR.

It has a few obvious limitations, but the real issue is how we'd
integrate it with the live CD. Even just adding it as a script to be
run from the command line would IMHO be much better than having
thousands of howtos on the net, many of which are out of date and all
a bit scary.

Another minor issue is how to structure the UI. I guess either
whiptail or PyGTK would be a good choice. Whiptail would probably be
simpler, and could be run without X.
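
For illustration, a whiptail front end could be as small as this (a
sketch, run as root, with the same naive assumptions as the attached
script, e.g. that / is the first partition of the chosen disk):

  #!/bin/sh
  set -e
  DISKS=$(awk '$4 ~ /^[sh]d[a-z]$/ {print $4, "disk"}' /proc/partitions)
  TARGET=$(whiptail --menu "Restore GRUB to which disk's MBR?" 15 60 4 \
      $DISKS 3>&1 1>&2 2>&3)       # whiptail prints the choice on stderr
  mount "/dev/${TARGET}1" /mnt     # naive: assume / is the first partition
  grub-install --root-directory=/mnt "/dev/$TARGET"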

-- 
John C. McCabe-Dansted


fixgrub.sh
Description: Bourne shell script


Re: bad KVM network performance with 10GB

2010-07-05 Thread John McCabe-Dansted
On Mon, Jul 5, 2010 at 7:12 PM, Scholte van Mast, Rene
 wrote:
> Hello,
> I have trouble with the network performance inside my virtual machines.

As I understand it, this mailing list is for development discussion,
and people wanting support are generally invited to report the problem
to one of the support channels listed here:

http://www.ubuntu.com/support/community

Before you do that, a few things I'd check are (VMs often have
trouble with SMP):

1) Does the problem occur when you only use a single connection on
both host and client?
2) How much CPU is used on the client? (What does top say?)
3) How many CPUs are reported in /proc/cpuinfo?
3a) Does increasing the number of CPUs assigned to the client help?
3b) Alternatively, does assigning only a single CPU to the client help?

Also, there is a related discussion on tuning for performance on 10GB
links with KVM:
http://linux-iscsi.org/index.php/KVM-LIO-Target

Finally, it is known that TX mitigation can cause problems with KVM
performance. I am not sure if this is your problem, but I'd look at:
https://bugzilla.redhat.com/show_bug.cgi?id=504647
and see if disabling TX mitigation helps.

-- 
John C. McCabe-Dansted



Re: Thoughts on quitting and window controls

2010-04-07 Thread John McCabe-Dansted
On Thu, Apr 8, 2010 at 9:45 AM, Derek Broughton wrote:
>
> Neither do I - but for, apparently, opposite reasons.  I don't understand
> why we need, or even want, "minimize to tray" and "minimize to task bar"
> (aargh, please don't push _my_ buttons, and write "minimise" :-) )
>

OK. Both are correct.


>
> Surely an app should minimize to one or the other, in which case "Close"
> could close.
>

Maybe. But the paradigm isn't really that pressing the Close button
minimizes the window to the systray. The paradigm is that closing a
window closes that window *and* that closing a window never closes a
service. I think most people would be confused if, e.g., clicking
close on the main sound preferences dialog stopped the sound server.
Arguably, the fact that you can get the window back via the systray
doesn't mean that the *window* has been minimized to the systray, any
more than closing a document window "minimizes" it to the "My
Documents" folder.

I think nobody expects the "X" button to close services that were
started at start-up. So AFAICT the debate has more to do with the
question:
"1) If we launch a systray service the same way we launch an
application:
a) should we also shut down the service the same way we would shut
down an application?; or
b) should we shut down the service the way we would shut down other
services found in the systray?"

Or, another question under debate:
"2) Should closing a window *ever* close a service found in the systray?"

These are related, but note for example that we could specify that
applications should never put an icon in the systray unless the user
explicitly asks them to register themselves as a service, so we could answer
yes to both 1a and 2.

-- 
John C. McCabe-Dansted


Re: Thoughts on quitting and window controls

2010-04-07 Thread John McCabe-Dansted
On Thu, Apr 8, 2010 at 5:38 AM, Davyd McColl  wrote:

> For what my input is worth, I'd just like to point out that I'm one of
> those people who is annoyed when an app which runs in the systray *exits*
> when I close the interface window (main or otherwise). For apps that support
> the "minimise to tray" functionality instead of "closing the window
> minimises to tray" idea, I find it particularly difficult to re-train myself
> to minimise instead of close. To me, minimise means "minimise to the task
> bar".
>
> Personally, I don't think that an extra button in the title-bar would cut
> the mustard either. Seems that the only viable option that I can see is to
> handle both preferences (far be it for me to force *my* preferences on
> another person) -- perhaps the solution is a global preference for this kind
> of thing so that neither I nor the OP have to configure multiple tray-aware
> apps to bend to our personal preference.
>
> This option could perhaps be a checkbox in the "Window Preferences"
> settings dialog available from System->Preferences->Windows
>

Perhaps this settings dialog should appear the first time that a
window is closed without closing an app.

I remember the first time that I closed a window without quitting the app I
was quite confused. However since then I have found it more natural that
closing a window doesn't also make the notification icon disappear and e.g.
stop music from playing.

Displaying some form of dialog (particularly one that requires some
form of choice from the user) seems like a way to inform the user
about the notification area, so they don't think they have to use ps
to close the application. Perhaps closing the dialog without making a
choice should close Rhythmbox (this time)... this would help if a user
accidentally clicks on Rhythmbox at night and just wants it to stop
playing loud music right *now*, without having to learn about a
notification area.

-- 
John C. McCabe-Dansted


Re: 2 panels waste the height needed for web browsing on 16/10 screens

2010-04-03 Thread John McCabe-Dansted
On Sat, Apr 3, 2010 at 4:58 PM, Jérôme Bouat  wrote:
>  > Ubuntu - two bar (desktop edition)
>
> The issue is that small screens are now shipped on high performance
> laptops (~1000 €).
>
> On those high performance laptops, I would not use the netbook remix
> flavor but the genuine flavor of Ubuntu.

A $200 netbook runs the "genuine" flavor of Ubuntu well. Actually,
I've found the genuine flavor of Ubuntu runs on pretty much any old
hardware, while UNR is painful to use without a 3D accelerator even
with graphics quality set to its lowest.

UNR seems to be less about being cut down* and more a way to provide
the "maximus" possible usable screen space, and bigger icons that are
easier to hit on those annoying touchpads -- Ubuntu is already much
lighter than Windows Vista/7. I've heard they are moving towards
AbiWord for UNR, but you can easily install OpenOffice if that is what
you want (or, as Dmitrijs said, install ubuntu-netbook-remix on an
existing Ubuntu install).

* lubuntu is quite a nice cut down desktop, which also only has a
single panel by default.

-- 
John C. McCabe-Dansted



Re: Rightness of firefox

2010-03-29 Thread John McCabe-Dansted
On Mon, Mar 29, 2010 at 11:54 PM, Fabio Alessandro Locati
 wrote:
> I was looking at the firefox package today. I noticed that there is a
> huge number of patches in that package [1]. If I remember correctly,
> it is against the Firefox license to redistribute it with the same
> name and logo if changes are made. If this is still true, I think
> Ubuntu should immediately remove either the naming/logo stuff or the
> patches.

Ubuntu has permission from Mozilla to make some changes. I don't know
the terms of the deal, but I assume that the Ubuntu devs do, and that
they are abiding by them.

"My goal in our own discussions with Mozilla has been to establish
that it really is possible for a distribution that cares about free
software and Mozilla to agree on a framework which gives us both what
we need. The Ubuntu team went as far as preparing packages without the
Firefox name in case we were unable to reach an agreement – but in the
end the fact that we kept the lines of communication wide open meant
that we were able to find a middle ground and ship the packages we
want while still supporting the Firefox name and Mozilla’s work.
Nobody sold out."
  -- http://www.markshuttleworth.com/archives/79

-- 
John C. McCabe-Dansted



Re: FF 3.6.2 doesn't work after Lucid Update

2010-03-23 Thread John McCabe-Dansted
On Wed, Mar 24, 2010 at 12:29 AM, Michael Kappes  wrote:
> since I updated my laptop (T41) with the latest Lucid updates from
> today, Firefox doesn't start. Starting it from bash says:

I had some difficulty with firefox-3.6 refusing to start (on 9.10).
Moving ~/.mozilla out of the way fixed the problem for me.

I have a feeling this would be more on topic elsewhere, though it does
loosely fall under "Sharing of experiences with the current
development branch of Ubuntu".

-- 
John C. McCabe-Dansted



Re: Evolution & Ubuntu 10.04 LTS

2010-03-10 Thread John McCabe-Dansted
On Wed, Mar 10, 2010 at 7:14 AM, Jan Claeys  wrote:
> How will you revert all people's configuration & data (e.g. files
> created with an incompatible new file format)?

Where the upgrade to the new format is done by dpkg, we could add
downgrade scripts as well as upgrade scripts. As for new files, users
already have to deal with different file formats (e.g. what if someone
emails them a file from a newer/older version of Ubuntu?); it may be
wise to manually save the new files back in the old format before
doing the downgrade. If gumptacular automatically upgrades
~/.gumptacular to a new incompatible format, then gumptacular is
broken and needs to be fixed (consider, for example, logging in to
your account from both a SunOS lab and a Linux lab, each with its own
incompatible version of gumptacular - you will have much worse
problems with gumptacular than an occasional downgrade would cause).

In general though I think it would be better to allow the user to
install both the experimental and the stable version side by side,
though this may make the packaging more complicated again.

-- 
John C. McCabe-Dansted



Re: Evolution & Ubuntu 10.04 LTS

2010-03-09 Thread John McCabe-Dansted
On Wed, Mar 10, 2010 at 2:28 AM, Patrick Goetz  wrote:
> Journal, lwn.net, and breathless reviews across the blogosphere.  Users
> start clamoring for the features of gumptacular 2.0, not knowing how
> they ever lived without them.  So,
>
>    apt-get install gumptacular/ubuntu-experimental

Apt-get already supports this. E.g. it is possible to configure
apt-get so that
  apt-get install gumptacular/lucid
will attempt to install the lucid version of gumptacular onto a Karmic
machine; a sketch follows.
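
A sketch of the configuration, assuming a Karmic machine borrowing
from Lucid (gumptacular is, of course, the hypothetical package from
this thread):

  # /etc/apt/sources.list gains the line:
  #   deb http://archive.ubuntu.com/ubuntu lucid main
  # /etc/apt/preferences keeps lucid from taking over everything:
  #   Package: *
  #   Pin: release a=lucid
  #   Pin-Priority: 100
  apt-get update
  apt-get install gumptacular/lucid  # or: apt-get -t lucid install gumptacular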

> spins out of control and melts the system board.  Problem.  The
> suggested new feature is a way to simply back out of the experimental
> update whenever you get bitten:
>
>    apt-revert gumptacular

This is what is unsupported. I understand that debs say how to upgrade
your configuration etc. to the new version, but not how to downgrade
it.

I understand that supporting downgrades could double the work required
to produce a deb, but I agree that it is worthwhile for at least
important projects like firefox (and of course gumptacular ;).

-- 
John C. McCabe-Dansted



Re: Considerations about official localized editions of Live CDs

2009-12-18 Thread John McCabe-Dansted
(Posted just to ubuntu-devel-discuss@lists.ubuntu.com, as I don't think my
mail is relevant to the other lists)

On Fri, Dec 18, 2009 at 2:26 PM, Paul Tagliamonte wrote:
>
> I love this idea!
>
> It will be a considerable amount of overhead for canonical to get (
> EVEN ) more CDs, but it would be a great step to a truly global
> operating system.
>

I can see three forms of cost:
1) Ship-it cost.
2) Space on mirrors.
3) Effort to generate images.

Are these the main forms of overhead?

If (1) is a problem, we could ship only one type of CD. IMHO ship-it
is only of limited use anyway, and if we are going to mail people a
disk we may as well mail a DVD; cost is clearly an issue and DVDs can
be a bit more expensive, e.g. 50c vs. 40c*, but that does not seem a
big deal if the DVD is only shipped to countries not supported by the
main CD.

Space on the mirrors does not seem too much of an issue, as the
country mirrors should only need one localisation. Presumably adding
1TB of storage to archives.ubuntu.com isn't a big deal. It would be
kind of cool if the localized live CDs could be generated with a
script (similar to jigdo), avoiding this whole issue.

The human effort to generate the images should not be too great if it
is done by a script. The CPU time shouldn't be a big deal either,
since the disk images are only released once every few months.

* according to: http://www.dvddemystified.com/dvdfaq.html#5.1

-- 
John C. McCabe-Dansted


Re: Install Wizard 'Looks Too Complicated'

2009-12-01 Thread John McCabe-Dansted
On Wed, Dec 2, 2009 at 12:06 AM, Daniel Hollocher
 wrote:
> password.  Any sort of password automation would simplify the
> situation for a few people at the expense of making it more
> complicated for the rest of us.  The level of encryption doesn't seem
> to matter.

OK. The issue where we want to migrate multiple users for whom the
admin does not know the password may be better handled by the
Migration Assistant.
   https://wiki.ubuntu.com/MigrationAssistant/Karmic

-- 
John C. McCabe-Dansted



Re: Install Wizard 'Looks Too Complicated'

2009-12-01 Thread John McCabe-Dansted
On Tue, Dec 1, 2009 at 5:27 AM, James Westby  wrote:
>  * It's a feature of dubious value to begin with. After it had taken some
>    time doing its thing you would need to have the user type in the password
>    anyway to confirm (you can't assume, and you can't really show it to them).

Quite. "Cracking" the password is pointless. All we need to know is
whether the password matches the hash (just like windows). So in this
case we could in principle fill the second password field with stars,
and announce a match as soon as the password the user enters matches
the hash from XP.

In fact, we don't even need the user to enter the actual password
until they log in. We could add support for NT hashes to PAM and copy
the NT hash. We probably wouldn't want to copy the "LM" hash, as this
is the insecure, easily broken hash*, and if the user wipes XP we
wouldn't want copies of it to be left lying around. Allowing a mass
import of users from Windows may help if the machine has several
users, whose passwords the administrator may not know (or want to
know).

(*) according to:
http://www.enterprisenetworkingplanet.com/netsecur/article.php/3783156/Use-Ophcrack-to-Defuse-Windows-Security-Timebomb.htm

> Can we please spend our time on other worthwhile features and not argue about
> whether "cracking" tools should exist for all to use or not?

Since we can get the NT hash without cracking anything, it may be
worthwhile. If we import the NT hash, and detect the region and mirror
from IP/traceroute, then we do not need to ask the user any questions.
We could go straight to a "Review and Install" screen that could look
like:

+--[Review and Install]------------------------------------------------
| These are the settings detected by Ubuntu. If you are familiar with
| these settings you may want to customize them. However, if you do
| not understand these settings it is safe (and recommended) to leave
| them unchanged.
|
| Language: English
|
| Administrator Username: xp
| Administrator Password: * (same as Windows)
| Other authorized users: john, user, guest (imported from Windows)
|
| Region: Perth/WA
| Mirror: ftp.iinet.net.au
| Partition Sizes:
|   Main '/' Partition: 100 GiB (50% of remaining space)
|   Swap Size: 2 GiB
| Keyboard Type: US/International
|
|  [Cancel]   [Install]
|
+-----------------------------------------------------------------------

The language could be chosen at the boot menu, as it is now, or
detected from Windows.
-- 
John C. McCabe-Dansted



Re: Install Wizard 'Looks Too Complicated'

2009-11-28 Thread John McCabe-Dansted
On Sat, Nov 28, 2009 at 6:52 PM, Conrad Knauer  wrote:
...
> If the user is connected to the internet, might it be possible to
> guess their physical location (e.g. for time zone) by IP address?
> (http://www.tracemyip.org/ seems to be able to :) as most people will
> want to install their systems where they are going to use them.

Even better would be if we could determine the correct mirror. E.g. I
use iinet, and traffic to the ftp.iinet.net.au mirror is free, so I
always set the mirror to iinet so I don't have hundreds of megabytes
taken from my quota.

In general, it would be best if Ubuntu automatically picked a mirror
in the user's freezone (if one exists).

(1) We could use the IP address to detect if we are inside the iinet
freezone. iinet owns quite a number of IP ranges (see e.g.:
http://forums.whirlpool.net.au/forum-replies-archive.cfm/704521.html,
which mentions the possibility of using a BGP feed to keep up to
date).

(2) We could use traceroute. E.g. I get
 2  nexthop.wa.iinet.net.au (203.59.14.16)  18.911 ms  24.903 ms  25.088 ms
so if nexthop.*.iinet.net.au appears in the traceroute, this may
indicate that we should use the iinet mirror; a sketch follows.
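
A sketch of heuristic (2) (the mirror path is illustrative):

  if traceroute -m 6 archive.ubuntu.com 2>/dev/null | grep -q 'iinet\.net\.au'
  then
      # suggest the iinet mirror
      echo "deb http://ftp.iinet.net.au/pub/ubuntu karmic main"
  fi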

(3) Ping is *not* useful in this case. The ping to the Australian
Ubuntu mirror is a fifth of the ping to the iinet mirror, but this is
a small "price" to pay for being in the iinet freezone.
64 bytes from mirror.aarnet.edu.au (202.158.214.106): icmp_seq=1
ttl=56 time=86.7 ms
PING ftp.iinet.net.ua (195.39.196.131) 56(84) bytes of data.
64 bytes from hold.iname.ua (195.39.196.131): icmp_seq=1 ttl=43 time=473 ms

> For the partitioning step, if there is another OS present, the default
> option should be to install along side it; if none, use the whole
> disk.  Partitioning is always a scary step, so that should be
> generally hidden.

The one thing I find scarier than partitioning is the idea of
partitioning happening without warning. I guess they will get a
warning when they reach the "Review and Install" button.

> A look at the pics on
> http://ubuntu-tutorials.com/2009/10/02/ubuntu-9-10-karmic-koala-beta-reviewed-screenshots/
> suggests:
>
> Step 5 (user name, password, computer name) should be the first step
> and really the only things that a user should need to fill in...
> unless there's already an OS on the system that Ubuntu can extract a
> u/n from :)

There are algorithms for extracting the password from XP as well...

...
> The bottom of the page should then have a [Review and Install] button
> leading to what is now Step 6 which will spell out the changes and
> then an [Install] button at the bottom of that.

-- 
John C. McCabe-Dansted



Re: Standing in the street trying to hear yourself think

2009-07-08 Thread John McCabe-Dansted
On Sat, Jul 4, 2009 at 2:23 AM, Evan wrote:
> We also seem to have a duplication of effort on several fronts. At last
> glance we have:
>
> - mailing lists
> - IRC
> - wiki
> - launchpad
> - launchpad answers
> - forums

I wrote a blueprint for maintaining a database of errors. I suggest
that for all questions of the form
   "What does this error mean? What should I do?"
the way to get help should be via this database. E.g. say I run
  Xvfb :2 -screen 2 1600x1200x32
and get the error
  Couldn't add screen 2
Then, instead of hunting around in all of the above, I simply run
  autohelpsys Xvfb :2 -screen 2 1600x1200x32
(autohelpsys has not yet been written.)
autohelpsys searches the database for a solution. If a solution is not
found, it offers to create a new ticket. Someone (possibly me) then
submits something like the following to the database:

COMMAND=Xvfb
REGEX=Couldn't add screen [[:digit:]]
ANSWER=This is usually caused by selecting 32 bits per pixel. Try 24
or 16 instead.

The database then sends me an email, asking me to confirm that this
fixes my problem.

https://wiki.ubuntu.com/AutoHelpSysSpec
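
A minimal sketch of the lookup side, reading records in the
COMMAND/REGEX/ANSWER format above (everything here, including the
database path, is hypothetical):

  #!/bin/sh
  DB=/var/lib/autohelpsys/errors.db
  CMD=$1; shift
  ERR=$("$CMD" "$@" 2>&1 >/dev/null)   # capture stderr only
  while IFS='=' read -r key val; do
      case $key in
          COMMAND) cmd=$val ;;
          REGEX)   regex=$val ;;
          ANSWER)  if [ "$cmd" = "$CMD" ] && echo "$ERR" | grep -Eq "$regex"
                   then echo "$val"; exit 0; fi ;;
      esac
  done < "$DB"
  echo "No known answer; offering to create a new ticket..." >&2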

Although the above example is for command line applications, it would
be possible to add a link to GUI warning and error dialogs.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia



Re: Provide a GUI option in the installer to enable popcon

2009-06-24 Thread John McCabe-Dansted
2009/6/25 Matthew Paul Thomas :
>>...
>> If just 1% of Ubuntu users tick the box, that gives us enough data to
>> improve Ubuntu by justifying our decisions with evidence.
>>...
>
> The absolute size of a sample is more important, statistically, than its
> relative size. In other words, 1136581 popcon submissions is a large
> enough sample regardless of how many Ubuntu users there are in total.
> What is more important now is reducing bias -- bias towards current
> users against potential users, towards users who fiddle with settings
> against users who don't, and so on.

What if the install program randomly invited 1% of users to join popcon?

Something like:

"We want to make sure that Ubuntu suits your purposes. To do this it
would help us to know what software you use. Do you agree to
automatically provide this information to Ubuntu?

We will not store personally-identifying information as we only use
this information to determine the popularity of different software
packages supported by Ubuntu. Only 1% of users are invited to this
survey"

The obvious disadvantage would be that it introduces non-determinism
into the mix, which may mess up some forms of scripting, and users who
insist on following step-by-step instructions exactly.
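
A hypothetical sketch of the one-in-a-hundred draw, e.g. in a
shell-based installer hook (/dev/urandom avoids depending on $RANDOM
being available):

  if [ $(($(od -An -N2 -tu2 /dev/urandom) % 100)) -eq 0 ]; then
      show_popcon_invitation=true  # roughly 1% of installs see the question
  fi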

The obvious advantage is that it gives us a sample as unbiased as
possible without mandatory popcon, while adding only 0.01 mouse-clicks
to the average install. Perhaps the submissions to popcon that come
from the specifically invited people could be marked as such, so that
we could keep the relatively unbiased sample pure and compare it to
the sample of people who actively hunted popcon down.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia



Re: Ubuntu Desktop Unit Consistency (LP: #369525)

2009-06-10 Thread John McCabe-Dansted
On Wed, Jun 10, 2009 at 9:01 AM, Christopher Chan <
christopher.c...@bradbury.edu.hk> wrote:

> Besides, I have already made clear in later posts in this thread that I
> really do not care what is used so long as it is uniform across all
> operating systems. If Ubuntu wants to do its thing while other operating
> systems keep convention, be my guest. You bet that I, for one, will not
> be installing it anywhere on school campus because the school has more
> important things to do than preach Ubuntu is right and all other
> operating systems are wrong which is why you have different numbers for
> GB on Ubuntu and XP, Solaris and Mac OS X and I will not risk looking
> like a fool or an Ubuntu/Linux fundamentalist for something the school
> may or may not care about.


Opinion noted.

But how will you explain that you can't burn a 4.5GB file onto a
4.7GB DVD?

Preach that Microsoft is right and TDK, Verbatim, Western Digital etc. are
all wrong?

For myself, I don't much care what Microsoft does. But I do have to
read hardware labels, and the DVD example caught me. At first I
thought k3b was being ultra-conservative, in case it needed an
absurdly large 200MiB index for some reason. YMMV.

I do broadly agree that it would be best to discuss this with other OS
vendors, or at least other OSS vendors, before making such a change.
However, my hunch is that users wouldn't be too scared by "GiB". I'd
imagine at first that they would see GiB where they expect GB and
figure they look much the same, so they probably mean something
similar. But maybe it would still provide a useful clue as to why they
can't fit a 4.5GiB file onto a 4.7GB disk. We'd really have to test
this on real users to be sure (and this test may be relevant to the
other vendors and standards bodies too).
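
A quick check of the arithmetic behind the DVD example: a "4.7GB" disc
is 4.7 * 10^9 bytes, which is only about 4.38GiB:

  awk 'BEGIN { print 4.7e9 / 2^30 }'  # prints ~4.377, so a 4.5GiB file can't fit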

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia


Re: shameful censoring of mono opposition

2009-06-10 Thread John McCabe-Dansted
2009/6/10 Mark Fink 
>
> yes it does and the people behind the censorship need to be exposed
> for what they really are


Moderators?

As I understand it, the Ubuntu forums are for useful, constructive
posts that adhere to the Code of Conduct. It appears to be almost a
consensus that those posts were not useful advocacy, and if taken as
coming from the Ubuntu community would put Ubuntu in a poor light.

If you want an unmoderated forum there are plenty of those, but
developers don't get a free pass either. I've had contributions (code)
rejected for a number of reasons:
* Use of POSIX in a pure ANSI C project.
* Use of Perl.
* Disagreement as to whether my feature was actually useful.
* The feature became obsolete by the time it was ready to be merged.

There was even a prominent developer who literally broke his back (if
you consider an injury a break) working on the kernel, and never had
his patches accepted.

Even so, I have to say I find advocacy harder than submitting code.

First of all, advocacy requires challenging the beliefs of others
while maintaining their respect. I do not find this an easy task.
Although a human won't crash if you miss a semicolon, excessive
spelling and grammatical mistakes will cost respect. But in advocacy,
formal correctness is not nearly enough. If you don't respect your
readers they will lose respect for you, and won't listen to you when
there are so many others to listen to.

Perhaps most importantly, everything that makes it harder to submit
code makes it harder to submit *bad* code. 90% of your mistakes in
code will be found by the compiler. More will be found by basic
testing. Feedback from reviewers helps you ensure that the code you
are writing is actually useful to the intended recipients.

Advocacy may seem easy. There are no compile-time errors. However,
this just means that it is that much harder to check for problems in
your advocacy.

For example: "...obviously some of the forum moderators are novell
employees..."

Presumably the people you are trying to convince are people who disagree
with you, but who are not nefarious agents of some malevolent entity. These
people *know* that they themselves are not secretly Novell employees*, so
they know that this is not why they disagree with you. To them it will be
clear that this is an "Ad hominem" attack, even if they don't use those
words.
  http://en.wikipedia.org/wiki/Ad_hominem

So your contribution was rejected. This is an everyday occurrence for
a developer. You have the same options:
1) Learn from the feedback given, and try to adapt your contribution
so it is accepted.
2) Learn from the feedback given, and take your contributions
elsewhere.
3) Take your contributions elsewhere.

I'd recommend (2). For starters, moderation isn't really on topic for
ubuntu-devel-discuss. You could possibly discuss moderation on
sounder, but it seems to have been discussed to death already. Also,
the problem with unmoderated forums is that few people read more than
a few of the posts. But even if they do read your post, they are
unlikely to be persuaded... I doubt that a post that doesn't pass
moderation is likely to be very persuasive.

I guess the point I am getting at is that the moderators are helping
you and the anti-Mono position by filtering out posts that don't meet
basic standards. There was one time I was glad I posted to a moderated
forum, because the post I wrote wasn't exactly inflammatory, but it
wasn't that constructive either, and all in all I was glad that my
less admirable prose wasn't permanently archived on the web.

* Of course, if you argue eloquently enough you could perhaps persuade
Novell employees of your position. Just because you work at a company
doesn't mean you agree with all of its decisions; and someday they
might even be able to reverse some of those decisions. I don't think
this is likely, but insulting them isn't going to make it any more
likely.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia


Re: shameful censoring of mono opposition

2009-06-08 Thread John McCabe-Dansted
2009/6/9 Derek Broughton 
>
>
> Sorry, but no.  You are pretending to have a rational discussion, while
> dismissing perfectly valid arguments.
>
> > The codecs are
> > not-in-Ubuntu the same way as Wine, because they are not installed,
>
> No, they are not.  The codecs are NOT in Ubuntu at all.  Show me where they
> exist in the repositories.  Wine is in the repos.


I don't have strong feelings about Mono, but the OP suggests:

"The solution seems obvious and easy: don’t make Mono or Mono apps
part of the default install. Leave them in the repos for the users who
want them..."

Sounds much like the current policy on restricted drivers. So from the
POV of the OP, "getting rid of mono" means removing it from the
default install. A number of people have interpreted it to mean
removing it from the repos, but that doesn't appear to be what the OP
wants; indeed, AFAICT no one here has /specifically/ suggested
removing Mono from the repos.

To put things in perspective, mono requires 15MiB in .deb form and
44MiB installed (see the script below my signature). That's about 2%
of the space on the CD. If, hypothetically, Ubuntu was prevented from
distributing Wine, at least we wouldn't have to rebuild 'stable'
CD images.

-- 
John C. McCabe-Dansted
total=0
dpkg -l | grep mono | sed 's/[[:alnum:]]*  //' | sed 's/ .*$//' | \
  sed 's/[[:space:]]//g' | while read f
do
  grep "Package: $f" -A20 \
    /var/lib/apt/lists/*_pub_ubuntu_dists_jaunty_main_binary-amd64_Packages
# For the .deb (download) sizes, use this final pipeline instead:
#done | grep "^Size: " | sed 's/[^[:digit:]]//g' | while read size ; do
#  total=$(($total+$size)) ; echo "$size $total $(($total/1024/1024))MiB" ; done
done | grep "^Installed-Size: " | sed 's/[^[:digit:]]//g' | while read size ; do
  total=$(($total+$size)) ; echo "$size $total $(($total/1024))MiB"
done


Re: [rfc] improving 32bit user performance/experience...

2009-05-20 Thread John McCabe-Dansted
On Wed, May 20, 2009 at 1:55 AM, Daniel J Blueman
 wrote:
> I was trying to raise a more general point about the minimum spec
> across the board, including the embedded and old-server hardware.
>
> I challenge anyone to find someone using Ubuntu 8.10/9.04 on a
> processor which doesn't support the full i586 instruction set (eg
> i386/i486 or something with incomplete i586 support).

This would be difficult, since Ubuntu dropped support for i486 ages ago:
https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2008-September/005385.html

The i386 is an architecture identifier, not a promise to actually
function on the original i386.

It would appear that Ubuntu has preemptively implemented your suggestion :)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Current situation of amarok, and of latex tools

2009-05-15 Thread John McCabe-Dansted
On Fri, May 15, 2009 at 12:42 AM, Andrew  wrote:
> If its developers really think that users should stick with 1.4 they
> aren't doing a good job promoting that.
>
> [1] http://amarok.kde.org/
> [2] http://amarok.kde.org/wiki/Download:Kubuntu
> [3] http://amarok.kde.org/wiki/Download:Source
>
> - Andrew Starr-Bochicchio

Well this part of the announcement of Amarok 2.0 at least made it
clear there were a number of regressions:
  "It is important to note that Amarok 2.0 is a beginning, not an end.
Because of the major changes required, not all features from the 1.4
are in Amarok 2. Many of these missing features, like queueing and
filtering in the playlist, will return within a few releases. Other
features, such as visualizations and support for portable media
players, require improvements in the underlying KDE infrastructure.
They will return as KDE4's support improves." --
http://amarok.kde.org/en/releases/2.0

This was noted on
   http://brainstorm.ubuntu.com/idea/16533/
   "do not remove Amarok 1.x.x from jaunty or from other future releases."

However not many people voted and it did not attract the attention of
the developers.

Would the developers feel that having upstream mark releases that are
stable but have many outstanding regressions as "Early Adopter
Releases" would make their lives easier? Would it have made a
difference in this case? And would having a version clearly marked as
an "EAR" release help you decide whether you'd download, compile and
install it for personal use?

As a (technical) user if I noticed that a distribution had switched to
an EAR release of something I cared about this might help me
understand whether I wanted to upgrade. It may also help upstream
avoid "should KDE 4.0 have been released" style flamewars.

-- 
John C. McCabe-Dansted

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Current situation of amarok, and of latex tools

2009-05-15 Thread John McCabe-Dansted
On Thu, May 14, 2009 at 9:30 PM, Vincenzo Ciancia  wrote:
> Il giorno gio, 14/05/2009 alle 19.55 +0800, John McCabe-Dansted ha
> scritto:
>> Quite. I haven't noticed any problems with LaTeX. This may be because
>> I use LyX+xdvi. LyX+Okular seems to be fine too, although Okular is
>> rather sluggish compared to xdvi, it is usable unlike e.g. Acroread
>> (on either Linux or Windows). Forward and backward search "works for
>> me" in Okular.
>
> AFAIK okular does not implement forward search :) If it does then tell
> us how because I want to close the bug. For backward search I think you
> refer to emacs or other tex editors, because lyx does not implement it.
> If it did, lyx would be the perfect creation of the Tex God :) If it
> does, then I have to light a candle for the aforementioned God.

By backwards search I presume you mean what I call an inverse search,
which is supported in LyX although it is not configured by default.
I've managed to get it to work in 9.04 here. The best documentation
appears to be a mailing list post:
  http://www.mail-archive.com/lyx-de...@lists.lyx.org/msg150367.html

AFAICT lyx does not support clicking in the LyX document to jump to
the equivalent in the DVI viewer, which I think is what you call
forwards search.

LyX, Okular and everything else under the sun supports searching
forwards and backwards through text, which is also commonly called
"forward search" and "backwards search", and that what I first thought
you meant.

>> AFAICT Amarok didn't just have a couple of annoying bugs, it was never
>> really ready for widespread use. According to Jeff Mitchel "We've
>> maintained that until 2.1, most users should stick with 1.4.
>> Unfortunately, just as Intrepid shipped with the
>> it's-not-meant-to-be-a-user-release KDE 4.1, Jaunty shipped with
>> Amarok 2.0."
>
> Could you provide a link to this?

Here is your link back
  http://nomad.ca/blog/2009/apr/3/amarok-14-jaunty-ubuntu-904/

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Current situation of amarok, and of latex tools

2009-05-14 Thread John McCabe-Dansted
On Thu, May 14, 2009 at 2:39 AM, Daniel Chen  wrote:
>
> Grave for whom? For you? For what common use cases? These are things
> that are factors to consider when affecting an entire release.

Quite. I haven't noticed any problems with LaTeX. This may be because
I use LyX+xdvi. LyX+Okular seems to be fine too, although Okular is
rather sluggish compared to xdvi, it is usable unlike e.g. Acroread
(on either Linux or Windows). Forward and backward search "works for
me" in Okular.
LyX also has all the features the OP mentioned, though it is more of a
GUI than a text editor.

For me the biggest problem with LaTeX is printing. Creating the dvi,
pdf is fine. When I go to print, half the time nothing happens, with
no indication of why. I have come across a number of causes: It could
be that I am out of paper; that the paper is the wrong size; that
cupsd has crashed; that the printer has a hardware failure and needs
to be restarted; or that AppArmor has decided that printing is
dangerous and shouldn't be allowed. In none of these cases has Ubuntu
ever given me any GUI notification of what was wrong.

I agree with Vincenzo Ciancia. We really need a survey. Without
statistics all we really know is that it works for some people and not
for others. With 8 million users this doesn't really tell us much.

> > I messed up my ph.d.
> > thesis with it today, then in complete frustration reinstalled kile
> > 2.0.1 from intrepid, which works like a charm.
>
> I understand your frustration. I, too, have a day job. Are you
> spending your free time fixing kile (and/or kdvi)?

AFAICT Amarok didn't just have a couple of annoying bugs, it was never
really ready for widespread use. According to Jeff Mitchel "We've
maintained that until 2.1, most users should stick with 1.4.
Unfortunately, just as Intrepid shipped with the
it's-not-meant-to-be-a-user-release KDE 4.1, Jaunty shipped with
Amarok 2.0."

Ubuntu introduces regressions far faster than any mortal could be
expected to fix them. More unpaid bug fixers would help slightly, but we
can't solve the problem without limiting the number and severity of
regressions. Here are some ways this could be achieved:

1) Clear communication with upstream. If we could agree on a way of
clearly marking releases that are not meant for widespread consumption
(e.g. as Early Adopter Releases), then Ubuntu could make a policy of
not making EAR releases the default except in exceptional
circumstances.

Windows has the advantage in this case that it is up to the user which
versions of applications they install, thus a regression in an
application is rarely a regression in windows. There are a number of
avenues to reduce the impact of application regressions on Ubuntu.

2) Make it easier to choose the version of the application you want.
This has become easier, with PPAs and multiple versions of
controversial applications packaged. There is still some way to go in
making these options more easily available to the user. Perhaps there
should be a standard and easy way to "revert this application", and
the user could be informed of this option during upgrade.

3) Make it easier to revert to an old version of Ubuntu. There is some
work on this using aufs, but currently you can't reboot into the new
version, so you can't really tell how well it works; all you know is
whether the upgrade itself is smooth. If you could keep the old
version of Ubuntu around in the same way you can keep old kernels
around, this would really help. Btrfs may help here, and the
reflink/cowlink syscalls they are proposing for ext4 may also help.

--
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: High CPU usage applet

2009-05-07 Thread John McCabe-Dansted
On Fri, May 8, 2009 at 8:51 AM, Timo Sirainen  wrote:

> If some process has been eating 100% CPU for hours, there should be some
> kind of a notification that it's happening. Like a small icon could
> appear to gnome panel and clicking it would show the process details and
> allow to kill it.


The following are relevant to this discussion:
 http://brainstorm.ubuntu.com/idea/11628 (similar idea)
 http://docs.kde.org/kde3/en/kdebase/kicker/naughty-applet.html (for KDE)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Security Defaults

2009-04-18 Thread John McCabe-Dansted
On Wed, Apr 15, 2009 at 10:24 AM, Null Ack  wrote:
> X security. He makes what seems to be a very sound suggestion about
> Plash and hooking into GTK, thus overcoming the problem of needing to
> in advance make determinations about what a desktop user might do and
> the X security problems.

Chromium also uses this technique, using a trusted file open dialog
box, to prevent a subverted renderer process from uploading arbitrary
files:
http://www.tomshardware.com/reviews/google-chrome-security,2271-3.html

IMHO, Chromium has a very nice architecture.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Looking at Package Management for Karmic or Karmic+1

2009-04-05 Thread John McCabe-Dansted
On Mon, Apr 6, 2009 at 5:10 AM, Mackenzie Morgan  wrote:
>> Are there any problems with enabling automatic updates by default?
>> Most users don't care about updates to the point that they never
>> install them. And even if they would open the update manager, they
>> would more likely just install all updates than select the updates
>> they want. Hell, that's the way I work! How many people actually
>> benefit from any interaction with the update manager?

We may not want to automatically install updates when on a mobile
connection that charges "just a few cents per kilobyte".

> The only trouble is that some updates stop services.  Hal may need to be
> restarted,

If we wait till the computer is idle, how likely is this to cause your
average desktop user any problems?

> and if Firefox isn't restarted after an update it breaks royally.

Perhaps this could be considered a bug? I can see a few ways of fixing this:
1) leave the previous version of Firefox installed;
2) improve Firefox session management so that we can safely restart it
automatically (on idle); or
3) change Firefox so it doesn't break so badly.

(Another suggestion was to only install updates on restart. However
this would slow down restart times, and wouldn't help users who do not
restart their computers)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Looking at Package Management for Karmic or Karmic+1

2009-04-05 Thread John McCabe-Dansted
On Sun, Apr 5, 2009 at 1:23 AM, Matt Wheeler  wrote:
> 2009/4/4 Nils Kassube :
>>
>> If you don't trust update-manager you would have to check everything
>> after an update. I don't think anybody will do that even after
>> providing the password. Most users don't even know what to look for to
>> check the system.
>
> That's not the point I'm trying to make. Maybe it's not as big an issue as I
> think, but I meant if update-manager had any possibility of crashing then
> perhaps a malicious user/program could use it to escalate privilieges (I've
> personally found 1 or 2 root escalation bugs in GDM for example, how would
> we guarantee not to have the same problems here)?

Adding something like
   %sudo ALL=NOPASSWD: aptitude update
to the sudoers file gives almost the right permissions. If there is no user
input into aptitude, then this does not add any new security holes of
that kind.
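
For what it's worth, sudoers wants an absolute command path, so the real
entry would look more like this sketch (the group and path are the
Ubuntu defaults):

   # allow members of the sudo group to refresh the package lists
   # without a password (sudoers syntax requires the absolute path)
   %sudo ALL = NOPASSWD: /usr/bin/aptitude update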

However, Update-manager allows the user to unselect updates. So to
allow non-root users to do a selective upgrade, we'd have to pass in
the packages to update, running a risk that these package names are
malicious and cause Update-manager to do something bad. I imagine this
risk could be made quite small.

Still, an overnight auto-update seems like a sensible default for
novice users who don't need or want to know what an update is. This is
what I set my computer too when I am overseas and leave my computer on
for family to use.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Large files under ubuntu do not appear to work

2009-03-21 Thread John McCabe-Dansted
On Sun, Mar 22, 2009 at 12:17 PM, don fisher  wrote:
> Disk /dev/sda: 8999.9 GB, 834099456 bytes
> 255 heads, 63 sectors/track, 1094179 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Rebooting under ubuntu the output is reported as:
>
> Disk /dev/sda1: 203.8 GB, 203835590656 bytes
> 255 heads, 63 sectors/track, 24781 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x2bd0a78e

Is Fedora really talking about sda (the disk) and Ubuntu talking about
sda1 (a partition on the disk)?

In any case, Launchpad would be the place to report a bug:
  https://launchpad.net/

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: aufs based upgrade tests

2009-03-21 Thread John McCabe-Dansted
On Sat, Mar 21, 2009 at 2:28 AM, Alan Pope  wrote:
> That would fail for users who make one-time changes to their data. For
> example users who download their mail via pop. If they upgrade using
> aufs and then proceed to test the new features of their mail client,
> it might be configured to download new mail via pop and delete it from
> the server. In this case if they choose to not upgrade but throw the
> overlay away they lose their new mail and the ability to get it back.

This could also happen if we overlay /var/spool/mail. The type of user
who stores mail in /var is likely to be more computer literate than
one who stores mail in ~. However, I have already suggested not
overlaying /var/cache/apt; maybe we shouldn't overlay /var/spool/mail
either. This could get a bit piecemeal though, as we probably want to
overlay at least part of /var, e.g. /var/lib/apt/lists/.



-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu Desktop Security Defaults

2009-03-17 Thread John McCabe-Dansted
On Mon, Mar 16, 2009 at 3:13 PM, Null Ack  wrote:
> * Having AppArmor actually protecting the desktop build rather than
> what seems as currently a false illusion of coverage with just CUPS
> being protected

The big problem with GUI apps is that Xorg was not really designed to
be secure, so apps can take control of other apps via X's ability to
send/trap other applications' keypresses etc. There is an "untrusted"
mode, but it tends to break most existing applications.

Also IMHO, Plash is better suited to GUI apps than AppArmor. It can be
hard to develop a good AppArmor profile for Desktop apps, e.g. I may
choose to open /etc/passwd with OpenOffice. Since I may choose to open
any file with any virtually any application, AppArmor would be of
little use if we do not make questionable assumptions about what files
the user will want to open. Plash is better suited to desktop apps, as
it replaces the GTK file open dialog with a trusted dialog that passes
in the right to open the files the users selects (and only the files
the user selects).

> * Enabling UFW by default or some other firewall by default

I am not sure if this would help much until we protect desktop
applications from each other (above). Ubuntu already ships with no open
ports. A firewall could theoretically prevent non-authorized software
from accessing the network; however, I understand there are currently a
number of ways for non-authorized software to hijack authorized
software. E.g. you would have to allow a bittorrent client to act as both
a client and a server, and it would be hard for a firewall to tell
whether bittorrent was run with a weird LD_LIBRARY_PATH that caused
it to serve the malware.

> In my view the users want to feel secure in knowing that should a zero
> day exploit be identified, that AppArmor or SELinux or foo or whatever
> will trap the damage the exploited service can take beyond the
> standard user is not root UNIX setup.

Unfortunately, at this point the feeling of security would be likely
to be false, as there are currently ways for malware writers to bypass
the additional security that these could potentially bring to GUI
apps.

The good news is that AFAICT all we need is for Xorg to support a more
compatible "untrusted"-like mode so that we could use Plash to give
GTK apps real uncircumventable security, and non-GTK apps could easily
be adapted to use the GTK file chooser.
  http://plash.beasts.org/
(Optimizing Plash to the same extent as AppArmor wouldn't hurt either)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Flash player 10 non free (or free indeed)

2009-03-13 Thread John McCabe-Dansted
It may help to mention that:
  gestor = Manager
  posterior = Later

You can get information on the relevant installed software by entering

lsb_release -r ; dpkg -l "*flash*" "*swfdec*" "*gnash*" "*firefox*" ;
ls /usr/lib/mozilla/plugins/ ~/.mozilla/plugins

into a terminal.

If this is a bug it can be reported to
  https://bugs.launchpad.net

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


[Prototype] Re: Is disabling ctrl-alt-backspace really such a good idea?

2009-02-11 Thread John McCabe-Dansted
On 2/11/09, Scott Kitterman  wrote:
> It's 9 days until feature freeze, so if you want it different I suggest
> sending patches.

Below is a prototype that I mocked up in half an hour; it gives a
10-second countdown. It would seem that something usable in 8 days is a
possibility if someone familiar with Ubuntu and X wants to:

1) Approve the idea, perhaps with additional requirements. Does anyone
have objections to the user-interface of this script?

2) Set Ctrl-Alt-Backspace to be a keyboard shortcut for this script.
Would setting this in Gnome/KDE/Xfce suffice? Or would these be likely
to be not working when we need them? My X crashes usually occur after
Gnome has loaded and I have started doing heavy 3D work.

3) Give the script some way to kill X. Perhaps add the following line
to the default sudoers:
   %plugdev ALL=NOPASSWD: killall Xorg

Although this was intended as a prototype, using zenity seems OK.
However wmctrl does not seem to be in
  http://cdimage.ubuntu.com/releases/jaunty/alpha-4/jaunty-desktop-i386.manifest
Is there an alternative way of raising a window to the foreground?

--

#!/bin/bash
# Script to kill X, but give user 10 seconds to cancel


# NOTE: Add a keyboard shortcut so this is run when Ctrl-Alt-Backspace
#   is pressed.
# NOTE: Allow the logged-in user to run "sudo killall Xorg" or
#   "sudo /etc/init.d/gdm restart" without a password.

TITLE="Really Restart X Server?"
TEXT="You pressed Ctrl-Alt-Backspace.
This is an emergency sequence to restart a frozen computer.
If you do not want to lose your unsaved data click Cancel NOW!"

which zenity || echo zenity missing, run sudo apt-get install zenity
which wmctrl || echo wmctrl missing, run sudo apt-get install wmctrl

i=0
while [ "$i" -le 100 ]
do
  echo $i              # progress percentage read by zenity
  sleep 0.1            # 101 steps of 0.1s ~= the 10 second countdown
  wmctrl -R "$TITLE"   # raise the countdown window to the foreground
  i=$(($i+1))
done | (
  zenity --progress --title="$TITLE" --text="$TEXT" --auto-close &&
    echo TESTONLY sudo killall Xorg
  # echo TESTONLY /etc/init.d/gdm restart
)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: You lost a new Ubuntu user

2008-12-28 Thread John McCabe-Dansted
On Sun, Dec 28, 2008 at 2:40 PM, Mackenzie Morgan  wrote:
> On Sat, 2008-12-27 at 13:30 +, richard wrote:
>> On Fri, 26 Dec 2008 08:30:52 +
>> Ian Lynch  wrote:
>>
>> Big snip and a merry Christmas to you all.
>> I've been watching this thread and the one thing that has been missed
>> and it doesn't matter what the Intelligence of the user is like.
>>
>> But if some one gives you a CD saying this is a complete distro, surely
>> no matter how thicK you are you must realise that a complete distro can
>> not fit on a 700Mb CD,
>
> Maybe it's because my first distro was Damn Small Linux, but yes, I *DO*
> expect the whole distro to fit on one CD.  At least the main system.
> Sure, Debian's got 20 CDs, but only the first one is actually needed.  I
> find it insane that Fedora requires 6 CDs!

More specifically, Ubuntu does fit on a single CD, except for language
packs, which presumably Peter (as an English speaker) doesn't need.
AFAICT, the main issue is updates, which have everything to do with
release time and almost nothing to do with space.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Rebuilding a package with debug symbols and no optimization in a _parallel_ fashion?

2008-12-15 Thread John McCabe-Dansted
On Tue, Dec 16, 2008 at 5:37 AM, Martin Olsson  wrote:
> Normally when I want to rebuild a package with no-optimizations and full 
> debug symbols I do:
>
>mkdir some_pkg ; cd some_pkg ; apt-get source SOME_PACKAGE ; cd 
> SOME_PACKAGE_DIR
>DEB_BUILD_OPTIONS="noopt nostrip" fakeroot debian/rules binary
>sudo dpkg -i ../*.deb
>
> This is very useful for debugging etc, but I've noticed one problem. Usually 
> these builds
> run on a single core. So my question is, is there a flag I can pass along 
> with this command
> to enable several jobs for make? Ideally, I would like to parallelize all 
> parts of the package
> generation but if I get only the compilation that alone would be very useful.

I have only used make-kpkg and pbuilder, but I imagine that something like:
  CONCURRENCY_LEVEL=5 DEB_BUILD_OPTIONS="noopt nostrip" fakeroot debian/rules binary
is what you want (if you have four cores).
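
Note that CONCURRENCY_LEVEL is really a make-kpkg variable. For ordinary
packages, Debian Policy also defines a parallel option inside
DEB_BUILD_OPTIONS; whether a given package's debian/rules honours it
varies from package to package, but the sketch would be:

  DEB_BUILD_OPTIONS="noopt nostrip parallel=5" fakeroot debian/rules binary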

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Very bad status of hardware (especially wifi) support in ubuntu, due to the too many accumulated regressions

2008-11-10 Thread John McCabe-Dansted
On Mon, Nov 10, 2008 at 7:57 AM, Bryce Harrington <[EMAIL PROTECTED]> wrote:
> That said, I do like generalizing.  :-)  I think there is a cyclical
> thing in FOSS, where you have some legacy thing that works 80%, and
> upstream decides to get that last 20% it requires a major rewrite.  They
> expect it to get to 90-95%, so distros adopt it, but when the dust
> settles it works at just 85%...
>
> ...and unfortunately the 15% it doesn't cover is different than the 20%
> the legacy system didn't cover, and that 15% is rightfully pissed that
> they are seeing a regression when things worked so well before.

OTOH this means that the drivers together cover more than 85%.
Would it perhaps be worth making both drivers easily available on the
same kernel?

I guess ideally we would scan the CVS, automatically compiling each
module, and identify the exact revision that caused the regression.
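
With git this kind of scan is automated by git bisect; a minimal sketch,
assuming the driver history is in git and a hypothetical
./build-and-test.sh script that builds the module and exits non-zero
when the regression shows up:

  git bisect start
  git bisect bad HEAD        # the current revision regresses
  git bisect good v2.6.24    # the last revision known to work
  git bisect run ./build-and-test.sh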

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Compiling Ubuntu 7.04's kernel from source

2008-08-05 Thread John McCabe-Dansted
On Tue, Aug 5, 2008 at 4:31 PM, Peter Teoh <[EMAIL PROTECTED]> wrote:
> I started compiling the kernel as provided by Ubuntu source ISO and
> encountered the following error while compiling:
>
> /sda2/linux-source-2.6.20-2.6.20>ALSA lib
> confmisc.c:670:(snd_func_card_driver) cannot find card '0'
> ALSA lib conf.c:3500:(_snd_config_evaluate) function

What command did you just type?
(Not "ALSA lib" surely?)

I think we should move this offlist, since this is more of a bug or
support request than "discussion relating to development of Ubuntu".


-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Disappointed with Ubuntu Server, could be used by such a wider audience

2008-08-02 Thread John McCabe-Dansted
On Sat, Aug 2, 2008 at 6:23 AM, Mackenzie Morgan <[EMAIL PROTECTED]> wrote:
> Because as he said, if you pre-configure everything to
> super-duper-easy-peasy, you've also pre-configured it to
> super-duper-easy-peasy-to-crack.  I'm personally disappointed by
> firewalls that allow outbound by default, because something could phone
> home if I put my trust in an application I shouldn't, but they're
> easy-peasy for users, so that's what people do.  I can manually go
> through and fix it myself, but if some application is running about
> opening who knows how many ports and setting god-knows-what services to
> auto-start and mucking about with insecure options in config files...how
> many months is it going to take me to track all of that down?  No way.

Commercial Windows firewalls pretty much all block outbound traffic by
default, popping up a dialog box offering to allow that particular
application to access the internet. I understand that it is fairly
easy for an attacker to phone home anyway though. For example, just run
firefox http://ATTACKER/this-machine-is-cracked.

However, if it is good practice to prevent e.g. httpd from making
outgoing connections, this should be done by default. It is fairly
easy to do this with e.g. systrace.
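
As a minimal sketch of the idea using iptables instead (assuming httpd
runs as the www-data user, as Apache does on Ubuntu):

  # reject new outbound connections initiated by www-data, while still
  # allowing replies on established inbound connections
  sudo iptables -A OUTPUT -m owner --uid-owner www-data \
       -m state --state NEW -j REJECT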

The argument that it is hard to set up these systems to be secure
seems to be an argument that they should be secured once, by Ubuntu,
with a great deal of scrutiny on whether the configuration really is
secure.  Even if we assume that everyone will hire a UNIX guru we
can't assume that all the "gurus" really are gurus or that they won't
forget one tiny exploit.

Ubuntu desktop already has one server function. I can right click a
file, go to share and share the folder using samba. If you know of any
security flaws with this GUI, please report a bug.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Disappointed with Ubuntu Server, could be used by such a wider audience

2008-08-02 Thread John McCabe-Dansted
On Fri, Aug 1, 2008 at 11:25 PM, Stephan Hermann <[EMAIL PROTECTED]> wrote:
>Serious, for a normal familiy I would advise to by ready made
>appliances..they are tested, and are usable (well not everytime, but

If a security flaw is found in such an appliance it would be much
harder to patch than one found in software.
It does have the advantage that getting root on the appliance doesn't
necessarily give you root on the PC. However, we could do something
similar with VMs, chroot jails or Plash.

> And
> the work to stay up2date is much more then you imagine...even on Ubuntu
> and even with apt.
> You know, people with windows, they always get this little icon with
> updates available...how many of them are doing the updates everytime
> this pops up? (same question also comes for ubuntu or any linux distro
> in general).

If a large part of the security model is having a trained monkey wait
for updates to appear and click yes, then the security model and UI are
broken and should be fixed. I don't analyze updates to see if they are
"good" or not (how can I? they are binary). I can see only two
advantages to manual updates: if an update seriously breaks things we
get more warning and we can decide to not update packages that we
intend to remove. These seem easier to work around than being hacked.

> I do like the idea of an entainment home server or a media center
> edition of ubuntu, but it shouldn't be used for webserver or smtp
> server at home (*shiver*)

Having e.g. a simple webserver can be a handy way of copying files
from machine to machine. Ironically, it is much easier to get Windows
to talk to an HTTP server than to Samba.
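
For instance, a minimal sketch using Python's built-in module (already
present on a default Ubuntu install), serving the current directory on
port 8000:

  cd ~/shared && python -m SimpleHTTPServer 8000
  # then browse to http://this-machine:8000/ from the Windows box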

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: The non-evil graphics card

2008-06-25 Thread John McCabe-Dansted
On Wed, Jun 25, 2008 at 3:36 PM, Bryce Harrington <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 25, 2008 at 05:07:05PM +1000, Christopher Halse Rogers wrote:
>> On 6/25/08, Markus Hitter <[EMAIL PROTECTED]> wrote:
>> >  probably some of you already read that statement of kernel developers
>> >  about the opening of graphics drivers: > >  www.linuxfoundation.org/en/Kernel_Driver_Statement>
>> >
>> >  Currently I'm using Intel's integrated graphics (G965, G31), but I'm
>> >  about to upgrade to a "real" graphics card.
>> >
>> >  Which vendor should I prefer (or stay with the G31) in order to
>> >  support proper open source graphics drivers? Is there a
>> >  contraindication if I want to use CUDA-like technologies (I'm doing
>> >  FEA, CFD) ?
>> >
>> For high-performance graphics cards you're pretty much limited to ATI
>> or nVidia.  This makes the choice nice and easy: ATI/AMD have released
>> specs, and employ at least one Xorg developer.  nVidia have done
>> neither, and (unsurprisingly) haven't responded to nouveau's
>> request(s) for documentation.
>
> As a slight correction, actually Aaron Plattner, the current maintainer
> of the open source -nv driver, has been employed by nVidia for a while
> now.  (I couldn't say whether he has other duties at nVidia besides
> maintain -nv or if it is his full time job.)
>
> But I would concur that -ati seems to be a good bit further along than
> -nv at present.

One caveat is that the OpenGL* performance of ATI cards has been quite
poor, although apparently the introduction of a programmable memory
controller since the X1800 XT has resulted in OpenGL performance
competitive with nVidia's; see e.g.:
http://www.neoseeker.com/Articles/Hardware/Reports/atiopenglupdate/

(*and even if you use "DirectX" in wine, you are still going to use
the OpenGL driver and will still get poor performance).

> In fact, while -ati still has a ways to go before it's a suitably
> complete replacement for -fglrx, it's been making such good progress
> that I think we can reasonably foresee a day when we start talking about
> moving -fglrx out of main over to multiverse or something.

Back in my day the nVidia drivers just worked, whereas the ATI drivers
(also closed source) just didn't. When I realized that an Intel GMA
simply wouldn't cut it even for casual use, I plugged in an old nVidia
6600GT.  Given that the drivers and support seems to have improved
dramatically, ATI seems like it may be a good choice when buying a new
card. :)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: [ubuntu-marketing] Making Canonical's/Ubuntu's contributions more visible

2008-06-04 Thread John McCabe-Dansted
On Thu, Jun 5, 2008 at 7:07 AM, Bryce Harrington <[EMAIL PROTECTED]> wrote:
> 3.  $ sudo apt-get build
>
>Run from within the source tree, this wrappers all the work of
>generating a patch from the current source tree's changes and adding
>it to the package's patch management system (or adding a patch
>management system if one doesn't exist), running debuild, set up a
>pbuilder environment if needed, run pbuilder to produce the
>(unsigned) debs, and place them in the parent directory.

I think debuild already makes a diff.gz. (It would also be nice if,
when doing the share, it would have some way of filtering out the
weird temp files that can appear in a source tree.)

>Would be nice to not have to run it as root, but not sure that
>there's an easy way of running pbuilder as non-root.

There is pbuilder-uml, but that doesn't count as "easy" ;)

Using some form of filesystem virtualisation like Plash may also work,
and it would be nice to be able to test the package in a sandbox. A
rather lightweight sandbox would be to let the application run with
copy-on-write access to /. This may not be suitable for all packages,
but there could be a list of ways that a package could be sandboxed;
a sketch of the copy-on-write variant follows.
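
A minimal sketch using aufs (assuming the aufs module is available, as
on the LiveCD; /proc, /dev etc. would need bind mounts for real use):

  mkdir -p /tmp/cow /mnt/sandbox
  # writes land in /tmp/cow while the real / stays read-only
  sudo mount -t aufs -o br=/tmp/cow=rw:/=ro none /mnt/sandbox
  sudo chroot /mnt/sandbox /bin/sh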

> 4.  $ apt-get share [bug id | package-name]
>
>Like you mention, presents user with a list of their outstanding
>patches applicable for the given bug or package (or all in the
>system), prompts for annotation, allows gpg-signing, and uploads to
>the appropriate place.  Maybe a PPA, or maybe sending directly to a
>Launchpad bug ID, with request to add to ubuntu and/or debian.
>
> Of course, the above paints over a huge amount of implementational
> complexity.  Perhaps this could only be achieved for certain well-formed
> packages.

Perhaps when one comes across a non-well-formed package one could fix
it and do an apt-get share :)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: [ubuntu-marketing] Making Canonical's/Ubuntu's contributions more visible

2008-06-04 Thread John McCabe-Dansted
On Tue, May 27, 2008 at 6:20 PM, Bryce Harrington <[EMAIL PROTECTED]> wrote:
> On Tue, May 27, 2008 at 06:11:10PM +0800, John McCabe-Dansted wrote:
>> On Tue, May 27, 2008 at 2:18 PM, Markus Hitter <[EMAIL PROTECTED]> wrote:
>> > - Even if it's distribution specific it's still a commitment to the
>> > whole as long as it's open source. Other developers can look there to
>> > get an idea how some tasks were done. Always better than starting
>> > from scratch entirely.
>>
>> To my mind the biggest contribution downstream projects make is saving
>> developers time. My experience suggests that it if you are a developer
>> and you want to spend less time fighting your distro and more time
>> doing actual productive coding, then Ubuntu is one of the better
>> choices.
>
> Interesting...  Could you explain this in more detail?

In a pure waterfall model downstream projects don't give *anything*
back to upstream projects... except a finished project.

There isn't a clear dividing line between users and developers.  The
time I spend trying to get printing to work is time I don't spend
coding. Just because I know how to do a "simple"
 configure ; wget ; wget ; configure ; make ; vim ; make ; make install
doesn't make it a productive use of a weekend. For this reason I like
using a "Just works" distro.

As others have mentioned previously the Ubuntu is also fairly friendly
to new developers.

>> It could perhaps make things even easier for developers, but thats
>> another kettle of fish.
>
> I'd be interested in hearing your further thoughts on this.  (I've had
> my own thoughts on this, but would love to see other's ideas.)

Well the development aspects of Ubuntu aren't as polished as the
end-user facing applications. Unlike firefox and OO, pbuilder and
make-kpkg don't really "just work". Debhelper seems less
"user-developer friendly" than emerge. A developer has to learn a
programming language or two more or less by definition, but in
practice has to also learn autoconf, automake, make, am_edit,
pbuilder, make-kpkg, svn/cvs just to be able scratch their own itch.

In principle, developing could be as simple as doing "dev edit
<package>", finding whatever you wanted to change, perhaps
changing a constant like MAX_COL from 80 to 160 in your favourite
editor, doing a "dev test-sandbox", and perhaps a "dev install".  Then
when the next apt-get update is run it could be smart enough to use
apt-get source and merge the changes into the new version, unless
conflicts arise.
  Often I find that after finally fixing a problem, I've run out of
time and have to move onto something else. Perhaps then there could be
run a simple "dev share" command which would the developer to, at
their leisure, annotate each of their patches and upload them
somewhere others could re-use and comment on them.  Presumably apport
should also make note of what patches are in use, and bug reports with
patches could have a "test this patch in a sandbox" option and ...

I am not necessarily suggesting that it is wise use of resources at
this time to focus on making development tasks more "user friendly",
just that it is conceptually possible and potentially useful.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu boot speed fall in Hardy

2008-05-12 Thread John McCabe-Dansted
On Tue, May 13, 2008 at 5:12 AM, Krzysztof Lichota <[EMAIL PROTECTED]>
wrote:

> >  Where/how are these lists of blocks stored?
>
> They are stored in /prefetch directory as prefetch lists for each
> traced app and for boot stages.
> Each file contains list of tuples (device, inode, start-in-pages,
> length-in-pages) which describe what to prefetch.
>
...

> >  Could we use the lists to sort the LiveCD filesystem generation?
>
> It depends what you want to do with it. If you want to feed the list
> to mksquashfs, it can be done. If you want to add prefetching list to
> live CD, this would be harder, as inode numbers are generated during
> generation of SquashFS image.


Presumably we could generate the /prefetch after generating the squashfs and
just put it directly on the iso uncompressed?

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Some fundamental usability issues

2008-05-08 Thread John McCabe-Dansted
On Thu, May 8, 2008 at 9:40 PM, Vincenzo Ciancia <[EMAIL PROTECTED]>
wrote:

> Il giorno gio, 08/05/2008 alle 20.28 +0800, John McCabe-Dansted ha
> scritto:
> > If we define a users work as a user's typing, we could easily save
> > this permanently.
>
> Not quite :) What if I "type" in a video editor and save a changed
> 600mb .avi file? We should record input instead of changed data, but
> that's way out of scope for a versioning filesystem.


I was thinking of explicitly mentioning keylogging. Keylogging is trivial,
*much* easier than a versioning filesystem. Replay is the problem.

For "easily diffable" files we can approximate the keylogging ideal in a
versioning filesystem by guessing whether this file is essentially "typing".
Avi files are not easily diffable in this sense, although e.g. many vector
graphics formats are. If versioning filesystems become popular, then it may
become common to save information such as "foo.avi =
bilinear_rescale(bar.avi,0.5)" along side foo.avi to aid in recovery (and
monitoring, and scripting and ...). We could even  add hooks to manage such
information, but lets leave that for version 9.0 ;)

In any case, the point I was trying to make is that it is reasonable to limit
the bandwidth entering the archive rather than the ultimate size of the
archive. 50c a month for a blank DVD to back up onto is much less than any of
the other computer-related expenses I have. Additionally, write-once media is
much safer than an on-disk backup: it protects me from 'rm -rf', it protects
me from harddisk failure, and if I am sufficiently paranoid I can easily move
the old DVD disks offsite.
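
A sketch of how the monthly burn could work with dvd+rw-tools (the
directory names are hypothetical): burn the first month's archive as an
initial session, then append later months to the same disc:

  growisofs -Z /dev/dvd -R -J ~/archive/2008-05   # first session
  growisofs -M /dev/dvd -R -J ~/archive/2008-06   # append a later session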

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Some fundamental usability issues

2008-05-08 Thread John McCabe-Dansted
On Thu, May 8, 2008 at 5:38 PM, Vincenzo Ciancia <[EMAIL PROTECTED]>
wrote:

> Il giorno gio, 08/05/2008 alle 02.24 +0100, chombee ha scritto:
> >
> > Using git is ridiculously difficult and technical by the standards of
> > most normal users, but I see no reason why a versioning system could
> > not
> > be built in to the OS or the desktop environment and function
> > completely
> > without user interaction until the user wants to recover a previous
> > version of something. And that can be made very simple and easy to do.
> > Imagine it being virtually impossible to lose any of your work, ever.
> > Isn't that a killer feature? Why hasn't this happened?
>
> It is technically feasible using fuse, and there have been attempts in
> the past (such as the "wayback" filesystem [1]). OSX does automatic
> backup and versioning, but I don't know how all these systems handle the
> main problem, which is: the file size will grow without bounds. We need


If we define a user's work as the user's typing, we could easily save this
permanently. A user typing at 60wpm 24/7 generates less than 200MB a year
(60wpm is roughly 300 characters a minute, i.e. about 158MB over a year).
If a small, easily diffable, version-control-friendly file appears in
something like My Documents and is gradually edited over a few days, I
think it is reasonable to store it permanently. If we notice that a file
has the same md5 sum or name as an already archived file, we could try
just doing a diff.
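
An illustrative sketch of the diff idea (paths hypothetical): keep a
full copy the first time, and store only deltas afterwards:

  f=Documents/essay.txt; ARCHIVE=~/.archive
  if [ -f "$ARCHIVE/$f" ]; then
    # an earlier copy exists: archive just the changes
    diff -u "$ARCHIVE/$f" "$f" > "$ARCHIVE/$f.$(date +%s).diff"
  else
    mkdir -p "$ARCHIVE/$(dirname "$f")"
    cp "$f" "$ARCHIVE/$f"
  fi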

We could have an alert (like the update-manager one) suggesting to the user
that they insert a blank CD/DVD once a month, and then get up to 4.4GiB a
month to play with, which is probably more than enough to permanently store
your average user's documents and photos etc. I imagine privacy would be a
more serious issue than space. Backing up especially important data online
is also an option, although the privacy issues would be too severe to do
this by default IMHO.

> a way to delete old revisions, a way to know when the file is large and
> versioning would kill the machine, in the latter case we need a way to
> warn the user and also


Possibly the alert could mention files that haven't been backed up recently,
and why.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: firefox and bad ssl certificates

2008-05-08 Thread John McCabe-Dansted
On Thu, May 8, 2008 at 4:17 PM, Martin Pitt <[EMAIL PROTECTED]> wrote:
>
> Right, but also self-signed certificates (since they prove nothing).


They prove that you are talking to the same server you were talking to when
you first logged on. They are also sufficient to prevent passive wiretapping
attacks.


> I don't consider it a new feature, but a better UI. Firefox has always
> complained about invalid certificates, but until version 2 it was just
> the well-known 'SSL yadayada cannot be verified mumblemumble click
> here to shut me up' popup dialog, and really everyone just clicked
> this away, right? Security click-through dialogs should be abolished,
> since they achieve nothing and are really just an excuse for the
> software provider: "I know it is unsafe, and cannot give you something
> better. Of course you can't know either, but at least I can make it
> your problem now."


However http is more unsafe than an https connection on a self-signed cert,
and we don't even have the token warning on http webpages.

AFAICT this "improvement" only helps users who realize that the "s" in https
is meant to mean secure but somehow don't realise that a big clickthrough
popup warning that the cert is invalid means that the site is in some sense
less secure.

I guess it could vaguely help users who don't know what the "s" means but
have a https: address stored on their machine from some legitimate source,
but have never visited the site so they don't have the correct cert yet.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Suggestion to make remote recovery easier

2008-05-07 Thread John McCabe-Dansted
On Wed, May 7, 2008 at 6:56 AM, Andrew Sayers <
[EMAIL PROTECTED]> wrote:

> 1) Creating or modifying an account that has the necessary permissions
> 2) Creating an SSH connection
> 3) Destroying or reverting an account to its original state.
>
...

> Reliably doing (2) is a hard problem.  The solution I had come up with
> strikes me as a good solution for a common use case, but there's no way
> to deal with the general case without introducing more complexity.


Perhaps we could use reverse tunnels. The end user could reverse tunnel into
some server trusted by the admin. See e.g.
  http://gentoo-wiki.com/TIP_SSH_Reverse_Tunnel
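
A minimal sketch of that approach (host and port are illustrative): the
user runs, on the machine needing help,

  ssh -R 2222:localhost:22 user@trusted-server

and the admin, logged in to trusted-server, connects back through the
tunnel with:

  ssh -p 2222 localuser@localhost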

  Alternatively, perhaps we could do something like:
mkfifo sshout
mkfifo sshin
ssh [EMAIL PROTECTED] < sshin > sshout &
bash < sshout > sshin &

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Packaging kernel modules.

2008-03-12 Thread John McCabe-Dansted
On Wed, Mar 5, 2008 at 11:33 PM, Matthew Garrett <[EMAIL PROTECTED]> wrote:
>  If you can create a diff to linux-ubuntu-modules and put it in
>  launchpad, and then contact [EMAIL PROTECTED], that ought to
>  do. I wouldn't necessarily expect this to happen until after hardy's
>  released, though.

OK, I've put freeze exception requests in, but will not be surprised
if they are rejected.
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/193552
https://bugs.launchpad.net/ubuntu/+source/linux-ubuntu-modules-2.6.24/+bug/200765

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


New LiveCD patch for low-memory systems (compcache-0.2)

2008-03-09 Thread John McCabe-Dansted
I have provided updated patches against [kx]ubuntu-7.10 (and updated
patch source code) at:
  http://www.ucc.asn.au/~mccabedj/ccache/

These patches now do three things:

1) They use compcache-0.2. This fixes a crasher and provides stats at
/proc/compcache and /proc/tlsf.
2) pagecache-management scripts are included. "niceubiquity" can be
run in place of ubiquity to have a more responsive system while
installing.
3) atl2-2.0.4 is included, to fix bug 190340

See:
http://code.google.com/p/compcache/
http://people.redhat.com/csnook/atl2/
https://bugs.launchpad.net/bugs/190340
- atl2.ko too old for Attansic L2 100 Mbit Ethernet

Notes:
1) You can use
  cat /proc/swaps
  top
etc. to test that compcache is actually running.

2) The mouse seems jerky when compcache is active and we are
installing files to hdd with ubiquity. niceubiquity (automatically
added to cd) seems to fix that problem. niceubiquity could do with
more testing. FYI niceubiquity is:
  nice ionice -n7 pagecache-management-lazy200.sh ubiquity
  2b) the pagecache script limits the write buffer to 2MB. AFAICT this
adds 20sec to a 20min install time, but should make the system more
responsive while installing.

3) Ubiquity on this LiveCD will install a pristine Ubuntu, without
compcache etc. installed. If this is not desired the files can be
manually copied over:
  /lib/modules/*.ko
  /lib/modules/2.6.22-14-generic/ubuntu/net/atl2/atl2.ko
  /bin/pagecache-management*
  /bin/niceubiquity
  /etc/init.d/compcache

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Zsync for ubuntu isos.

2008-03-09 Thread John McCabe-Dansted
On Sun, Mar 9, 2008 at 7:51 PM, (``-_-´´) -- Fernando
<[EMAIL PROTECTED]> wrote:
>  Yeah, been seeing that for a few days.
>  MAIN (cdimage.ubuntu.com) is really heavy loaded, and rsync times out with 
> that lovely and totally CLEAR message.
>  I guess I should report that as a bug, so that future versions print correct 
> messages to the user.

This wouldn't be an issue with zsync :) So: zsync uses less server CPU
(in this case making zsync more reliable than rsync), and as also
mentioned before, zsync seems to use less bandwidth.

Does rsync have any advantage over zsync for Ubuntu isos?

FYI, I have put zsyncs for 7.10 up at:
http://mccabedj.ucc.asn.au/ccache/
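
For reference, typical zsync usage would look like this sketch (the file
names are hypothetical): reuse a local iso as the seed and download only
the blocks that differ:

  zsync -i ubuntu-7.10-desktop-i386.iso \
    http://mccabedj.ucc.asn.au/ccache/some-image.iso.zsync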

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Zsync for ubuntu isos.

2008-03-08 Thread John McCabe-Dansted
On Tue, Mar 4, 2008 at 7:24 AM, (``-_-´´) -- Fernando
<[EMAIL PROTECTED]> wrote:
> Just use rsync:
>  rsync -zhP 
> rsync://cdimage.ubuntu.com/cdimage/daily-live/current/hardy-desktop-i386.iso .

That works. However, when I try to get the alpha-6 release, I get:

$ rsync rsync://cdimage.ubuntu.com/cdimage/releases/hardy/alpha-6/hardy-desktop-i386.iso .
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(454) [receiver=2.6.9]

Either a bug here, or I am confused. If I am confused, then zsync is
easier to use :).

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: LiveCD patch for low-memory systems (192K)

2008-03-08 Thread John McCabe-Dansted
On Sat, Mar 8, 2008 at 4:20 PM, Alexandre Strube <[EMAIL PROTECTED]> wrote:
>  if we have ram in the first place, why should we swapping and
>  compressing the swap back into ram again? I mean, if you can store the
>  swap in ram, is that you have space, so you didn't even need to swap,
>  no?

Because we can fit twice as many compressed pages in ram as uncompressed.

>  >  PROS:
>  >  P1: Much faster
>  >(zero latency and decompression can be 25% as fast as memcpy)
>  >  P2: No need for swap partition.
>
>  Up to when? There will be a moment when the space reserved for it will
>  finish, no?

Yes. We can increase the effective size of memory by about 50% before
needing physical swap.
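
(As a worked example: on a 128MB machine, letting compcache hold 64MB of
compressed pages at a typical 2:1 ratio stores 128MB worth of
swapped-out pages, giving 64MB ordinary + 128MB swapped = 192MB, i.e.
about 150% of physical memory.)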

>  >  CONS:
>  >  C1: Requires Memory.
>
>  Wasn't it EXACTLY for machines without enough ram for running ubuntu 
> normally?

Yes. Again, requiring 2K of memory per page is better than requiring 4K.

>  >  C3: Places more load on CPU
>
>  What is the regular use-case for low-memory machines? Well, its OLD
>  machines. And I mean, OLD. Like pentium 133 or so.

For LiveCD I still cannot support machines with less than 128MB of
ram, even on xubuntu. The alternate install cd can be used on machines
with 64MB of memory (but perhaps another distro would be more
appropriate). On 128MB machines 600MHz cpus are common.

Anyway, the pentium 133 wasn't low CPU; indeed, the pentium 66 was the
latest and greatest in 1993, when the seminal work "The compression
cache: Using on-line compression to extend physical memory" was
published.

Compressed Caching can improve performance on pentium 133's, even
where physical swap is available. See:
  http://linuxcompressed.sourceforge.net/linux24-cc/statistics/0.21_memtest/
(This uses WKdm which is not yet supported by compcache-0.2)

It's amazing what you can do with an Intel pentium [133] processor: ;)
  http://www.youtube.com/watch?v=566ME4nOgRw

>  >  C3 is less important than P4 on new desktops as DualCore single HDD
>  >  machines are now the norm. However in my testing of LiveCD in VMs I
>
>  HA! In YOUR country. Please don't generalize.
>
>  Serious, if you have little ram, it's well possible that you have a
>  low cpu as well. So I don't get the point of this.

Generally, in countries where new desktops are the norm, DualCores
are the norm.

In any case, planning for the future means planning for multiple core CPUs.

On Sat, Mar 8, 2023 at 4:20 PM, Alexandre Strube <[EMAIL PROTECTED]> writes:
> And I mean, OLD. Like pentium OctCore or so.

Quite ;)

>  By the way, what does it mean the (192K) at the subject?

The patch is a 192K download.

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: LiveCD patch for low-memory systems (192K)

2008-03-07 Thread John McCabe-Dansted
On Mon, Mar 3, 2008 at 10:57 PM, Colin Watson <[EMAIL PROTECTED]> wrote:
> [This should be on a development mailing list, surely?]

OK, moved to ubuntu-devel-discuss.

>  On Sat, Mar 01, 2008 at 04:15:22PM +0900, John McCabe-Dansted wrote:
>  > Hello, I have created a low-memory patch for the Ubuntu 7.10 LiveCD.
>  > This patch is based on compcache, see:
>  >   http://code.google.com/p/compcache/
>  > I have tested this on a 180MB VM and a 120MB VM with only-ubiquity. It
>  > worked in both cases.
>
>  Has this been brought to the attention of the Ubuntu kernel team? In
>  order to incorporate this, we'd need the necessary kernel modules to be
>  included in our kernel packages (linux-ubuntu-modules, I imagine), which
>  would definitely involve talking with them.

No, atm I am still working on testing and packaging compcache.

>  I'm assuming there are some downsides, on the "there ain't no such thing
>  as a free lunch" principle.

The downsides of compcache are similar to the downsides of physical
swap. In both cases swapping won't be as fast as having more physical
memory. Compared to physical swap:

PROS:
P1: Much faster
   (zero latency and decompression can be 25% as fast as memcpy)
P2: No need for swap partition.
P3: Fewer issues if the hard disk is unreliable or corrupted
   (might be why we are booting from LiveCD).
P4: Places less load on harddisk

CONS:
C1: Requires Memory.
C2: Requires *Kernel* Memory (limited to ~1GB on i386)
C3: Places more load on CPU
C4: Less well tested than physical swap.
C5: Using compcache without physical swap risks triggering the OOM
killer if a user loads, say, /dev/urandom into memory.

C3 is less important than P4 on new desktops, as dual-core single-HDD
machines are now the norm. However, in my testing of the LiveCD in VMs
I have found that the mouse becomes jerky during install to HDD when
compcache is used. This may be because, on single-core machines, CPU
matters more than HDD for keeping the desktop responsive.

C1 is irrelevant if no other form of swap is available, as 50% of
original size is still better than 100% of original size.
As to C2, I am capping the compcache swap size to 175MB, so that (at
~2:1 compression) less than 100MB of kernel memory will be used.
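
For concreteness, the sizing could be done at module load time along
these lines. This is a sketch only: the compcache_size_kbytes
parameter and the /dev/ramzswap0 swap device are my assumptions from
the compcache docs - check the README for the exact interface of your
version.

MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
SIZE_KB=$((MEM_KB / 2))      # default: half of physical memory
CAP_KB=$((175 * 1024))       # the 175MB cap discussed above
[ "$SIZE_KB" -gt "$CAP_KB" ] && SIZE_KB=$CAP_KB
modprobe compcache compcache_size_kbytes=$SIZE_KB   # name assumed, see README
swapon /dev/ramzswap0                               # device name assumed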

C5 isn't much of an issue for the LiveCD. For any normal use, the
compression ratio will be almost exactly 50%. In practice the CD will
slow down and become unusable long before the OOM killer would run,
and avoiding the OOM killer on pathological loads is virtually
impossible anyway. We just test the LiveCD and see how little physical
memory we can have while still working reliably and reasonably fast -
exactly what we would do without compcache. The amount of physical
memory required will almost surely be less than without compcache.

> Can this be limited to systems with (say)
>  <384MB of memory?

We could do this easily, but in principle this would not gain us
anything. Compcache only allocates memory when the swap is used, so if
the kernel doesn't want swap then having Compcache enabled will make
no difference. If the kernel does want swap, it is probably best to
give the kernel swap of some form. Limiting to systems with <384MB
would help people with a large amount of memory avoid possible bugs,
but testing compcache on as many machines as possible and fixing any
bugs we find seems like a better solution.

Disabling or shrinking compcache when we have access to physical swap,
OTOH, seems like a good idea. Letting compcache allocate 50% of memory
when HDD swap is available can degrade performance. Ideally we'd keep
a dynamically sized compcache active at all times, as benchmarks have
shown that this either gives better performance than harddisk swap
alone, or has little effect. See
  https://wiki.ubuntu.com/CompressedMemory
for more information. However, compcache-0.2 doesn't support dynamic
resizing effectively.
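
One crude stopgap that works today, without any resizing support, is
swap priorities: keep both devices active and have the kernel fill the
compressed swap first. The device paths below are placeholders.

swapon -p 100 /dev/ramzswap0   # compressed RAM swap: filled first
swapon -p 10  /dev/sda5        # on-disk swap: overflow only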

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Zsync for ubuntu isos.

2008-03-07 Thread John McCabe-Dansted
On Tue, Mar 4, 2008 at 2:24 AM, Colin Watson <[EMAIL PROTECTED]> wrote:
>  As far as I can see, zsync is rsync for when you can't actually use
>  rsync due to lack of server support. However, cdimage does support
>  rsync, so why not just use that directly?

OK. I thought rsync was unpopular among server admins due to
server-side CPU load and potential security holes.

As noted on sounder, the mirrors usually do not have rsync, and some
people would prefer to use a mirror (e.g. if it is in their free
bandwidth zone).

zsync seems to have a few more advantages:
1) Easier to use
2) More bandwidth efficient.

I found it hard to find the file I wanted (alpha-6) via rsync://. It
doesn't help that my Firefox doesn't support rsync://. With zsync the
file would be right there, as easy to find as the .jigdo etc.

Rsync reports: (when attempting to update alpha-4 to daily)
$ rsync -zhP rsync://cdimage.ubuntu.com/cdimage/daily-live/current/hardy-desktop-i386.iso .
sent 187.60K bytes  received 478.29M bytes  706.24K bytes/sec
total size is 724.71M  speedup is 1.51

Zsync reports the target is 37.7% complete, and the .zsync file is
1.4MB in size. This suggests that zsync would transfer only
724.71*(1-0.377)+1.4 = 452.9MB, a saving of 25.4MB over the 478.29MB
that rsync received.
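
For reference, the zsync side of such an update would be something
like the following; the .zsync URL and local filename are hypothetical,
since cdimage does not currently publish .zsync files:

# -i seeds the transfer from the local alpha-4 image
zsync -i hardy-alpha-4-desktop-i386.iso \
  http://cdimage.ubuntu.com/daily-live/current/hardy-desktop-i386.iso.zsync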

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Packaging kernel modules.

2008-03-05 Thread John McCabe-Dansted
On Wed, Mar 5, 2008 at 11:31 PM, Matthew Garrett <[EMAIL PROTECTED]> wrote:
> On Wed, Mar 05, 2008 at 11:08:00PM +0900, John McCabe-Dansted wrote:
>  > I am trying to package compcache.
>
>  A much better solution is to get it included in linux-ubuntu-modules.

What process do I need to follow to get this approved?

(I have the source package and
http://www.debian.org/doc/maint-guide/ch-update.en.html, so I can
create a linux-ubuntu-modules source package that includes compcache -
but will that actually help anyone?)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Packaging kernel modules.

2008-03-05 Thread John McCabe-Dansted
I am trying to package compcache. Control.modules.in does not seem to
be used by debuild, and other packages of compiled modules in Ubuntu
don't seem to use control.modules.in either. Am I correct in assuming
that control.modules.in is only used when users compile their own
modules with module-assistant?

Also, is there a recommended way of overriding "uname -r" in upstream
module builds? Prepend a directory containing a custom uname
executable to the PATH? Patch the upstream source?
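
For what it's worth, the PATH approach can be a tiny wrapper script;
the kernel version string below is only an example:

mkdir -p fakebin
cat > fakebin/uname <<'EOF'
#!/bin/sh
# answer "uname -r" with the target kernel; pass everything else through
[ "$1" = "-r" ] && { echo "$TARGET_KVER"; exit 0; }
exec /bin/uname "$@"
EOF
chmod +x fakebin/uname
TARGET_KVER=2.6.24-12-generic PATH="$PWD/fakebin:$PATH" make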

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Zsync for ubuntu isos.

2008-03-02 Thread John McCabe-Dansted
Zsyncing ubuntu-7.10-desktop-i386.iso to xubuntu saves ~63% of
bandwidth over downloading the xubuntu .iso directly. Zsyncing 7.10 to
Hardy alpha-4 saves ~14% of bandwidth; presumably zsyncing between
different alphas saves more than this. However, zsync requires .zsync
files to be available somewhere on the net.

Perhaps we should add .zsync files to the cdimage directories?

A .zsync file for an iso is 1.2-1.4MB in size.
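
Generating one would be a single command on the server side; a sketch,
where -u records the public URL that ends up inside the .zsync file:

zsyncmake -u http://cdimage.ubuntu.com/daily-live/current/hardy-desktop-i386.iso \
  -o hardy-desktop-i386.iso.zsync hardy-desktop-i386.iso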

(I am testing ubiquity, so I'd like to keep my Hardy iso up to date
without blowing my quota. Also, I notice that manifest and list files
are available - is there a way to actually use these to build the
LiveCD?)

-- 
John C. McCabe-Dansted
PhD Student
University of Western Australia

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss