Re: kopete crashes - only 32 bit solutions
At 2007-11-09T11:08:17+1300, Roger Searle wrote:
> Hi, kopete crashes when logging in to the msn network, a commonly
> reported issue at the moment for kubuntu gutsy users, the issue is
> described here and there is a fix: http://www.kdedevelopers.org/node/3041
> The problem being that this is for 32bit - GDebi (the new package
> installer) reports error: wrong architecture 'i386' which is
> understandable given I'm an AMD64 installation.

There's an amd64 version of the fixed package available. The link[0] is in the Launchpad bug referenced in the URL you supplied.

> This is the same issue I currently have with installing skype and there
> doesn't appear to be a 64 bit skype.

There is no 64-bit version of Skype for Linux yet. This is one of the prices you pay for using closed source software. You can install the 32-bit version by installing the appropriate 32-bit compatibility libraries. There's a howto[1] in the Ubuntu forums (easily found via Google).

[0] https://bugs.edge.launchpad.net/ubuntu/+source/kdenetwork/+bug/153500/comments/29
[1] http://ubuntuforums.org/showthread.php?t=432295

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: vmware-server installation - missing libraries?
At 2007-11-07T09:16:16+1300, Roger Searle wrote:
> Issues are that these libraries are actually on the system, it is just
> that VMWare doesn't seem to be able to know where they are, the hacks
> suggested

I'll be more explicit this time. Ignore the invalid key errors for now. The install script is confused and reporting the wrong problem. The script is trying to run VMware utilities, but they are failing because they cannot start due to missing libraries. Once you fix the library problem, the invalid key errors will disappear.

On platforms where the 64-bit CPU is backwards compatible with 32-bit code, there are two types of Linux distributions available: pure 64-bit, and multi-architecture. Most distros ship multi-architecture, which means they ship both 32-bit and 64-bit code (installed in separate directories), but they usually install only a bare minimum of 32-bit packages. Depending on the distro, you may find the 64-bit libraries in /lib64 and /usr/lib64 and the 32-bit libraries in /lib and /usr/lib, or the 32-bit libraries in /lib32 and /usr/lib32 and the 64-bit libraries in /lib and /usr/lib.

In this case, you have x86-64 (64-bit) versions of the same libraries installed, but the vmware binary you are using is i386 (32-bit)--the 64-bit libraries are not compatible with it. You need to:

- Find out if there is a 64-bit version of the vmware binary available (not sure if there is, suspect not since there's no great need for it). OR
- Install the 32-bit versions of the required libraries from your distro's repository.

The details of how to do this differ between distributions. For Ubuntu, it looks like they ship a small set of 32-bit libraries in a single package (ia32-libs), and then others are available as lib32name, e.g. lib32z1 for a 32-bit libz.so.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
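As a quick way to see which 32-bit libraries a binary is missing, you can scan the output of `ldd` for "not found" entries. Here's a minimal sketch; the sample `ldd` output below is made up for illustration, and on a real system you'd feed in `ldd /path/to/vmware-binary` instead:

```python
# Sketch: given ldd output for a 32-bit binary, list the libraries the
# loader could not find -- candidates for lib32* packages to install.
# The sample output below is fabricated for illustration only.
sample_ldd_output = """\
\tlinux-gate.so.1 =>  (0xffffe000)
\tlibz.so.1 => not found
\tlibX11.so.6 => /usr/lib32/libX11.so.6 (0xf7d00000)
\tlibc.so.6 => /lib32/libc.so.6 (0xf7b00000)"""

def missing_libraries(ldd_output):
    missing = []
    for line in ldd_output.splitlines():
        if 'not found' in line:
            # The library's soname is the first token on the line.
            missing.append(line.split()[0])
    return missing

print(missing_libraries(sample_ldd_output))
```

For each library printed, look for the matching lib32 package (e.g. lib32z1 for libz.so.1) in the distro's repository.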
Re: Virtulization question.
At 2007-10-16T21:24:42+1300, David Upex wrote: I want to run Windows XP as a virtual guest using Fedora as the host. If you're running Fedora 7 or newer and you have a modern CPU that has hardware virtualization support, I recommend KVM. It is enabled in the default Fedora kernel, and the rest of the bits you need are in the core repositories. There are HOWTOs floating about on how to get started, the short version is that you make sure the 'kvm' kernel module is loaded, then start the VM up using 'qemu-kvm'. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: CLEAR broadband (was XTRA Broadband dead (again)
At 2007-09-05T22:42:17+1200, Volker Kuhlmann wrote:
> Lousy apps written in php may be common, but my Linux vendor doesn't
> update php on a frequent basis because of security bugs in the
> applications written in php.

Sure, the PHP runtime has had problems. I said that already. Here's an amusing recent example: http://use.perl.org/~Aristotle/journal/33448 In terms of exposure and potential damage, the security problems with the applications built in PHP are far worse (and more easily exploitable, in general) and more widespread than problems with the core runtime.

> And if it's *that* difficult to write secure code in php, then that says
> something about the language too (like it's a suboptimal choice for
> security-critical web apps).

Not so much the language as the library APIs, which can be a problem in any language if they're designed badly or encourage bad practices.

> One hasn't heard a lot about sendmail for the past few years, but I hear
> all the time about php. Was it month of php bugs lately?

You haven't been listening very closely. Look at the CVE list for sendmail--there have been plenty in the past few years.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debian 3.1 timezone update
On Tue, September 4, 2007 12:07 pm, Volker Kuhlmann wrote:
> I can't find an update for the libc6 package (which contains the
> timezone files) in my usual sources for Debian 3.1. Where do I need to
> be looking for this?

There hasn't been an update issued for 3.1 (Sarge) yet. There's an update for 4.0 (Etch) in the debian-volatile repository. The relevant bugs are 433870 (for Sarge) and 433869 (for Etch). It looks like they dropped the ball with the Sarge bug, so you might need to make some noise to get things moving. Worst case, you might need to download the tzdata sources[0] and update them yourself.

At 2007-09-04T12:36:05+1200, Nick Rout wrote:
> In point of fact it's not in glibc/libc6, it's in the package tzdata (on
> debian in any event).

This is true for Etch but not Sarge.

[0] http://www.twinsun.com/tz/tz-link.htm

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debian 3.1 timezone update
At 2007-09-04T12:42:14+1200, Steve Holdoway wrote:
> To the best of my knowledge, 3.1 is no longer that actively supported,
> since stable is now 4.0 (r1). If you want a copy of that, let me know.

The previous stable release has limited support for one year after a new stable release (security updates; I would expect the tzdata to be updated too, but we'll see). 3.1 has this until April 2008.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: CLEAR broadband (was XTRA Broadband dead (again)
At 2007-08-28T10:38:52+1200, Steve Holdoway wrote: B*ll*cks. The only reason that sendmail could be less secure is because the configurer didn't know what they were doing. Same as all the bad press that php gets. Blame the workman, not the tools. That, and the terrible track record for security vulnerabilities, sure. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: CLEAR broadband (was XTRA Broadband dead (again)
At 2007-08-28T12:39:43+1200, Steve Holdoway wrote:
> Not for sendmail, that's for sure.

No, definitely for sendmail. I forgot to trim the PHP bit. PHP has had some problems, but mostly it gets a bad rap due to the popular but terrible (wrt security) applications built with it.

sendmail has a long, long history of poor security. It's supposed to be better now (as of 2004 or so?), but I certainly still don't trust it a great deal. For instance, remote root vulnerabilities[0] still show up far too frequently for me to feel comfortable with it.

[0] CVE-2006-0058, affecting all Unix-like platforms.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: realtime bus timetable information on Linux
At 2007-07-15T19:20:49+1200, Christopher Sawtell wrote:
> I'm a regular bus user, so I feel somewhat inspired, but poke around as
> I might I can't find that other file. I wonder if you would be so kind
> as to give me a hint as to its location.

Load the 'Real Time Bus Info' page[0]. There's an iframe in the middle of the page that loads a page containing the starting point for the SVG map stuff[1]. Look at the source of that page and you'll see two Javascript functions (svgGetMap, svgGetPlatforms) that return URI fragments for the map[2] and platform[3] data.

[0] http://www.metroinfo.org.nz/realtime_map.html
[1] http://rtt.metroinfo.org.nz/rtt/public/PlatformETMap.aspx
[2] http://rtt.metroinfo.org.nz/rtt/public/PlatformETMap.aspx?ProjectID=CHCMETROSVG&MapBuildNo=7
[3] http://rtt.metroinfo.org.nz/rtt/public/PlatformETMap.aspx?ProjectID=CHCMETROSVG&PlatformBuildNo=44

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: realtime bus timetable information on Linux
At 2007-07-13T22:16:40+1200, Jim Cheetham wrote:
> I can't remember the problem; was it just that you couldn't find a
> working parser to turn the SVG into an image?

The major problem is that the current implementation seems to be written specifically for the Adobe SVG browser plugin. If you dig around in the site to find the SVG map data and visit it with Firefox, it'll render the map (slowly), but none of the interactive features work.

> Not too surprising. Did anyone poke about in the SVG itself and see if
> it was organised enough to be useful non-visually? (Remembering that
> it's all XML; it might be parseable)

You might be able to do something with it given enough effort, but the Right Thing to do would be to let Environment Canterbury know how unhappy you are that all this useful information is bundled up in such an inaccessible way.

Along with the SVG map data, there's a list of platform numbers and physical locations in another SVG file. I've attached a trivial parser for that in the hopes that it might inspire someone...

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]

#! /usr/bin/env python
import sys
import xml.sax

class Handler(xml.sax.handler.ContentHandler):
    def __init__(self):
        self.in_title = False
        self.title_text = []
        self.pos = None

    def startElement(self, name, attrs):
        if name == 'use':
            self.pos = int(attrs['x']), int(attrs['y'])
        if name == 'title':
            self.in_title = True

    def endElement(self, name):
        if name == 'title':
            self.printPlatform()
            self.in_title = False
            self.title_text = []
            self.pos = None

    def characters(self, text):
        if self.in_title:
            self.title_text.append(text)

    def printPlatform(self):
        flat = ''.join(self.title_text)
        id, name = flat.split(' ', 1)
        id = int(id[1:])
        print 'platform: %d, location: %s, position: %r' % (id, name, self.pos)

def main(args):
    for path in args:
        xml.sax.parse(path, Handler())

if __name__ == '__main__':
    main(sys.argv[1:])
Re: NFS shares
At 2007-07-09T17:45:36+1200, Kerry Mayes wrote:
> Client has fstab entry:
> 192.168.0.215/media/sdb1 /mnt/company nfs rw,hard,intr 0 0

That line is missing the colon that delimits the host and path portions of the FS spec; it should be something like:

192.168.0.215:/media/sdb1 /mnt/company nfs rw,hard,intr 0 0

Failing that, run rpcinfo on the client to make sure it can talk to the server's portmapper:

% /usr/sbin/rpcinfo -p server
...
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
...

If rpcinfo doesn't work, make sure the appropriate services are running and there are no firewalls blocking portmapper and NFS traffic.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
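If you capture the rpcinfo output, a couple of lines of script can confirm which NFS protocol versions the server registered. A minimal sketch (the sample rpcinfo text below is illustrative; 100003 is the well-known RPC program number for NFS):

```python
# Sketch: scan captured `rpcinfo -p server` output for the NFS service
# (RPC program number 100003) and report the registered versions.
# The sample text below is fabricated for illustration.
sample_rpcinfo = """\
   program vers proto   port
    100000    2   tcp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100005    1   udp    635  mountd"""

def nfs_versions(rpcinfo_output):
    versions = set()
    for line in rpcinfo_output.splitlines():
        fields = line.split()
        # Columns are: program, version, protocol, port, service name.
        if len(fields) >= 5 and fields[0] == '100003':
            versions.add(int(fields[1]))
    return sorted(versions)

print(nfs_versions(sample_rpcinfo))
```

An empty result would mean the server's portmapper has no NFS registration at all, pointing at the server side rather than the client's fstab.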
Re: /usr/bin/time -v zanyness
At 2007-07-10T12:04:45+1200, John Carter wrote:
> /usr/bin/time -v ls /dev/null
[...]
> That really doesn't look right. Way too many zeroes everywhere.

Yeah, time(1) is pretty broken and virtually unmaintained on Linux.

> Any idea what to use instead?

What figures are you interested in? There don't seem to be many tools around that will give you the same breadth of information time(1) tries (and fails) to.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: /usr/bin/time -v zanyness
At 2007-07-10T15:00:32+1200, John Carter wrote:
> I suspect all the info is there in /proc/self/status

Not all, and not in that file. You'd be better to start with /proc/PID/stat and statm, which are machine readable. There is a bunch of really useful information in /proc/PID, some of which is parsed and displayed by tools from the psutils package, some of which there are ad-hoc tools to deal with, and some of which isn't used by userspace (yet). One of the more useful recent additions is /proc/PID/smaps, which is useful for examining the address space of a process to get a picture of the real memory use.

> * Looks up command on path.

Just let execvp(3) (or another exec.*p variant) do the work. Duplicating the system/shell path search code would be a waste of effort.

> * forks execs it.
> * Somehow catches the exit and before /proc/pid/status vanishes...

One way to do it is to have the parent install a SIGCHLD handler and read the child's files from /proc before waiting on the child. Another way is to use getrusage(2) rather than trying to read /proc, but see below. Yet another is to use ptrace(2) to cause the child to read its own /proc files by injecting code into the exit path.

> * cats /proc/pid/status to stderr

Better to copy the data to a user-specified file rather than mess with the (shared) stderr channel.

> So I thought I'd spend the 5 minutes... The crux of the problem is to
> catch the child process before /proc/pid/status disappears. So I started
> looking at wait calls and found wait4... Aha! It gets all the details I
> need... Hmm... Let's strace /usr/bin/time. Bugger. It uses wait4 and the
> values are crap there already. ie. The problem's not /usr/bin/time, it's
> deeper in.

time(1) is one of the problems--it's showing the user data that has never had sane values on Linux. The lower level problem is that time(1) uses struct rusage.
The problem here is that struct rusage is underspecified by POSIX, and Linux only bothers to do accounting for some of the fields (more than POSIX specifies, but still not everything that struct rusage contains). The man page for getrusage on Linux (and kernel/sys.c for the undocumented fields) indicates that only the following fields are updated:

2.4 kernels:  ru_utime, ru_stime, ru_minflt, ru_majflt
2.6 kernels:  as above, plus ru_nvcsw, ru_nivcsw
undocumented: as above, plus ru_inblock, ru_oublock

This corresponds with the fields that have sane values in the output of time(1).

> Ah well, let's not bother with that then...

You never specified which fields you're interested in. If you really want everything that time(1) promised to tell you, you're out of luck. If you're interested in collecting statistics for specific things, there are almost certainly good tools already available to help you out.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
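You can see which fields come back populated without writing any C: Python wraps wait4(2) directly, the same call time(1) makes. A minimal sketch (exact values vary by kernel version, as discussed above):

```python
import os

# Fork a child, make it do a little user-mode work, then collect its
# accounting data with wait4(2) -- the same call time(1) uses.
pid = os.fork()
if pid == 0:
    sum(range(200000))  # burn a little CPU in the child
    os._exit(0)

_, status, ru = os.wait4(pid, 0)

# Fields the kernel actually maintains (per the list above):
print('user time:    %f' % ru.ru_utime)
print('system time:  %f' % ru.ru_stime)
print('minor faults: %d' % ru.ru_minflt)
print('major faults: %d' % ru.ru_majflt)
# A field Linux never filled in at this point -- always zero:
print('ru_ixrss:     %d' % ru.ru_ixrss)
```

Comparing this output with the field list above makes it clear the zeroes in `time -v` output come from the kernel, not from time(1) itself.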
Re: chroot to 64 bit...
At 2007-07-06T15:31:40+1200, Steve Holdoway wrote: Anyone know how to chroot to a 64 bit boot partition from a 32 bit host? I've just upgraded the motherboard/cpu/memory on this machine and it crashes on boot. Pain... who needs it? A 32-bit kernel can't execute 64-bit binaries natively, so you can't do it. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Test your C knowledge here.
At 2007-07-01T21:28:35+1200, Derek Smithies wrote: Is the following line of code legal? 2[abcde]; Sure. There's nothing surprising about it, either. Read the section on array subscripting in the standard to see why. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Test your C knowledge here.
At 2007-07-02T12:11:14+1200, Kerry Mayes wrote: Is this actually a trick question? Is it valid in C to have a line of code that just evaluates to b? What sort of statement is that? Yup, it's perfectly legal. If you compile with warnings enabled or check the source with a lint-like tool, you'll probably get a warning informing you that the result of the expression is not used. This warning is useful because it's not often you want to evaluate an expression and ignore the result. And it's 'b' (a single char), not b (a char * pointing to the char values 'b' and '\0'). Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Reference on gdb
At 2007-06-27T09:02:04+1200, Kerry Mayes wrote:
> There is an annoying bug in evolution that I reported but can't get it
> to do it when I have the debugger going. The debug team have told me
> that you can attach the debugger when it has frozen but I don't know how
> to do that and can't find an appropriate reference.

The official documentation for GDB is here: http://sourceware.org/gdb/documentation/ There are also plenty of basic tutorials around to get you started with using it. Here's the basic crash course I usually give people:

0. It might be useful to install the debuginfo packages for Evolution and the libraries it uses. They should be available from your distro's package repository as separate packages with names like libfoo-debuginfo. Without these, the details available via GDB will be somewhat limited, but might be enough for the developers to track down the problem.

1. Reproduce the hang.

2. Attach to the hung Evolution process with GDB:

   a. Find the process ID:
      % pgrep evolution
      12345
   b. Attach with the debugger:
      % gdb -p 12345
   c. Produce a backtrace for all threads:
      (gdb) thread apply all backtrace
      You might want to specify 'full' after 'backtrace' to get the
      values of local variables.
   d. Detach from the process and quit GDB:
      (gdb) quit
      The program is running.  Quit anyway (and detach it)? (y or n) y

Depending on the stack depth and the number of threads, the output may be too large to cut and paste from your console easily. One solution to that is to run GDB in batch mode instead:

   a. Write out a GDB script:
      % echo 'thread apply all backtrace' > backtrace.gdbscript
      % echo quit >> backtrace.gdbscript
   b. Find the process ID:
      % pgrep evolution
      12345
   c. Attach with the debugger in batch mode:
      % gdb --batch -x backtrace.gdbscript -p 12345 > backtrace.log 2>&1
   d. Look at the backtrace.log file produced.
If you decide to install the debuginfo packages, it might be worth following the rest of the steps above first. Once you've got a basic backtrace, you can work out which packages you will need debuginfo for by looking at what functions are on the stack, e.g.:

Thread 8 (Thread 1084229968 (LWP 25128)):
#0  0x00380b0c7956 in poll () from /lib64/libc.so.6
#1  0x0035e9a23a94 in PR_Poll () from /usr/lib64/libnspr4.so
#2  0x2aaab5ee6118 in __cxa_pure_virtual () from /usr/lib64/firefox-2.0.0.4/components/libnecko.so
#3  0x2aaab5ee62f8 in __cxa_pure_virtual () from /usr/lib64/firefox-2.0.0.4/components/libnecko.so
#4  0x0035ea6775d7 in __cxa_pure_virtual () from /usr/lib64/firefox-2.0.0.4/libxpcom_core.so
#5  0x0035e9a2778d in __cxa_pure_virtual () from /usr/lib64/libnspr4.so
#6  0x003e484061c5 in start_thread () from /lib64/libpthread.so.0
#7  0x00380b0d062d in clone () from /lib64/libc.so.6

In the above example, I would expect to get a more useful backtrace after installing debuginfo packages for libnspr4, libxpcom_core and libnecko. Note that, in this case, libxpcom_core and libnecko are part of a packaged Firefox install and it's likely that there's a firefox-debuginfo package containing what I need for both of those libraries.

While you're inside GDB, you can generate a dump of the process state (core dump) by using the command:

(gdb) gcore /path/to/where/i/want/to/save/it/evo-core-hung

This file can then be sent to the developers for further inspection. Don't send it to them unless they ask for it, because core files are typically quite large. It's also important to understand that the core file is very likely to contain sensitive private data (passwords, contents of emails), so it's worth trying to reproduce the problem with a fresh profile if you're intending to share a core file with the developers.

The result of a system call trace generated with strace(1) is often useful for debugging. You can invoke strace like so:

1. Find the PID:
   % pgrep evolution
   12345

2.
Attach with strace:
   % strace -tt -f -p 12345 2>&1 | tee strace.log
   [lots of output]

3. Kill strace by entering Ctrl-C after letting it run for a little while.

4. Look at strace.log for a copy of the system call trace.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: error in Thunderbird
At 2007-06-21T09:23:51+1200, Zane Gilmore wrote:
> Has anybody else had the problem in Thunderbird where, while trying to
> view the source of an email message, a pop-up dialog appears with:
>
> XML Parsing Error: not well-formed
> Location: chrome://global/content/viewSource.xul
> Line Number 2, Column 1: {
>
> It's very irritating as Google has been particularly unhelpful and what
> the heck is the chrome protocol? I thought chrome was a mozilla skin?

https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/23273 maybe?

Not a skin; all of the non-browser area UI is called chrome. The view source window that you're having problems with is written in XUL (stored in the file chrome://global/content/viewSource.xul) and shipped with Thunderbird. The chrome: protocol is a way to reference and load these internal chrome resources.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: ssh ignoring first password
At 2007-06-11T17:55:07+1200, Christopher Sawtell wrote:
> On 6/11/07, Zane Gilmore [EMAIL PROTECTED] wrote:
> > ssh -V
> > OpenSSH_4.3p2 Debian-8, OpenSSL 0.9.8c 05 Sep 2006
>
> iago ~ # ssh -V
> OpenSSH_4.5p1, OpenSSL 0.9.8d 28 Sep 2006
>
> It would be a good notion to bring it up-to-date, not only because it
> could well fix your problem but also because ssh is one of the apps
> which one should keep fairly close to being in the current epoch. :-)

4.3p2-9 is the latest version in the Debian stable repositories. Zane's on 4.3p2-8. The fixes in -9 are for an interaction between the GSSAPI support in ssh and the ssh-krb5 package.

Keeping your system up to date with what the distribution offers is a very good idea. Building newer versions from upstream should be undertaken only as a last resort. Why create more work for yourself? Leave it to the smart and hard working distribution maintainers.

Also, by your own logic, your OpenSSH install is out of date. 4.6p1 is the latest upstream version.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: ssh ignoring first password
At 2007-06-11T11:49:50+1200, Zane Gilmore wrote: Any ssh login ignores the first password entry. We have now got into the habit of pressing enter before entering the password. Firstly, you should be using public key authentication, not passwords. The first port of call when debugging an ssh problem is to use the verbose logging option on the client (-v through -vvv) to see what's going on. Next, check if sshd is logging anything after the first failed password authentication attempt. You might need to turn up the log verbosity in sshd_config to get enough information. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-22T18:32:02+1200, Roy Britten wrote:
> I'm not currently benefiting from hardware graphics acceleration, and
> sadly have no idea what to do to enable it. The Q965 chipset seems to be
> pretty new and my google-fu has failed me. Suggestions, including you
> should have googled for you idiot welcomed.

965Q is supported by recent versions of the Intel X.org drivers. Looking at Ubuntu Feisty's package selection, xserver-xorg-video-i810 is available in the core repository, but xserver-xorg-video-intel is only available via the universe repository. It looks like the latter is a newer version of the former, but with a new package name. I'm not really sure. In Feisty, the i810 driver is 1.7.4 and the Intel driver is 1.9.94. Fedora 7 only has i810 and is at version 2.0.0. I know that Fedora's i810 driver was sufficient to get 3D acceleration working on a 965Q based desktop I've got here. From your xorg.conf, it looks like you're already using the Intel driver...

> $ glxinfo | grep -i direct
> direct rendering: No

Right, so there's a small flock of ducks you need to have page aligned to get 3D acceleration working with Linux. Firstly, the kernel's DRM driver needs to be working. It sounds like you've got the right modules loaded for your chipset, but you should check the kernel log to see if it has recognized your hardware and initialized correctly. Then, look in the Xorg.log in /var/log to see if you're using the correct driver for your video card (i.e. it hasn't fallen back to vesa or something), and that DRI initialized correctly. Make sure you haven't got DRI disabled in xorg.conf. You can probably assume that the appropriate OpenGL bits are already in place and working; most modern distros will install these by default. The NVIDIA proprietary drivers (used to) futz around with these, which could leave the machine in a broken state if you tried to switch to non-NVIDIA drivers.
> glxgears running fullscreen is claiming ~150 fps, but just looking at it
> I'd say it's rendering maybe 1.5 fps.

glxgears is pretty much useless as a benchmark. Even so, ~150 is definitely in the realm of software rendering. There's no way it's 1.5 unless you're running on a 486-era machine.

> but the gnome screen resolution controller offers many more -- I'm
> assuming it's querying my monitor (a ViewSonic VG2021m) for its
> capabilities.

Not directly. It's just querying X via the X RandR protocol. You can run xrandr(1) at the command line and see what's available as well. The days of the rigid configuration of xorg.conf are numbered--in fact, with the current Fedora packages you can run X without an xorg.conf file at all, and everything Just Works (unless you want to do fancier things like multihead, which still requires some manual configuration for now).

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-22T20:17:23+1200, Roy Britten wrote: I don't want to post ~500KB of logs (or dozens of lines extracted therefrom) to the list; could you suggest what I should be seeing? In the kernel logs you're looking for lines starting with [drm], and probably any log entries immediately around lines like that. In the X logs you're mostly looking for lines starting with I810 or Intel. If everything is working correctly, one of the last lines that match either of those will say 'direct rendering: Enabled'. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-22T21:43:36+1200, Roy Britten wrote:
> Here I see direct rendering: Enabled and yet glxinfo still reports no
> direct rendering. Would the AIGLX warning messages be significant?

They seem to be harmless.

> [Xorg.0.log]

Are there any other (WW) or (EE) lines in Xorg.0.log?

> Section "Module"
>     Load "dri"
> EndSection

Is glx included in this section?

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T00:15:53+1200, Roy Britten wrote:

Run the following and see if it gives you any clues (otherwise, send the output to the list):

LIBGL_DEBUG=verbose glxinfo 2>&1 >/dev/null

At 2007-05-23T08:08:03+1200, Rex Johnston wrote:
> Using these drivers? http://intellinuxgraphics.org/download.html

He's already using some (older) version of these.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T09:12:23+1200, Kerry Mayes wrote:
> I also seem to remember reading that the only difference between the
> latest version of the i810 and intel drivers was that the intel driver
> included the 915resolution functionality.

It's called mode setting. Intel have only just recently decided to support open source mode setting code... previously you were stuck with using a hack (915resolution). There are other changes in the 'intel' driver vs i810--it is just a new version of the i810 driver, renamed and with further developments included.

At 2007-05-23T09:29:23+1200, Roy Britten wrote:
> $ LIBGL_DEBUG=verbose glxinfo 2>&1 >/dev/null
> libGL: OpenDriver: trying /usr/lib/dri/i965_dri.so
> libGL error: dlopen /usr/lib/dri/i965_dri.so failed
> (/usr/lib/dri/i965_dri.so: undefined symbol: _glapi_add_dispatch)

http://www.mail-archive.com/[EMAIL PROTECTED]/msg27359.html suggests that there's a version mismatch between the driver and libGL. Ubuntu provides version 6.5.2-3 of libgl1-mesa-dri and libgl1-mesa-glx, which seems pretty recent. Are you sure you haven't installed anything that replaces libGL? I know that the proprietary ATI/AMD drivers and the NVIDIA drivers include their own (incompatible) libGL.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T11:14:23+1200, Roy Britten wrote:
> Ubuntu does seem to install some ATI and nvidia stuff by default. I
> removed the xserver-xorg-video-ati, nvidia-kernel-common and
> linux-restricted-modules-generic packages and rebooted, but that hasn't
> had any apparent effect.

It's none of those. Note that I said _proprietary_ drivers. I don't know (or care, I won't use them) what the package names for the proprietary NVIDIA/ATI drivers are... you'll need to work it out yourself and see if they have ever been installed, and, if they have, whether they've messed with your standard Mesa/libGL install. If it's a fresh install of the distro on this hardware, you can probably just assume they've never been installed.

It could be that the Intel driver from universe is somewhat broken. Switch back to the i810 driver and see if that works. You might need to use 915resolution to get the correct display resolution when running with the i810 driver. Maybe what's shipped with Feisty is just broken for your configuration. File a bug in Launchpad if you're still stuck.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T11:30:18+1200, Kerry Mayes wrote:
> Based on this thread, I've switched from i810 to intel driver (can't get
> dual monitors working, but I'll get back to that).

Um, what problem are you experiencing? What distro, hardware, X configuration, earlier troubleshooting, etc. etc.? We're not all psychic.

> So I'm getting further than Roy.

Further, yes, in that whatever problem you have is a different problem to his.

> I don't know what the warning about visual 0x5b is - is that a
> resolution setting? Maybe 915resolution is confusing the driver?

It seems like you can safely ignore those warnings. As far as 915resolution goes, you don't need it with the newer i810/intel drivers.

> Is drirc something I need to install, or can I just create a blank
> config file?

You don't need it; you can ignore that error completely.

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T12:34:54+1200, Kerry Mayes wrote: The reason it was in brackets was because I wasn't asking about that at this stage. That's what I meant by I'll get back to that. :-) I was asking how to interpret the output of the libgl debug command. That is, I want to get 3d acceleration working before worrying about dual monitors. Oh, I wasn't talking about dual head. I mean, what problem do you have that caused you to start following along with the troubleshooting? So does that mean that 3d acceleration is working? 3D acceleration is working if glxinfo says direct rendering is enabled, OpenGL applications run at a decent speed, and your machine doesn't crash. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Hardware graphics acceleration
At 2007-05-23T13:06:54+1200, Nick Rout wrote: I have read on the 915 resolution homepage that it's functionality was going to be incorporated into the driver, and further info will follow. Looks like this might have happened. There must be an announcement somewhere! Question: Is the BIOS still used for mode setting? Answer: The Intel X.org driver version 2.0 now includes native mode setting code that no longer relies on the BIOS to configure the chip. Most Linux distributions continue to ship older versions of the driver which rely on the BIOS for mode setting. -- http://intellinuxgraphics.org/documentation.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: visibility memory useage in elf binaries
At 2007-05-21T20:52:42+1200, Chris Bayley wrote:
> What I want to do is something like this: first extract all the symbols
> and size information using nm -S, then graph the information with an
> expression in section terms ie (.text|.rodata) vs. an expression in
> symbol terms ie. symbols starting ('rtos_[.+]|gui_[.+]) thus the above
> example would show me the combined ROM usage of all the symbols
> beginning with 'rtos_' or 'gui_'.

Not sure if there's anything already out there; I tend to write ad-hoc scripts for this sort of thing.

> even cooler would be then to present the information in a pie chart
> format like 'filelight' or konqueror's 'radial view' does with the
> ability to drill down and look in more detail at a given area.

Just feed the output of your script into gnuplot or some other plotting tool for a static visualization. If you want it to be dynamic and zoomable, you're probably best off to look at the data format that KCachegrind consumes and make your script output that.

> If you are a programmer and can identify the need I refer to above what
> are the tools you are already using to glean this kind of analysis of
> memory usage in your programs?

What you're talking about above is not looking at memory usage, just program code and data size on disk. Whether it consumes much memory is an implementation detail, e.g. on modern UNIX systems the executable is paged in on demand. You might also want to look at the massif plugin for Valgrind for looking at runtime heap usage and the pahole[0] ELF spelunking tools if you're looking to shrink your data structures.

[0] http://git.kernel.org/?p=linux/kernel/git/acme/pahole.git;a=summary

Cheers,
-mjg
--
Matthew Gregan |/ /|[EMAIL PROTECTED]
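The kind of ad-hoc script mentioned above might look like the sketch below: it totals symbol sizes from `nm -S` style output, grouped by name prefix. The sample input and the rtos_/gui_ prefixes are made up for illustration (nm's exact output format varies slightly between toolchains):

```python
import re
from collections import defaultdict

# Sketch: total up symbol sizes from `nm -S` style output, grouped by
# symbol-name prefix. Sample input below is fabricated for illustration;
# in practice you'd pipe in the output of `nm -S yourbinary`.
sample_nm_output = """\
08048100 00000040 T rtos_schedule
08048140 00000100 T rtos_create_task
08049000 00000080 T gui_draw_button
08049080 00000200 R gui_font_table
0804a000 00000010 T main"""

def sizes_by_prefix(nm_output, prefixes):
    totals = defaultdict(int)
    for line in nm_output.splitlines():
        # nm -S lines: address, size, symbol type, symbol name.
        m = re.match(r'^[0-9a-f]+\s+([0-9a-f]+)\s+\S\s+(\S+)', line)
        if not m:
            continue  # skip symbols with no size field
        size, name = int(m.group(1), 16), m.group(2)
        for prefix in prefixes:
            if name.startswith(prefix):
                totals[prefix] += size
    return dict(totals)

print(sizes_by_prefix(sample_nm_output, ['rtos_', 'gui_']))
```

The per-prefix totals are exactly the "label, value" pairs a plotting tool like gnuplot wants as input for the static visualization suggested above.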
Samba/CIFS interaction [was Re: Thank you]
At 2007-05-15T09:03:12+1200, Zane Gilmore wrote: However, just recently, I had a few problems with permissions on a CIFS connection. When I changed to smbfs it all worked. The ownership of the share refused to change to something other than the UID of the owner of the directory on the server. Was the server running Samba? There's an extension to the CIFS protocol for Unix-specific features. It's enabled by default in most versions of the CIFS VFS module. It can cause problems like the one you describe as a result of disparate configurations of the server and client (e.g. if they don't have matching UIDs for the same named users, like the problems you can run into with NFS). I end up turning the Unix extensions feature off when the CIFS module is loaded by sticking the following in /etc/modprobe.d/options:

install cifs /sbin/modprobe --ignore-install cifs; /bin/echo 0 > /proc/fs/cifs/LinuxExtensionsEnabled

I don't recommend doing this in general, but if you're in a network where none of the UIDs map correctly, it can be handy. You can also control it on a per-mount basis by setting /proc/fs/cifs/LinuxExtensionsEnabled to 0 before mounting and setting it back to 1 after mounting. It should be a mount-time option rather than yet another abuse of /proc, but it's not. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
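P.S. The per-mount toggle looks something like this (a sketch only; the share, mount point, and username are made-up examples, and you need root):

```
# Disable the Unix extensions just for this mount:
echo 0 > /proc/fs/cifs/LinuxExtensionsEnabled
mount -t cifs //server/share /mnt/share -o user=roger
# Re-enable them for subsequent mounts:
echo 1 > /proc/fs/cifs/LinuxExtensionsEnabled
```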
Re: Any nvidia twinhead gurus out there???
At 2007-05-13T14:22:33+1200, Steve Holdoway wrote: up until now, I've been using one of my screens through a kvm switch, necessitating its use via analog video. Now, I'm ditching the kvm, and have connected both monitors digitally. Here's where the problems are starting. The right one just won't sync, and keeps blanking out. Monitors are both Viewsonic vg2021m, and the card is a trusty nvidia GeForce GE 5200 with 128MB. Just so I'm sure I understand your problem, you're getting the expected display on each monitor, but one of them periodically blanks out for a few seconds and then comes back? The DVI electronics on that card probably won't be able to drive two DVI ports at the kind of resolution you're asking for. At a lower resolution you wouldn't have any trouble. I've seen similar problems with 5700 Ultras and 6800 Ultras. I had some luck with using extremely short DVI-D cables and tweaking the refresh rate down to ~50Hz to get a stable display in one case. NVIDIA's DVI implementation has been truly awful until pretty recently. The 7xxx and upwards cards are decent, though. Your choices are: buy a better card, or use VGA for one of the two monitors. You'll have to experiment to work out which DVI link is the weaker one. From memory, it's the 'external' one that is the problem... see below. 
(II) NVIDIA(0): NVIDIA GPU GeForce FX 5200 at PCI:3:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 131072 kBytes
(--) NVIDIA(0): VideoBIOS: 04.34.20.16.00
(II) NVIDIA(0): Detected AGP rate: 8X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on GeForce FX 5200 at PCI:3:0:0:
(--) NVIDIA(0):     ViewSonic VG2021m (DFP-0)
(--) NVIDIA(0):     ViewSonic VG2021m (DFP-1)
(--) NVIDIA(0): ViewSonic VG2021m (DFP-0): 135.0 MHz maximum pixel clock
(--) NVIDIA(0): ViewSonic VG2021m (DFP-0): Internal Dual Link TMDS
(--) NVIDIA(0): ViewSonic VG2021m (DFP-1): 165.0 MHz maximum pixel clock
(--) NVIDIA(0): ViewSonic VG2021m (DFP-1): External Single Link TMDS
(II) NVIDIA(0): Assigned Display Devices: DFP-0, DFP-1
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     1400x1050,1400x1050
(II) NVIDIA(0): Virtual screen size determined to be 2800 x 1050

Has detected the right monitor as being completely different to the left one! 165MHz is the maximum clock frequency for single link DVI. It turns out that you can drive LCD panels at high resolutions and refresh rates by going a little out of spec and running in 'reduced blanking' mode, which is 135MHz. Problem is, the older NVIDIA cards can't really drive the link at 165MHz reliably, despite what they advertise (there's probably a class action waiting there for some keen party). The card has one internal TMDS and an add-on TMDS part (which is a separate chip; newer cards usually have both internal). This is the internal vs external TMDS listed in the log output above. Only the internal TMDS supports the sort of configuration that will allow it to run in the 135MHz 'reduced blanking' mode. Work out which of the two DVI ports is associated with the external TMDS and try running the monitor connected to that port via VGA instead. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: truncating a file
At 2007-05-10T09:23:52+1200, Nick Rout wrote: I feel I should know this, but nothing comes to mind. I need to truncate a 6.5G file to the first 2G of the file. The remainder can be thrown away. There is no space left on the filesystem. So whatever I do has to be done to the original, copying the first 2G to another file is not going to cut the mustard. man [2|3p] truncate seems to reveal a C function, but I cannot find a command line equivalent. Use dd(1), e.g.:

% ls -lh bigfile
-rw-r--r-- 1 kinetik kinetik 954G 2007-05-10 09:52 bigfile
% dd bs=1 seek=424935705 if=/dev/null of=bigfile
0+0 records in
0+0 records out
0 bytes (0 B) copied, 5.6991e-05 seconds, 0.0 kB/s
% ls -lh bigfile
-rw-r--r-- 1 kinetik kinetik 406M 2007-05-10 09:54 bigfile

Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
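P.S. The trick works because GNU dd truncates the output file at the seek offset before copying anything. Here it is at a safe toy scale (throwaway file name); for your 2G case the seek value would be 2147483648 (2*1024^3) with bs=1:

```shell
# Make a 10-byte file, then truncate it to its first 4 bytes:
printf '0123456789' > demo.bin
dd bs=1 seek=4 if=/dev/null of=demo.bin 2>/dev/null
cat demo.bin    # prints: 0123
```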
Re: broadband with ubuntu
At 2007-05-10T16:02:57+1200, Matthew Whiting wrote: Try this: sudo echo 'nameserver 4.2.2.2' > /etc/resolv.conf And maybe read this thread - it's not quite the same but might work... http://www.mepislovers.org/forums/showthread.php?t=6631 it freakin worked!! cheers :) what does this imply? Glad that everything is working now, but... I don't think we've actually found the root cause of the problem. What did you do... run that sudo command line, or follow some instructions from the mepislovers.org link? That sudo command line can't possibly have changed the /etc/resolv.conf file unless you happened to be logged in as root already (the '>' redirection is performed by your own shell before sudo ever runs). Were you? It seems that others in this thread had already established that your DNS resolution was working fine, so I can't really see how making changes in this area would make any difference anyway... Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
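P.S. For reference, the two usual working forms of that one-liner; both arrange for the file write itself to happen with root privileges:

```
echo 'nameserver 4.2.2.2' | sudo tee /etc/resolv.conf
sudo sh -c "echo 'nameserver 4.2.2.2' > /etc/resolv.conf"
```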
Re: reducing folder levels
At 2007-04-27T09:50:56+1200, Roger Searle wrote: Is there a command to take out 4 levels of folders from this structure so the files go into /home/roger/lastfolder? Either during or after extracting from the tarfile? GNU tar has the --strip-path (or --strip-components in newer versions) option. It's a GNU extension, so you can't assume it's available elsewhere. e.g.

% tar -tf foo.tar
foo/
foo/bar/
foo/bar/baz/
foo/bar/baz/quux/
foo/bar/baz/quux/file5
foo/bar/baz/quux/file1
foo/bar/baz/quux/file4
foo/bar/baz/quux/file3
foo/bar/baz/quux/file2
% tar --strip-components 3 -xvf foo.tar
foo/
foo/bar/
foo/bar/baz/
foo/bar/baz/quux/
foo/bar/baz/quux/file5
foo/bar/baz/quux/file1
foo/bar/baz/quux/file4
foo/bar/baz/quux/file3
foo/bar/baz/quux/file2

Note that full pathnames are printed, but it has stripped foo/bar/baz/ when creating the files.

% ls
quux/

Another option is to use the -s option to pax. pax is part of POSIX, so it's portable and you can expect it to be available on POSIX-conforming systems. e.g.

% pax -s ',^\([^/]\+/\?\)\{0\,3\},,' -rvf foo.tar
quux
quux/file5
quux/file1
quux/file4
quux/file3
quux/file2
pax: ustar vol 1, 9 files, 10240 bytes read, 0 bytes written.

Replace the '3' in the regex specified by -s with the number of path components you need to strip off. The situation I have relates to a tar file created by someone else, so I am wanting to deal with a tarfile already created. I would however also be interested in an option to ignore the top x levels of folders that is saved when creating a tar file too. Use the -C option to tar (or just use the 'cd' shell command) to change into the base directory. With pax you can use the -s option with a regex like the one I used above. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
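P.S. A runnable sketch of the creation-side tip, using -C so the archive is rooted below the unwanted directories (paths here are throwaway examples):

```shell
# Build a nested tree, then archive starting from foo/bar/baz down:
mkdir -p foo/bar/baz/quux
echo hi > foo/bar/baz/quux/file1
tar -C foo/bar/baz -cf flat.tar quux
tar -tf flat.tar    # lists quux/ and quux/file1, with no foo/bar/baz prefix
```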
Re: reducing folder levels
At 2007-04-27T11:19:33+1200, Roger Searle wrote: [EMAIL PROTECTED]:~ rpm -q tar tar-1.15.1-42.2 and man tar shows neither option! never mind, --strip-components is available and does what I'm looking for. If it's a GNU program, you're far better off looking at the info page than the man page. The FSF have some policy for using info in preference to man, meaning that most GNU tools have out-of-date and non-definitive man pages written by someone else. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Ubuntu Feisty Fawn upgrade.
At 2007-04-23T18:10:06+1200, Kerry Mayes wrote: (is there a way of mounting an iso as a virtual cd rom under linux?) installation was easy (if time consuming). mount -o loop /path/to/disk/image /path/to/mount/point Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: scripting, mkdir help
At 2007-04-13T21:12:43+1200, Volker Kuhlmann wrote: but you were using $(..) already, which also only works in bash but not in posix sh. Can we kill this fallacy already? $() is specified by POSIX[0]. If you've got some shell that claims to be POSIX conformant and does not support $(), the shell is broken. [0] http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_06_03 Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
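P.S. Beyond being standard, $() also nests cleanly, which is reason enough to prefer it over the older backtick form:

```shell
# Nested command substitution, no escaping required:
echo "$(basename "$(dirname /a/b/c)")"    # prints: b
# The backtick equivalent needs backslashes: echo "`basename \`dirname /a/b/c\``"
```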
Re: scripting, mkdir help
At 2007-04-14T12:46:13+1200, Volker Kuhlmann wrote: On Sat 14 Apr 2007 12:24:43 NZST +1200, Matthew Gregan wrote: Can we kill this fallacy already? $() is specified by POSIX[0]. If you've got some shell that claims to be POSIX conformant and does not support $(), the shell is broken. Or just very old. Your reference is 2004. It has been in POSIX for much longer. They don't make the historical standards available via the web (as far as I can find). Solaris /bin/sh 10 years ago did not support it AFAIR. That shell does not purport to be POSIX compliant. It's just an ancient Bourne shell implementation. You want /usr/xpg4/bin/sh or somesuch for a POSIX compliant shell. This is typical of Solaris in general. The tools available in the default /bin (/usr/bin, etc.) directories are ancient and non-standard, and you have to jump through hoops to get anything vaguely standard compliant. Such is the pain of backwards compatibility, I guess. But I don't really care, I concluded a long time ago that the only really portable way of shell programming is using bash. Or tcsh. Well, if you're going to give up on POSIX shell compatibility, you might as well drop shell completely and use a real programming language while you're at it. :-) Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Mailing List issues.
At 2007-04-03T22:25:35+1200, Don Gould wrote: Why is that people on this list are constantly so rude yet feel the need to pull people up? Gets on my goat. Baa. As for GPLHost's sg server, I might look at moving my mail to speed things up or even changing my clug sub. Before you do anything hasty... I'm not particularly convinced that your mail exchanger is at fault. There have been other people reporting delayed and unusually out-of-order message delivery from this list (I've seen it myself, but it doesn't bother me enough to do anything about it). Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: OT: John Backus died
At 2007-03-23T09:19:09+1200, John Carter wrote: Now if we can just kill off FORTRAN we can get back to where we were when LISP arrived and start making progress. :-) When it _arrived_? No thanks. The world has made quite a bit of progress since then. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debian Etch SCSI asynchronous probing
At 2007-03-18T15:16:46+1200, Jasper Bryant-Greene wrote: I've just installed Debian Etch onto a 1GB USB flash drive using Since Etch is not released yet, I guess you mean a weekly or daily build of the Etch candidate release. The fix would probably be to have device-mapper retry its initialisation, but does anyone know of a quick workaround? e.g. disable the async SCSI probing via a kernel boot parameter etc? You can force synchronous probing by specifying scsi_mod.scan=sync on the kernel command line. You could also try supplying rootdelay=5 (where 5 is a number of seconds to wait before attempting to mount the root filesystem). I've got no idea if either of these will work around your specific problem. There's an existing bug (401916) about this in the Debian BTS. You might want to read that for some more background. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
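P.S. With GRUB legacy, those parameters go on the kernel line in menu.lst. An illustrative entry (the kernel version and root device are placeholders for your own):

```
title  Debian Etch (synchronous SCSI scan)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.18-4-486 root=/dev/sda1 ro scsi_mod.scan=sync rootdelay=5
initrd /boot/initrd.img-2.6.18-4-486
```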
Re: IMAP Server that supports sub folders
At 2007-03-15T12:41:06+1300, Don Gould wrote: I'm currently using dovecot It doesn't support folders within folders It does. If it's not working for you, check your configuration or post specific details about the error you're seeing. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Is it time to boycot the Warehouse?
At 2007-03-15T12:42:19+1300, Don Gould wrote: Do you recall the impact of including IE in windows 95 for Netscape? Hang on, you want to call for a boycott because of the bundling of some search toolbar? You're not, perhaps, a little more concerned that you can't buy a PC that doesn't come bundled with _Windows_? Google, Yahoo!, and the others are already striking similar toolbar bundling deals with the vendors. Microsoft are doing the same thing. There's nothing too strange about this. Besides, to gain any sort of monopoly on search they need to make their search engine as good as the others, otherwise people will keep going to the competition. Microsoft didn't win the browser wars in the late 90s just thanks to bundling, they actually had built a _better_ browser than Netscape could as well, otherwise people still had a reason to download Navigator. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: text session mirroring and logging...
At 2007-03-13T23:26:24+1300, Steve Brorens wrote: Chris was after a way to mirror an ssh session (so that a user at the remote end could watch and learn). Take a look at GNU Screen in multiuser mode. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
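P.S. A sketch of the setup, with made-up user names 'alice' (sharing her session) and 'bob' (watching); note that the screen binary usually needs to be setuid root for cross-user attaching to work:

```
# alice starts a named session:
screen -S lesson
# inside the session, enable sharing and grant access:
#   Ctrl-a :multiuser on
#   Ctrl-a :acladd bob
# bob attaches (read/write unless restricted with aclchg):
screen -x alice/lesson
```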
Re: executing cron job - fixed - and vi
At 2007-03-10T10:51:23+1300, Volker Kuhlmann wrote: Well whatever they say, you'd need to set both those variables and fact is that VISUAL takes precedence (something they don't say, and which is a bit dubious for an application which clearly need to edit a file). VISUAL is not for the display of a read-only file, it's for setting your preferred visual editor. EDITOR may be set to a non-visual editor. vi is an example of a visual editor, ed is an example of a non-visual editor. Once you know the definition of VISUAL, it's pretty obvious why crontab(1) would prefer VISUAL to EDITOR if both have been set. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
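P.S. The lookup order can be written as a single shell expansion, which is roughly what crontab-style programs implement: VISUAL first, then EDITOR, then a built-in default (vi here stands in for whatever the program's default is):

```shell
# Prefer VISUAL, fall back to EDITOR, then to vi:
VISUAL=vi EDITOR=ed sh -c 'echo "${VISUAL:-${EDITOR:-vi}}"'      # prints: vi
EDITOR=ed sh -c 'unset VISUAL; echo "${VISUAL:-${EDITOR:-vi}}"'  # prints: ed
```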
Re: executing cron job - fixed - and vi
At 2007-03-10T11:40:46+1300, Volker Kuhlmann wrote: Btw how do you find this out? man VISUAL isn't informative. Endosmosis. :-) I haven't found any definitive documentation or central registry for these things. It actually turns out to be kind of a pain when these sorts of discussions come up, because there's nowhere to point people... SUSv3 mentions oh, EDITOR or VISUAL might be set but doesn't discuss the meaning of them at all. The Art of Unix Programming mentions them, mentions the visual vs line based thing in a footnote, and vaguely recommends that new programs just look for EDITOR. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Monotone
of things we use that perform heap allocation behind our backs (e.g. lots of the STL), but we don't need to worry about dealing with pointers, lifetime management, etc. with those. You say you did this as a Summer of Code thing? Are you still a student? What year doing what? The project has participated in Summer of Code both of the previous times it has been run, and we'll be signing up again for 2007. I'm not a student--my interaction with SoC has been as a mentor for the monotone project, and a subsequent post-SoC mentor summit at Google. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Monotone
At 2007-03-01T20:40:39+1300, Christopher Sawtell wrote: 1) Can one commit binary files, such as pictures, to a monotone network? Sure, that will work just fine. Changes between versions of a file are stored as a description of only the differences (this is computed using the xdelta algorithm, which is effectively byte based and will work well on any type of file). During the monotone summit last month, we were sharing our collection of photos (around 1GB total) using monotone. There are, however, a few caveats: - The existing content merger (a line-based 3-way merger) won't work on them. Theoretically you can tell monotone to use another merger for certain file types, but nobody has really tried yet, so a little bit of coding would probably be required to get this working. Actually, that's not quite true--one of the monotone developers wrote a special content merger that understood the formatting of ChangeLog files and made merging them much easier as a proof of concept. - Because monotone is primarily designed for managing source code or other trees of files with similar characteristics, it assumes that an entire file can be reconstructed in memory. This means that huge files won't work well because of the memory requirements of dealing with them. There are versioning systems around designed specifically to deal with versioning large files (usually media). As far as I know, they're all commercial offerings, and usually targeted at video game developers and other areas of digital art. 2) Do new branches create copies of completely unchanged files, thus wasting disk space? No, new branches are free in every respect. Creating a branch is just like committing a new revision (and takes advantage of all the same delta compression, etc.), but happens to have a new branch name stored in the branch certificate.
It might be easier to understand this once you realise that the concept of branches in monotone is just a way to filter a view of the complete history in your repository. Imagine if you start with a single revision, A, in your repository:

A

Now, you commit a child of that, B:

  A
 /
B

You decide to create a new branch, and revision C is the first new revision in that branch:

A
 \
  C

And then do some more work as D and E:

A
 \
  C
  |
  D
  |
  E

On the first branch, you commit some more work--F and G:

  A
 /
B
|
F
|
G

The graphs above show the view of the history with everything filtered out except the branch you're working on. Behind the scenes, the graph really looks like this:

  A
 / \
B   C
|   |
F   D
|   |
G   E

That reminds me of something I forgot to put in the 'Cool stuff' section of my last post. Because the history is represented as a DAG, and because monotone allows you to create a new child revision of any revision, you can branch from or make changes against old revisions at any time in the future. One really neat use of this is described in the DaggyFixes page in the monotone wiki. It's too much to cover here, so I'll just link to the wiki page: http://venge.net/mtn-wiki/DaggyFixes Also, if anybody is playing with monotone, I recommend looking at the monotone-viz tool. It's a visualization tool written in O'Caml that produces very pretty graphs of the history stored in a monotone repository. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Monotone
At 2007-02-26T14:46:41+1300, John Carter wrote: On Mon, 26 Feb 2007, Matthew Gregan wrote: What would you like to know? Something about Monotone specifically, or distributed version control systems in general? All of the above. Well, that will be a very large email or quite a long talk. :-) Here's a little bit (it turned out to be kinda huge) on monotone, and answers to some of your questions... feel free to follow this up with more questions if you're still curious. If there aren't many people on the list interested, it's probably better to take it off list or move it to the monotone-devel list, IRC, or something... it's only Linux related in that it happens to run on Linux, and Linus once considered it while looking for a replacement for BitKeeper before he went off and wrote Git (parts of which were quite strongly influenced by monotone). :-) Introduction: The website is: http://monotone.ca The project was founded by Graydon Hoare in late 2002 and has been building up steam ever since. It's well supported on Linux, OS X, Windows, and most other Unix-like platforms. We provide binaries for common platforms, and there are packages included in most major Linux distros. One goal of the project is to be able to provide a single stand-alone binary without any external dependencies, so you can drop the binary on any given machine and start using it immediately without fighting with installation problems. Cool stuff: Our merging algorithm is provably correct. There was a large amount of discussion on how to do merging correctly about a year ago. Our merge algorithm is a result of that discussion (and a lot of hard work by some of our developers). So far, we're the only VCS to adopt this algorithm. monotone (and Bazaar NG) are the only free systems I know of that version renames and directories correctly, so that merging across renames Just Works.
If you're a Java developer, you'll already be aware that this is a pretty fundamental feature in a VCS, and most of them are lacking it. Concepts: monotone versions trees of files and stores snapshots of the trees as the fundamental versioned object. The history is recorded as a DAG (directed acyclic graph) of these tree snapshots, e.g.:

    A
    |
    B
   / \
  C   D
  |   |
  E   |
   \ /
    F

...where each letter represents a snapshot of the versioned tree. Virtually everything stored by monotone is named and verified via the SHA1[0] cryptographic hash function. C and D represent a case where two changes have been made in parallel--we allow this, and record it in the history as shown above. Many other systems require history to be linear. F represents a merge of the two parallel lines of development into a single revision. This might look like branching and merging, and it is, except in monotone this is also allowed _within_ branches. This means a branch might have more than one head (latest revision), and that's okay. You can always choose to merge the branch heads if you wish. A tree snapshot is recorded in a manifest, which contains the name of each file or directory and the SHA1 of the file contents (if it is a file). The manifests are linked together by revisions, which contain the SHA1 of the parent revision, the SHA1 of the manifest for this revision, and a list of changes (including before and after SHA1's of file contents) that transforms the old manifest (that the parent used) into the new one. A revision is named by the SHA1 of the revision text. Note that revisions are chained together via their SHA1--each revision contains the SHA1 of the parent revision. This means that the SHA1 of a particular revision also includes the SHA1 of every revision, and every manifest, and the contents of every file leading up to it in history.
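That hash-chaining property can be illustrated with a toy sketch in shell; sha1sum stands in for monotone's internal hashing, and the 'parent ...' record format here is made up for illustration:

```shell
# Each "revision" hashes its parent's hash together with its own content,
# so the newest hash transitively commits to all earlier history.
rev_a=$(printf 'file contents for A' | sha1sum | cut -d' ' -f1)
rev_b=$(printf 'parent %s\nfile contents for B' "$rev_a" | sha1sum | cut -d' ' -f1)
echo "$rev_b"   # any change to A's contents would change this hash too
```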
So, when I tell you "hey, check out my fix for foo, it's in rev 9ad0ccda9b8f31768378ee6502b18194d3b4a90a", you know that when you check out that revision on your machine it is exactly the same thing (including all prior history) as the revision 9ad0ccda9b8f31768378ee6502b18194d3b4a90a on my machine. Each developer has an RSA public and private keypair and produces a set of signed certificates for each revision they commit. The public key is used to verify that revisions are signed by who they say they are. There is also some basic trust delegation support in the system, and this functionality is currently being reworked to be more flexible and powerful. All this together means that it's impossible for a bug or a malicious attacker to cause unnoticed corruption or sneak security holes into your VCS system (this has happened numerous times to CVS repositories, including one that was a central mirror of the Linux kernel sources), even if they have complete access to the VCS repository data on a central/core project server to do what they like. To share changes, you connect to another user using monotone, and your monotone processes perform some set synchronisation magic to decide which bits the other person needs.
Re: Carter's ramblings...
At 2007-02-14T16:39:35+1300, Joseph Miller wrote: I expect Matt Gregan would hate me forever if I suggest he do a talk on his work with the monotone version control project and the Google Summer of Code :P I'm not much of a presenter, but I could have a crack at it if people were interested in hearing a bit about the project. PS. Matt please don't hate me *wink* Ha, well, apologies in the form of Belgian beer are always accepted. ;-) At 2007-02-14T17:07:46+1300, John Carter wrote: Is there a Monotone Hacker around? Cool!! Very Cool!! Monotone is definitely one of the most interesting of the new breed VC's around. Please Mr Monotone Hacker, Pretty Please! - Tells us more! Sorry for the delay in replying... funny timing actually--I just got back from the Monotone Hackathon/Summit hosted at Google. There's even a set of slides we didn't get to use during a TechTalk (recorded for Google Video) that one of our developers gave during the summit. What would you like to know? Something about Monotone specifically, or distributed version control systems in general? Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: aptitude dependency hell
At 2007-01-23T17:30:25+1300, Volker Kuhlmann wrote: You are hitting the nail square on the head here. It's set to stable (both systems), but changing it to testing makes the aptitude install work just fine! Looks like it's a question of pushing the dependency resolver into setting different priorities. If you want to keep the system running stable and only pull in specific packages from testing, you can leave it set to stable, and then install packages like so:

# aptitude -t testing install linux-image-2.6.18-3-486

This will select linux-image-2.6.18-3-486 and its dependencies from testing, but only those where the versions in stable are not recent enough. With this sort of configuration, upgrade/dist-upgrade and the like will continue to pull packages from stable. It'll also generally continue to upgrade the packages installed from testing, depending on what's also happening in stable with that package, etc. There's a good explanation of the priority system and why things behave the way they do here[0]. If you change the default version to testing, it will consider all packages in testing to be appropriate to upgrade to, and the next upgrade/dist-upgrade you run will turn your stable install into a testing install. Btw when doing this on a running system, grub's menu.lst isn't updated, but on a new install it just installs the new kernel and a corresponding menu.lst. update-grub(8) and kernel-img.conf(5) are responsible for this. It should be run automatically every time a new kernel is installed (and is definitely run when performing this kernel upgrade on a pristine sarge install). Not sure why it didn't run in your case. Check /etc/kernel-img.conf to see if update-grub is configured to run for the postinst and postrm hooks. [0] http://www.argon.org/~roderick/apt-pinning.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: aptitude dependency hell
At 2007-01-22T14:46:36+1300, Volker Kuhlmann wrote: I have a Debian 3.1 system,but am forced to upgrade the kernel to something newer, so I added [...] Now I need a second system like that, but all I get is # aptitude install linux-image-2.6.18-3-486 ... The following packages have unmet dependencies: linux-image-2.6.18-3-486: Depends: initramfs-tools (= 0.55) but it is not installable or yaird (= 0.0.12-8) but it is not installable or linux-initramfs-tool which is a virtual package. What is APT::Default-Release set to (if anything) on each system? You should find it in /etc/apt/apt.conf or inside a file in /etc/apt/apt.conf.d/. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Struggling with this python bit
At 2007-01-18T21:46:54+1300, Nick Rout wrote: title_re = re.compile(r'(.*?) ((?:\d+x\d+)|(?:\d+\.\d+\.\d+))(.*?) ?\((.*)\)') m = title_re.match(entry['title']) What exactly is title_re searching for? Breaking the regex into its parts:

(.*?)                  group 1: anything, followed by a literal space
(                      group 2:
  (?:\d+x\d+)            a pair of digits of the form 123x456, or
  |
  (?:\d+\.\d+\.\d+)      a tuple of digits of the form 12.34.56
)
(.*?) ?                group 3: anything, perhaps followed by a space
\((.*)\)               group 4: anything surrounded by parentheses

Notes:
- 'anything' can also be an empty string
- expressions in parentheses form a group, except when they're non-matching (?:) or escaped literal parentheses \(\)
- the number of digits in the pair or tuple is not limited to just what the examples show
- the optional space would not be included in group 3

See the Python regex syntax documentation[0] for the full story. And assuming that entry['title'] equates to u'Battlestar.Galactica.S03E01E02.HR.HDTV.XviD-SFM' then what should end up in m? The match will fail, and m will be the None object. A valid match would be:

Battlestar.Galactica 10.12.06 (S03E01E02)

With the groups:

1 = 'Battlestar.Galactica'
2 = '10.12.06'
3 = ''
4 = 'S03E01E02'

...but I don't know if that's what the program is actually expecting, it just happens to form a valid match for the regex discussed above. Also what should the m.group(1) return? See above for the annotation of the regex showing where each group starts and ends. If the match succeeds, m.group(1) will contain everything up to the first space from the input string. If the match fails, any call to group() (or any attempt to call a method on m) will fail with an AttributeError exception because m is the None object. Looking at the regex only allows us to talk about the syntax. You'd need to show more of the code before it's possible to describe the semantics of the groups within the match. [0] http://docs.python.org/lib/re-syntax.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
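P.S. You can check the claims above from a shell, assuming a reasonably current Python is installed (invoked here as python3):

```shell
python3 - <<'EOF'
import re
title_re = re.compile(r'(.*?) ((?:\d+x\d+)|(?:\d+\.\d+\.\d+))(.*?) ?\((.*)\)')
# The string from the question does not match at all:
print(title_re.match('Battlestar.Galactica.S03E01E02.HR.HDTV.XviD-SFM'))  # None
# A string of the expected shape matches, with the groups described above:
m = title_re.match('Battlestar.Galactica 10.12.06 (S03E01E02)')
print(m.groups())  # ('Battlestar.Galactica', '10.12.06', '', 'S03E01E02')
EOF
```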
Re: OT:[Fwd: TP: How NSA access was built into Windows]
At 2007-01-19T09:00:21+1300, Gabriella Turek wrote: Not a LInux topic by a long shot, but of interest. http://www.heise.de/tp/r4/artikel/5/5263/1.html To bring this topic a little closer to being about Linux and UNIX, I recommend you read Ken Thompson's 1984 ACM paper Reflections on Trusting Trust[0]. [0] http://cm.bell-labs.com/who/ken/trust.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: RAM for T23
At 2007-01-17T18:23:33+1300, Don Gould wrote: Chris, or anyone else, do you know what ram this thing takes off the top of your head. Look at http://www.thinkwiki.org for all things ThinkPad related, including lots and lots of stuff on getting various aspects of Linux working correctly given the peculiarities of each model. Any ideas what should be paid to boost it from 256 to 512 or a gig? PriceSpy suggests that a 256MB module of the appropriate type is around $73 new. 512MB is around $198, so you're looking at spending $400ish to get 1GB... not exactly cheap. You might save some money using second hand modules, but how much you'd want to trust them is another matter--if you go this route, make sure you run memtest86 for a couple of hours and won't have any trouble returning faulty modules. Actually, the advice to run memtest86 applies for new modules too... better to catch problems early. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: More free boxes
At 2007-01-16T13:21:54+1300, Brett Davidson wrote: I REALLY like Dtrace - would love to see that in Linux/BSD. The Linux answer to DTrace is SystemTap[0]. It's still under development, but it's been usable for some tasks (kernel instrumentation) for quite a while already. [0] http://sourceware.org/systemtap/ Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: OT: Re: More free boxes
At 2007-01-16T13:54:08+1300, Steve Holdoway wrote: Anyone know of anything similar for java? I'm going mad trying to find these memory leaks! And if anyone else tells me there aren't any leaks in Java... I'll get upset (: Well, Sun's Java 6 runtime has DTrace probes built in to it, so if you're on a system with DTrace support, you can instrument your Java application using that. If you're just hunting memory leaks, the NetBeans profiler or the management tools that ship with the JDK (jconsole/jstat/jhat/jmap) might do the trick. There are also lots of other tools in this area... which one will suit you best is something you'll have to find out. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debugging memory leaks [was Re: More free boxes]
At 2007-01-16T14:56:52+1300, Steve Holdoway wrote: jstat has been the most useful, but only points to a leak in the native layer. I suspect the mysql conduit ( kneejerk reaction! ) as it's pretty old, but cannot easily upgrade it, thanks to idiot developers ): Ah well, if it's a leak in native code that's a whole 'nother story. :-) If you suspect there is something funky going on with a Java-native layer, try running with the strict JNI checks enabled (-Xcheck:jni). The checking in the IBM 5.0 runtime seems to find way more problems than any of the others. For general memory debugging there are lots of tools available. Valgrind is pretty much the best tool available (except, perhaps, one or two expensive commercial offerings) for debugging memory-related issues in native code. It can be somewhat tricky to run Java applications under Valgrind, because both of them tend to consume quite a bit of address space, so when you run them together there's a good chance you'll run out of address space on a 32-bit machine. I've had quite a bit of luck running Java VMs under the most recent versions of Valgrind (3.1 upwards) and Valgrind tends to get less confused if you disable Hotspot (-Xint) or the JITer if you're using a non-Sun runtime. It's much easier to deal with if you can reduce the testcase to something smaller than the whole app, obviously. A big disadvantage with Valgrind is that you can't use it in a production environment because the application will run at least 10-100x slower, and there's no way to attach and detach Valgrind from existing processes. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
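Putting the flags mentioned above together, an invocation might look like the following (the jar name is purely illustrative, and expect it to be very slow):

```
# Run a JVM under Valgrind (3.1 or newer) with Hotspot disabled and
# strict JNI checks enabled; follow into child processes if the java
# launcher forks:
valgrind --trace-children=yes java -Xint -Xcheck:jni -jar myapp.jar
```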
Re: Linux packaging Re: Holidays with linux? Share your gems!
, it's nice to see the multi-year infighting over who really maintains RPM is starting to wrap up... there's actually somewhere to send patches now! Oops, I did let one complaint slip out.) Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Linux packaging Re: Holidays with linux? Share your gems!
At 2007-01-12T19:53:07+1300, Volker Kuhlmann wrote: I put money on having seen repackaged source tar files with all Debian patches applied since then, though the evidence may have walked off disk by now. Oops, my memory failed me slightly. Yes, with the very early releases, you used to get a patched tarball plus the patch that generated the tarball from the original source. Applying the patch in reverse (patch -R) would regenerate the contents of the pristine tarball. Pretty messy compared with the current system, but the same total set of information was available. They stopped doing this a long time ago. Buzz was 1996. The existing system of pristine tarballs has been in place for a very long time. You're right that policy isn't the same as application software behaviour, but software which doesn't strongly discourage bad policy reduces its usefulness. Well, the Debian build tools certainly discourage it. Using them in the documented way will result in a source package consisting of a pristine tarball and a patch to apply the Debian-specific changes. Having a policy and adhering to it 100% are two different things. Absolutely. While rpm doesn't enforce this policy either, having had it since day 1 meant there was never any deviation. Rubbish. As long as there are people involved, it can be cocked up. Some may count that as a plus, I count that as irrelevant. If it makes it more difficult to cryptosign the file, it becomes a downside. It doesn't make it any more difficult. It's automated as part of the package building process. Yep, and with rpm I wouldn't have to do all that. Bonus from my point. But you need RPM, or a tool that understands RPM format, to extract the source and patches, which is pretty inconvenient. Much easier to have one line of trust to the distro vendor, though that's probably more a distro than a tool issue. However, rpm -K onefile looks much simpler to me than the hoops you describe. The 'Release' files are signed by the release masters.
You trust these, and the release masters trust the maintainers. This is called a web of trust. The only hoop is establishing the initial trust seed by establishing trust to the release master's key. The same hoop exists for your favourite system too. I was only talking about cryptosignatures for authenticity, anyone can run md5sum against accidental transit damage. You need a trustworthy checksum value supplied by the originator in the first place. I would be interested in your Debian-minded view on that, actually. I'm not Debian minded, I use the right tool for the job. Sometimes, it's not even Linux. Sorry, I'm not going to waste my own time ranting about how much RPM sucks. It won't achieve anything. If you want to see ranting about RPM, feel free to Google for it. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Linux packaging Re: Holidays with linux? Share your gems!
At 2007-01-12T20:19:40+1300, Matthew Gregan wrote: They stopped doing this a long time ago. Buzz was 1996. The existing system of pristine tarballs has been in place for a very long time. To be specific, the next release (Rex, December 1996, the second release of Debian) had switched to the pristine tarball + dsc + diff.gz format for source distribution. -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Linux packaging Re: Holidays with linux? Share your gems!
At 2007-01-06T08:27:30+1300, Volker Kuhlmann wrote: Having a good deal of first-hand experience with dpkg & Co, I can say that dependency resolution works well, handling multiple repositories works well, a source package format is simply non-existent Depends what you mean. The 'source package' is made up of three files--the pristine tarball from upstream, a description file, and a patch file containing distribution-specific changes. There is no single file like the equivalent SRPM, but I don't see that it matters. 'apt-get source', with deb-src sources specified in your sources.list, will download (and optionally extract and compile) a package. 'apt-get build-dep' will install any build-time dependencies of the package you want to build. package verification is cumbersome for package files and non-existent after the package is installed Install-time package verification will finally be enabled by default in the next Debian release (etch). It has been available for a while but previously you had to make an effort to get it to work (i.e. install and configure it). Post-installation verification is provided by the optional debsums package, which merely requires installation and no configuration to use. and searching had some absent feature(s) (have to check my notes here). What were you missing? It probably exists already. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
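For example (using the 'hello' package purely as an illustration; both commands assume deb-src lines are present in your sources.list):

```
apt-get build-dep hello   # install the build-time dependencies
apt-get source -b hello   # fetch the source package and (-b) build it
```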
Re: C problem
At 2007-01-09T09:21:48+1300, David Merrick wrote: Another small problem with C. My C compiler doesn't accept sqrt(expression). It gives an undefined reference to sqrt error; however, it accepts sqrt(9). Any suggestions? You need to link against the math library, e.g. % cc -o foo foo.c -lm The reason it works when you pass a constant value is that the compiler is optimizing the call to sqrt() away as a constant load of the value of sqrt(9.0). Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debian install history?
At 2007-01-09T16:25:18+1300, [EMAIL PROTECTED] wrote: ie I don't want to know every package installed on the system (because that will include the packages supplied by knoppmyth before I started fiddling) - I just want the packages since then. If dpkg's log functionality was enabled in /etc/dpkg/dpkg.cfg, you will find one or more dpkg.log files in /var/log. You might need to use a bit of scripting magic to parse them into a more palatable format. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: cron shutdown
At 2007-01-04T09:33:24+1300, Roger Searle wrote: Having created a cron job (with vi, no less!) to shut down this machine at 8pm each night I am not sure why it will not run. I have in root's crontab: nine:/home/roger # crontab -l 0 20 * * * shutdown -h -t secs 1 Use an absolute path for shutdown, e.g. /sbin/shutdown. Jobs run by cron(8) have a restricted path which is often just /bin and /usr/bin. The man page for crontab(5) on your system has a bit more detail. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
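For example, the corrected entry (note also that '-t secs 1' in the original looks like a literal copy of the synopsis from shutdown's man page; a plain '-h now' is the usual form):

```
0 20 * * * /sbin/shutdown -h now
```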
Re: Auto-convert many wordperfect files to Openoffice?
At 2006-12-28T16:43:43+1300, Volker Kuhlmann wrote: Obviously there must be a functional import filter in OO, but for batch conversion it's necessary to have that as a stand-alone program. Does something like this exist (google doesn't show me)? Are there alternative solution(s)? First hit on Google looks like it'll do what you need: http://www.xml.com/pub/a/2006/01/11/from-microsoft-to-openoffice.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Debian's md5sum
At 2006-12-07T08:29:06+1300, Volker Kuhlmann wrote: Question 1: What is the official way to put a usable /usr/bin/md5sum into place on Debian? Googling results in a gazillion hits on everything else. The fact that dpkg provides md5sum is (now) considered a bug. If you have a recent enough version of dpkg (1.13.20) and coreutils (5.93-1), you'll find that /usr/bin/md5sum is the coreutils version. If you're stuck with older versions of the packages, you can find the coreutils version at /usr/bin/md5sum.textutils. (textutils, fileutils, shellutils are all now included in the coreutils package) Interesting question 2: Why does someone put a retarded key shell program into a default system place to save 9344 bytes, yes ninethousandandafew bytes(!) of disk space? Not sure what you're referring to here. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: CT - what does fork mean?Re: [OT] Perl Question
At 2006-12-03T13:43:13+1300, Don Gould wrote: I've seen reference to fork in a number of posts now and don't understand what it means. It's the way to create new processes on *IX systems. man 2 fork perldoc -f fork Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: mouse pointer speed
At 2006-11-28T22:25:28+1300, Volker Kuhlmann wrote: I got a new mouse with 4 buttons because the wheel as middle button wasn't doing my finger(s) much good no more. However the new rodent moves much faster over the screen, too fast for precision control on window frames for example. If it's a USB mouse, you can try playing with the mousepoll parameter of the usbhid driver. The default value is usually 10ms. e.g. # rmmod usbhid # modprobe usbhid mousepoll=2 Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: mouse pointer speed
At 2006-11-29T19:21:55+1300, Volker Kuhlmann wrote: If it's a USB mouse, you can try playing with the mousepoll parameter of the usbhid driver. The default value is usually 10ms. Yes I use it on USB. I can't see where the default is being set or what the default is. Values of 3 and above make no difference, 1 and 2 do have an effect, but it's a bit of a weird one. 2 isn't much different, 1 does slow things down a little bit but it slows down fast movements much more than slow ones. For the time being it's at least an improvement. Thanks Matthew! You can query the default using lsusb, e.g. for my mouse at bus 3/device 2: % lsusb -v -s 3:2 | grep bInterval bInterval 10 Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Threads - was Re: === 2007 Meetings ===
At 2006-11-15T11:52:22+1300, Derek Smithies wrote: On Wed, 15 Nov 2006, Carl Cerecke wrote: Threads make programs hard for programmers to reason about. Ultimately this is true of any asynchronicity in a program, not just that caused/provided by threads. When your program is handling multiple disparate real world events (multiple sockets, timers etc) as it communicates with external programs on a different computer, things like determinism disappear. Long disappeared. True, but there is a big difference in the difficulty of reasoning about program behaviour between a single-threaded program dispatching around a call to select() (or some other event dispatching mechanism) and an arbitrarily threaded program. Now, when you get computers with multiple cores, and we are seeing more computers with multiple cores, wouldn't it be nice to write code to take advantage of the cores? Sun have computers with tens of cores. My current desktop has two cores. With threads, I can easily take advantage of all the cores the computer will give me. You can do that with processes, too. The major difference between threads and processes is that threads default to shared-everything, and processes default to shared-nothing. Unfortunately, it's almost impossible to start with threads and move towards a shared-nothing design, because you've given up all of the protection processes give you, and you can't add a lot of that back without special hardware support. Interestingly, Sun have a system called OpenMP, which handles threads nicely - the compiler actually works out the parallelism. The writer gives some guidelines, and the compiler does the rest. OpenMP is really only useful for making trivially parallelizable code run in parallel. Most problems are not trivially parallelizable. There's nothing magic about OpenMP. Also see Tridge on threads: http://lists.samba.org/archive/samba-technical/2004-December/038301.html Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Booking for Microsoft Talk
Re: Join images
At 2005-01-24T09:21:40+1300, Roy Britten wrote: Boy, it sure does consume RAM. And it's the only software that has managed to crash my debian box: kernel BUG at page_alloc.c:235! Unable to handle kernel NULL pointer dereference What kernel version are you using? Now is probably a good time to check your memory; the kernel should never crash due to the behaviour of an application. Given where the kernel BUG happened (most of the BUG() calls in page_alloc.c are sanity checks), and the fact that ImageMagick was using a large amount of memory at the time, there's a good chance that your memory is flaky. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Clug
At 2005-01-22T20:44:48+1300, Robert Himmelmann wrote: Is there a good emacs-book somewhere on the net ? I have read the inbuild tutorial but it covers only the basics. http://www.gnu.org/software/emacs/manual/ http://www.xemacs.org/Documentation/21.5/html/ Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Storing a password
At 2005-01-22T23:24:53+1300, Andrew Errington wrote: To log in to this particular library you need your library card number and a PIN. It is sent to the server as a POST form using x-www-form-urlencoded, which implies it is not encrypted during transmission. Probably not, though it's possible that the details are encoded in some form. Is the site using SSL at least? If not, then other than for the sake of 'doing it right' there's not much point worrying about the security of the user's details on your own disk when you're sending it unencrypted across the Internet every day. I am storing the library card number and PIN in a hidden text file in the user's home directory, with only their account having read privileges. I need the PIN as plain text to send to the server, but I don't know enough to make it safer (assuming it needs to be). You can't make it a great deal safer. At some point you need an unencrypted copy of the user's details to send it to the server. It's probably a good idea to programmatically check the file permissions to ensure they are strict enough, i.e. read access for the user only. You could encrypt the details on disk, but at some stage you need to store a secret somewhere to decrypt them, so you're just shifting the problem. If you wanted to be fancy, encrypt the user's details on disk and make the program persistent (i.e. run all the time, don't start it via cron). When the program is first started, the user would be required to enter a password to decrypt the details, which would then be stored in memory only (this is how ssh-agent works). Of course, if you don't trust the other users of the machine where the program is running, having the unencrypted data only in memory is not really any more secure than storing it on disk unencrypted. One other thing, make sure you don't pass the username and password details on the command line to the tool you're using to fetch the page.
If you do this, other users on the machine can find out your username and password details easily using tools such as ps(1). Most decent tools that accept usernames and passwords as command line arguments will also let you use environment variables or files as the data source. PS I'm using curl, grep and sed so far. Tried to fit awk in, but not at the moment, although there is still plenty to do. It sounds like you've got the basics working already, but it might be worth considering using a tool that can parse HTML properly. If the page is simple or very unlikely to change, you can get away with using things like grep, sed, and awk, but to do any serious parsing you're wasting your time with these tools. The common scripting languages such as Perl, Python, Ruby, TCL, etc. all have HTML parsing modules available. There may be standalone command line tools that munge HTML into something easier to handle with grep, sed, or awk, but I don't know of any off the top of my head. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Apt gone AWOL
At 2005-01-21T17:49:31+1300, Steve Holdoway wrote: Good advice, but... Just this afternoon I assumed that my fresh install of FC3 had hung whilst updating using the default up2date graphic interface. When I aborted it and used yum upgrade/update ( how apt-esque! ) I found that it was actually 75% through the process. No harm's done in aborting, but nasturtiums may be being cast. That's why I suggested looking at the resource utilization of the update processes and, if need be, attaching with strace or gdb and looking at what's going on. Who knows, you might even find the cause of the problem and be able to submit a useful bug report to the developers. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Run process at startup
At 2005-01-21T11:17:31+1300, Andrew Errington wrote: a) Run automatically when the server is rebooted (with a current uptime of 228 days I am loathe to test that...) Take a look at the skeleton script in /etc/init.d/skeleton for a basic Debian-style init script. Once you've created the appropriate init script(s), link them into the runlevel startup directories as appropriate using update-rc.d(8). Make sure you use start-stop-daemon(8), because it already performs a lot of the work you need to do. To do it properly, you should look at modifying your programs so that they daemonize themselves and drop all unneeded privileges. b) Restart automatically if they ever stop running You have to be careful doing this so that you don't get into a situation where the program dies immediately and ends up in a tight respawning loop. You also have to make sure your program safely handles any previous unclean terminations. If the programs are relatively stable, you shouldn't need to automatically restart them on failure, because failures will be rare and usually suggestive of a serious problem. I am sure there are a number of common techniques for this, but is there a 'best' one? Should the programs run as root, or run as me? Does it _need_ to run as root? It's extremely likely that it doesn't, so don't run it as root. Create a new unprivileged user with logins disabled, the shell set to /bin/false, etc. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
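A minimal sketch of such an init script, modelled on the pattern in /etc/init.d/skeleton; all the names below are placeholders, and error handling and the other standard actions (restart, status) are omitted:

```
#! /bin/sh
# /etc/init.d/mydaemon -- minimal Debian-style init script sketch
DAEMON=/usr/local/bin/mydaemon
NAME=mydaemon
PIDFILE=/var/run/$NAME.pid

case "$1" in
  start)
    # --chuid drops privileges to an unprivileged user
    start-stop-daemon --start --quiet --pidfile $PIDFILE \
        --chuid mydaemonuser --exec $DAEMON
    ;;
  stop)
    start-stop-daemon --stop --quiet --pidfile $PIDFILE
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac
```

Once the script is in place, 'update-rc.d mydaemon defaults' links it into the runlevel directories.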
Re: Apt gone AWOL
At 2005-01-21T14:59:26+1300, Douglas Royds wrote: Another half hour down, and no visible change. I'm going to have to nuke it. Obviously it's not visibly making any progress, but is it using any CPU, accessing the disk, or anything of the sort? It might be useful to attach to the hung process (mozilla, was it?) with gdb or strace and see what's going on. What should I do? Kill the hung process and the packaging system will continue configuring all of the packages that don't depend on the package that failed to configure. At that point, you're likely to have an inconsistent system to some degree. You'll need to work out why the package hung during the configure stage, find a solution, and then reconfigure it and any other 'broken' packages (which consists of running apt-get in 'fix broken' mode). Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
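Concretely, the usual recovery sequence (run as root) is:

```
dpkg --configure -a   # finish configuring any half-installed packages
apt-get -f install    # 'fix broken': resolve and repair broken dependencies
```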
Re: Ubuntu install dialup
At 2005-01-21T15:12:26+1300, yuri wrote: I would never put a desktop OS on a box that has a modem *and* a NIC. The only reason a box would have both is when it's acting as a gateway, in which case a firewall OS is called for. Rubbish. What about, for example, a desktop system that's attached to a LAN but uses a modem for dial-up VPN access occasionally? The same basic configuration could be two machines connected via the LAN (e.g. to share files) with one of the machines using the modem for Internet access? Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: sed How do I replace whole word.
At 2005-01-19T09:02:28+1300, C. Falconer wrote: Why the \b ? '\b' matches word boundaries, and Ross already said why... without getting hits on words like cattle. $ echo apple cat dog cattle | sed s/cat/pussy/g apple pussy dog pussytle Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Python Emacs
At 2005-01-17T21:17:37+1300, Robert Himmelmann wrote: I am learning Python and trying to improve my emacs-skills. When I edit a .py-file in emacs and press C-c C-c (Execute Buffer) I get the message Wrong type argument: sequencep, cpython. Anyone know what that means? There are bugs for this problem in the bug databases for python-mode[0] itself, the Debian package and probably elsewhere. It's probably being caused by the fact that your Python program has a shebang at the beginning specifying a symlink (or some other form of indirection, e.g. using /usr/bin/env) to the Python interpreter. As a temporary solution, specify a direct path to a Python interpreter. The problem might be fixed in the latest release, but I haven't tested it personally. [0] http://sourceforge.net/tracker/index.php?func=detail&aid=1021885&group_id=86916&atid=581349 Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: sed How do I replace whole word.
At 2005-01-18T15:37:17+1300, Ross Drummond wrote: The info page suggests that sed -e s/\bcat\b/pussy/g will work, but it doesn't. You need to enclose the sed command in quotes, otherwise the shell thinks you're using the backslash to escape the 'b', and strips it from the input before it reaches sed(1). $ set -x $ echo "My cat is a ninja robot in disguise." | sed -e s/\bcat\b/pussy/g + echo 'My cat is a ninja robot in disguise.' + sed -e s/bcatb/pussy/g My cat is a ninja robot in disguise. $ echo "My cat is a ninja robot in disguise." | sed -e 's/\bcat\b/pussy/g' + echo 'My cat is a ninja robot in disguise.' + sed -e 's/\bcat\b/pussy/g' My pussy is a ninja robot in disguise. $ set +x Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Paradise.net problems
At 2005-01-16T22:43:50+1300, Wesley Parish wrote: I've got a problem with their interpretation of the POP3 command to delete emails on the server once they've been downloaded. Because they're not doing anything of the sort. What, exactly, is the problem? You need to be specific. I assume you've already read the appropriate RFCs (at least RFC 1939), and realise that the DELE command, if issued while in the TRANSACTION state, marks a message as deleted, but that the delete action will not take effect unless the POP3 session enters the UPDATE state. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: linux on the desktop making inroads...
At 2005-01-16T22:23:36+1300, Wesley Parish wrote: I concluded during the nineties that the general purpose computer was somewhat clumsily designed, and should instead be split up into function domains, with each function domain being assigned one primary function and being made to do it well; During the nineties, when lots of now-defunct companies were hyping the earl{y,ier} attempts at set-top boxes for the average user? Insightful. with the bus being replaced by a network. Preferably something like Fibre Channel, with its several supported protocols - HIPPI, SCSI, TCP/IP, etc. HIPPI? Over Fibre Channel? Seems like an unusual and unrealistic choice, particularly for a product aimed at the home market. In fact, even the suggestion of Fibre Channel seems unusual and unrealistic. Which variant, copper or optical? It seems like expecting a home user to carefully handle delicate optical cable for their simple-to-use computer/appliance is a bit much to ask, not to go into the cost of using FC as a solution for this problem domain. That way, much of the configuration kerfuffle would cease to exist, since nodes on a network don't need to configure other nodes in order to communicate with them. Hand waving doesn't make any of the hard problems go away. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: kernel 2.6.10 - is it faster?
At 2005-01-14T20:19:42+1300, Robert Fisher wrote: On Fri, 14 Jan 2005 15:04, Nick Rout wrote: I was just about to upgrade. Can someone suggest a test which I could do before and after? If you're not already set up with a benchmark suite and experienced with both the benchmark and with benchmarking in general, the results of any sort of tests at this late stage would be of dubious value. If you just want to play, take a look at lmbench2 (you need around 25 runs on each kernel to get enough results to find variance with any reliability), bonnie/bonnie++, iozone, dbench, AIM7/AIM9, chat, Postal, etc. Also, performing 'real world' tasks like (if you're a kernel developer, or an obsessive tweaker) kernel compiles, desktop startup times, etc. can be interesting. Be prepared to spend a lot of time on this and carefully document everything you've done if you want to produce any useful results. (I'm a little surprised to find a Gentoo user interested in performing benchmarks, instead of the usual wild speculation. :-)) Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: File associations for Firefox in KDE
At 2005-01-15T00:16:40+, Keith McGavin wrote: The helper config file is ~/.mozilla/default/ad49qtsw.slt/mimeTypes.rdf While it's fairly minor in this case, it's a good idea to be careful about what information you're exposing about your specific system configuration. In this case, anyone who cares to search the list archives can now find out the current name of your randomly salted Mozilla profile directory--the name is supposed to be secret for security reasons, to prevent malicious attackers from accessing files in your browser profile. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Fedora core 3
At 2005-01-14T18:53:04+1300, Jason wrote: I know this isn't the fedora fan club but.. Don't let the naysayers put you off. what is the best way to update this distro (dependencies?)? I think terms like apt-get and yum are purely buzz words to me. I am happy to read a guide if someone knows a good place to find one. Start with this stuff: http://linux.duke.edu/projects/yum/ http://www.fedorafaq.org/ More importantly how I can change where it gets the updates from, say somewhere in NZ to give the www a rest. The Fedora FAQ, specifically the 'Installing Software' section, will explain it. NZ Fedora updates mirror: ftp://ftp.wicks.co.nz/pub/linux/dist/fedora/ Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: OT Horse activity summary sed
At 2005-01-13T12:12:19+1300, Rik Tindall wrote: - probably better to have been running this: sed -e 's/few/two/' sedsort.txt > sedsort.txt Only if you want to turn sedsort.txt into an empty file... [EMAIL PROTECTED] rik $ sed -e 's/figner/finger/' -e 's/few/two/' sedsort.txt > sedsort.txt [EMAIL PROTECTED] rik $ cat sedsort.txt [is now empty :-/] ...as you've just discovered. The reason this happens is that the shell opens the files for redirection before executing the new process (sed, in this case). The '>' redirection is telling the shell to create a new file, or truncate an existing one, and begin writing to the start of the file. By the time the sed process runs, it's going to be reading input from an empty file and writing its output (none) to the same empty file. Assuming you're using a Bourne-like shell (I'd guess you're using bash), you can avoid this by enabling the 'noclobber' shell option, i.e. 'set -o noclobber'. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: OT Horse activity summary sed
At 2005-01-13T13:09:33+1300, Steve Holdoway wrote: Another good reason I would have written the command cat sedsort.txt | sed -e 's/figner/finger/' -e 's/few/two/' > sedsort.txt For the reasons just discussed for Rik's case, your solution does not work either. Pipes are a stream, so the shell is not reading the entire 'sedsort.txt' file into a buffer before pushing it down the pipe to sed's stdin. I do it mainly as the left to right flow along the command line makes more sense to me, but it has the added advantage that sed is processing stdin, not the input file directly. Except, in the case above, stdin is connected, via a pipe, to the open file 'sedsort.txt'... which has just been clobbered by the '> sedsort.txt' redirection. Now, if we'd all have been using VMS... this wouldn't be a problem (: Yeah, we'd be too busy staring blankly at a wall and drooling. ;-) Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: Mandrake 10.1 Official and autoconf/automake
At 2005-01-13T11:57:49+1300, John Carter wrote: gcc is there, you just can't build most packages from original source. Which packages? Sanely packaged source tarballs come with the auto* tools output files already generated so that the end user doesn't need them installed to do a normal compile. You usually only need the auto* tools if you're building from a copy of the source checked out of revision control, since most developers don't place generated files under revision control. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
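For completeness, the usual incantation when you do need the auto* tools (i.e. building from a revision-control checkout with no generated files present):

```
autoreconf --install   # regenerate configure, Makefile.in, aclocal.m4, etc.
./configure && make
```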
Re: Changing UI fonts of commercial X11 apps
At 2005-01-11T21:29:24+1300, Hugo Vincent wrote: Is it possible to change the default UI fonts of commercial proprietary applications It depends, and it's usually application or toolkit specific. If the application is using a modern configurable X toolkit, you can often affect its font decisions by altering the global toolkit configuration. Of course, how this is done depends on the toolkit. For many of the applications that use older toolkits like Athena/Motif/Xaw/etc., you can control a number of toolkit and application settings using X resources. These are stored on a per-user basis in either a ~/.Xdefaults or ~/.Xresources file, and on a global basis either in application and toolkit specific files in your distribution's app-defaults directory, or in a similar desktop manager-specific directory. These resources need to be loaded into the X server's resource database before they take effect; this can be done in one of the login scripts, or on the fly, and is often handled automatically by modern distributions. For more on X resources, refer to the following man pages: xrdb(1x), appres(1x), editres(1x), and listres(1x). like Acroread For acroread, there should be an example X resource file available on your system (probably in an app-defaults directory) named AcroRead. This may reveal the specific resource names you need to alter to change the font size. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
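A sketch of what such an override can look like. The resource name below is purely hypothetical; check the AcroRead app-defaults file on your system for the real names before using it:

```
! ~/.Xresources fragment (hypothetical resource name)
AcroRead*fontList: -*-helvetica-medium-r-normal--12-*

! Load it into the X server's resource database with:
!   xrdb -merge ~/.Xresources
```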
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T09:01:51+1300, Steve Holdoway wrote: It might be an idea at this time to point out that by default every process opens 3 descriptors on startup... 0 = stdin 1 = stdout 2 = stderr It depends how the process was started, and (depending on what you're doing), it's unsafe to assume that any of stdin, stdout, or stderr are open when your process starts. A couple of open source projects have recently discovered this the hard way by corrupting user data. ( or is it just those written in C - I forget - but most of the kernel is anyway... ) The language the executable (or kernel) was written in doesn't really play a part in this--file descriptors are an OS (Unix and others) concept. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T10:09:47+1300, Carl Cerecke wrote: Do you mean the programs in question had no error handling for descriptors 0,1,2 because they expected them to be open and set up correctly? An explanation by example:
1. Process closes stderr.
2. Process exec()s program with previously mentioned bug (i.e. most programs). Program inherits existing file descriptor table.
3. Program open()s critical data file. open() returns first free file descriptor, i.e. 3.
4. (Later) Program writes to stderr (e.g. perror(0)), which results in a write() to the standard fd for stderr, fd 3.
5. Boom.
Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
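Collapsed into a single process for brevity (the thread's scenario has the close() happen before an exec()), a minimal C sketch of this failure mode. It uses fd 2 for stderr -- the step numbering above has an off-by-one that is corrected later in the thread -- and the file name is illustrative:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Demonstrates the failure mode: once stderr (fd 2) is closed, the
 * next open() hands out fd 2, so a later write to "stderr" scribbles
 * over the data file instead. */
int clobber_demo(const char *path)
{
    close(STDERR_FILENO);      /* the descriptor table now has fd 2 free */

    /* open() returns the lowest free descriptor -- here, fd 2 */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd != STDERR_FILENO)
        return -1;

    /* an error path writes to fd 2, believing it is still stderr;
     * the message lands in the critical data file */
    const char *msg = "oops: something went wrong\n";
    write(STDERR_FILENO, msg, strlen(msg));

    close(fd);
    return 0;
}
```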
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T13:51:54+1300, Carl Cerecke wrote: Isn't that what dup2 is for? For most cases, yes. Without being able to close and reopen descriptor 0/1/2 there are still things you couldn't do using only dup2(), but the only example I can think of right now isn't that great... Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T11:33:32+1300, Volker Kuhlmann wrote: Thanks Matthew, I was wondering whether it had to do with descriptor inheritance. One could call it a bug in the process though, to call programs with insane descriptors. Nevertheless, what would be the correct way for testing in a program whether the standard descriptors are sane? Use fstat() (or your language's equivalent) to check the file descriptors you care about (in increasing order), then reopen them as appropriate, e.g.: if (fstat(STDERR_FILENO, &stat_buf) == -1) if (open("/dev/null", O_WRONLY) == -1) abort(); (Minor: stderr is fd 2) At 2005-01-12T13:00:18+1300, Carl Cerecke wrote: Presumably you're actually talking about fd 2 in steps 3 4. Thanks, yeah, my brain experienced an off-by-one. This could be partly solved by the OS by never reusing file descriptors lower than 3. It could be, but that's probably not the right solution. It's extremely useful to be able to reopen stdin/stdout/stderr however you please, e.g. easy logging to a file by writing any log output to stderr, then reopening stderr as a log file, or think of what the shell does with /dev/null. If open()-like calls were prohibited in the way that you suggest, you'd need to create a new API for remapping stdin/stdout/stderr. perror would then probably fail silently, but fprintf(2,xxx) (which I'm guessing perror uses) would correctly return -1. glibc certainly uses a technique that can be approximated as fprintf(2, ...), and it doesn't check the return value of fprintf. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
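Fleshed out into a self-contained sketch, checking all three standard descriptors in increasing order (minimal error handling, not production-hardened):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Ensure fds 0, 1 and 2 are open before doing anything else, pointing
 * any that are closed at /dev/null. Because open() returns the lowest
 * free descriptor, an increasing loop lands each open() on exactly the
 * fd that fstat() reported closed. */
void sanitize_std_fds(void)
{
    struct stat sb;
    for (int fd = 0; fd <= 2; fd++) {
        if (fstat(fd, &sb) == -1) {
            if (open("/dev/null", fd == 0 ? O_RDONLY : O_WRONLY) != fd)
                abort();
        }
    }
}
```

Call this first thing in main(), before any open() that could be handed a standard descriptor by accident.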
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T13:59:06+1300, Volker Kuhlmann wrote: That creates a copy of an existing file descriptor, but doesn't modify an existing one. Copying one which has already been closed isn't likely to do anything useful. Right: to be effective, Carl's proposed changes would also require close() to refuse to close the standard descriptors. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
Re: TIP: What files does this command try to open (was Re: Changing UI fonts of commercial X11 apps)
At 2005-01-12T14:44:59+1300, Matthew Gregan wrote: Process A: close(STDERR_FILENO); execl("/path/to/other/processb", "processb", (const char *)0); Which then becomes Process B, and does: open("/path/to/desired/stderr.log", O_WRONLY); I forgot to mention that I'm aware this particular case can still be handled by Process B using dup2(). As I said, it's not particularly imaginative... I can't think of a _good_ example where dup2() can't be used off the top of my head, but I have a feeling that there may be at least one good case to serve as an example--if I remember it, I'll post it. Process A would is then forced to prepare the file descriptors before Oops; s/would // Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
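For completeness, a sketch of how Process B can use dup2() to cope no matter what state fd 2 arrived in: open the log anywhere, then dup2() it onto the standard descriptor (the log path is illustrative):

```c
#include <fcntl.h>
#include <unistd.h>

/* Install a log file as stderr, regardless of whether fd 2 was open
 * or closed when the process started. dup2() atomically closes any
 * existing fd 2 and makes it refer to the log file. */
int install_stderr_log(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd == -1)
        return -1;
    if (fd != STDERR_FILENO) {
        if (dup2(fd, STDERR_FILENO) == -1) {
            close(fd);
            return -1;
        }
        close(fd);  /* the original descriptor is no longer needed */
    }
    return 0;
}
```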
Re: OT: Python book Python Programming for the Absolute Beginner
At 2005-01-11T07:39:24+1300, Carl Cerecke wrote: yes and no. Syntax is the biggest hurdle to learning programming. You can postpone exposure to the complexities in python for longer than you can do so in Java. I'm not sure I entirely agree. Syntax can be a big hurdle, especially for languages that are filled with syntactically important non-alphanumeric characters (I wonder how APL rates here), but I'm not sure that it's the single biggest hurdle. I'd be curious to see the results of any studies done in this area... On the topic of the subtleties of programming, here's a little challenge for someone: write a simple, lazily initialized, thread-safe singleton in Java (or C++ if you prefer). Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]
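The challenge is aimed at the double-checked locking pitfalls in Java and C++; for contrast, here is a sketch in C (the language used elsewhere in this thread), where pthread_once() handles the lazy, thread-safe case directly (the config struct is illustrative):

```c
#include <pthread.h>
#include <stdlib.h>

/* A lazily initialized, thread-safe singleton. pthread_once()
 * guarantees the initializer runs exactly once, even when several
 * threads make the first call concurrently -- the ordering subtlety
 * that makes the naive Java/C++ double-checked version a trap. */
struct config {
    int initialized;
};

static struct config *instance;
static pthread_once_t once = PTHREAD_ONCE_INIT;

static void init_instance(void)
{
    instance = malloc(sizeof *instance);
    if (instance)
        instance->initialized = 1;
}

struct config *config_singleton(void)
{
    pthread_once(&once, init_instance);
    return instance;
}
```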
Re: Spoiler RE: strace and easter eggs (was Re: TIP: What files does this command try to open (was ...))
At 2005-01-12T17:01:50+1300, C. Falconer wrote: (where 26320 was likely to be the next pid) Heh, good luck guessing that on a hardened/security conscious operating system. Cheers, -mjg -- Matthew Gregan |/ /|[EMAIL PROTECTED]