Re: Re: Problem upgrading libc6 on sid
> If you are working from a remote machine, the problem might be restarting
> services, which kills your connection.

The problem is that it did not behave this way on any of the other containers on either of my hosts. These are all via ssh connections, the same as I have been doing since 2002.

--b

> Regards,
> Marko
Problem upgrading libc6 on sid
I have an openvz container on which I am having a hard time upgrading libc6. Whether I use dpkg -i, apt-get, or aptitude, I get the same result:

(Reading database ... 56136 files and directories currently installed.)
Preparing to unpack .../archives/libc6_2.21-7_i386.deb ...
Checking for services that may need to be restarted...
Checking init scripts...

And then it hangs forever. All of the other containers on this machine have been upgraded. Any suggestions on why this particular one will not upgrade?

--b
Re: search engines (hijacked from rapidly proliferating sess files )
On Sat, May 24, 2014 at 3:00 PM, Ralf Mardorf ralf.mard...@rocketmail.com wrote:

> +1 for StartPage. I'm a StartPage user, I don't use Google anymore, but to
> be fair, Google shows suggestions while you're typing, so assuming you
> misspell or don't remember some terms, Google is easier to use than
> StartPage. IOW, because of this feature you might get better results when
> using Google.

Blessing and a curse. Google's predictive search has its own subclass of meme (http://www.telegraph.co.uk/technology/google/6161567/The-20-funniest-suggestions-from-Google-Suggest.html, http://www.halestorm.co.uk/articles/top-ten-funniest-google-instant-searches). I also read an article where someone was questioned because of Google Instant, or whatever they call it, predicting their search while they were typing. Normal Google searches can even get you a visit from the police (http://www.theverge.com/2013/8/1/4580654/michele-catalano-google-search-pressure-cookers-backpacks-bomb-scare). I'll stick to StartPage.
Re: Accelerated driver for Intel video driver?
Hi... OP here... Does anyone have any suggestions on the original problem with video on the Intel drivers on this Dell laptop?

Thanks,
--b

On Thu, May 22, 2014 at 6:22 AM, Brian a...@cityscape.co.uk wrote:

> On Thu 22 May 2014 at 12:49:43 +0300, Pertti Kosunen wrote:
>> On 22.5.2014 11:15, Tom H wrote:
>>> You must've missed the recent thread about systemd-sysv being pulled in
>>> by certain dependencies and replacing sysvinit with systemd.
>> What was the name of this thread?
> https://lists.debian.org/5377a358.1040...@aol.com

--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/22052014112136.87efcaf26...@desktop.copernicus.demon.co.uk
Re: Accelerated driver for Intel video driver?
Thanks Filip,

On Tue, May 20, 2014 at 2:08 AM, Filip fi...@fbvnet.be wrote:

> On Mon, 19 May 2014 19:18:38 -0400
> Intel video is normally working out of the box, but for newer hardware
> you also need up to date software. So if you are on 7.5, try upgrading
> to Jessie.

I apologize, I thought I had specified: the machine in question is running a relatively recent version of sid. I have not been upgrading as much until some of this chaos with systemd settles down. Thus I am running xserver-xorg-video-intel 2:2.21.15-1+b2 amd64.

> For example, I couldn't get 3d acceleration working on the integrated
> adapter of my i5-4570S on Debian 7.3, and upgrading to testing solved it.
> You can check if it's the same problem with:
>
> $ glxinfo | grep renderer

The output of this is:

OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile

> If there is something with LLVM in the output, it's not working and
> falling back to software emulation. While you're at it, you can also try
> glxgears.

glxgears gave me:

Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate.
Xlib: extension NV-GLX missing on display :0.
334 frames in 5.1 seconds = 65.750 FPS
320 frames in 5.3 seconds = 60.041 FPS
320 frames in 5.3 seconds = 60.041 FPS
320 frames in 5.3 seconds = 60.044 FPS
320 frames in 5.3 seconds = 60.038 FPS

...and the display was a little choppy. Also, the Xlib error appears in both the glxgears and the glxinfo output... It's looking for NV-GLX? I don't have an xorg.conf on this machine...

Thanks,
--b
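For scripting the check Filip describes, the renderer line can be tested mechanically. A small sketch (the renderer string is hard-coded from the output above, since glxinfo needs a live X session; the llvmpipe/LLVM patterns are the usual software-rasterizer markers):

```shell
# Captured above; with a running X server you would instead use:
#   renderer=$(glxinfo | grep 'OpenGL renderer string')
renderer='OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile'

# LLVM/llvmpipe in the renderer string means Mesa fell back to software
# rasterization; anything else is the hardware driver.
case "$renderer" in
  *llvmpipe*|*LLVM*) verdict="software rendering" ;;
  *)                 verdict="hardware rendering (driver OK)" ;;
esac
echo "$verdict"
```

Here the Haswell string carries no LLVM marker, so the driver side looks fine; the stray NV-GLX message is usually just a client library probing for the NVIDIA GLX extension and is typically harmless on Intel hardware.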
Accelerated driver for Intel video driver?
I hope I don't run afoul of the self-imposed list monitors or otherwise fan a flame war... :) I have been using nvidia cards in my computers for so long that I haven't really kept up with the state of Intel...

My problem is that when I try to play a video on this machine, either with mplayer or vlc, the video starts, and I will get about 1/2 of the frame that is actually playing, usually at the top. The audio plays normally. This happens whether in a window or full screen.

The machine in question is a Dell Inspiron 5537 laptop with 6GB of RAM that has an Intel Haswell ULT video card in it. lspci shows me:

00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        Subsystem: Dell Device 05e9

Looking through /var/log/Xorg.0.log, it appears to be loading several modules, including:

[ 36918.029] (II) LoadModule: glx
[ 36918.127] (II) LoadModule: intel
[ 36918.168] (II) LoadModule: vesa
[ 36918.195] (II) LoadModule: modesetting
[ 36918.197] (II) LoadModule: fbdev
[ 36918.219] (II) LoadModule: fbdevhw
[ 36918.248] (II) LoadModule: fb
[ 36918.293] (II) LoadModule: dri2
[ 36919.740] (II) LoadModule: evdev
[ 36919.926] (II) LoadModule: synaptics

Other apps seem to behave normally. I can play youtube videos, for example. Please let me know what other information I can provide.

Thanks,
--b
Why is aptitude so uninstall-happy?
I have seen this before, but I saw it again the other day, and was just curious. Why is aptitude so eager to uninstall packages instead of simply upgrading a supporting package? I was upgrading libssl1.0.0 the other day as a result of a security finding, and did the whack-a-mole package upgrade, e.g. aptitude install libssl1.0.0. As a result, it gave me "The following packages have unmet dependencies" and offered to uninstall several packages (like libssl-dev). Why would it do this by default rather than just upgrading them? Here is an example from my workstation:

# aptitude install libssl1.0.0
The following packages will be upgraded:
  libssl1.0.0 libssl1.0.0:i386
2 packages upgraded, 0 newly installed, 0 to remove and 494 not upgraded.
Need to get 2,523 kB of archives. After unpacking 9,216 B will be used.
The following packages have unmet dependencies:
  libssl-dev : Depends: libssl1.0.0 (= 1.0.1g-3) but 1.0.1g-4 is to be installed.
The following actions will resolve these dependencies:
     Remove the following packages:
1)     libssl-dev
Accept this solution? [Y/n/q/?] n
The following actions will resolve these dependencies:
     Upgrade the following packages:
1)     libssl-dev [1.0.1g-3 (now) -> 1.0.1g-4 (unstable)]
Accept this solution? [Y/n/q/?]

It would seem more logical to go ahead and upgrade the -dev package. And on the servers at work, as I recall, one of the things it wanted to uninstall was php5. Why not just upgrade packages that fail dependencies if possible? I'm not even suggesting installation of a bunch of new packages, but it seems to me a pretty straight-line conclusion that if the package needs to be upgraded, and the -dev package is installed, then the -dev package is going to need to be upgraded too. Thoughts?

--b
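A workaround that usually avoids the removal offer (a sketch, not aptitude's documented behavior): name the pinned dependent package in the same command, so the resolver can upgrade both in one transaction. The strict versioned dependency driving the conflict is also easy to sanity-check by hand:

```shell
# Hypothetical session: listing both packages lets aptitude upgrade the
# -dev package alongside the library instead of proposing its removal:
#   aptitude install libssl1.0.0 libssl-dev
#
# The removal offer comes from the exact-version dependency
# "libssl-dev : Depends: libssl1.0.0 (= 1.0.1g-3)". Version ordering
# can be double-checked with sort -V:
printf '%s\n' 1.0.1g-4 1.0.1g-3 | sort -V | head -n1   # oldest first
```

The design reason is that, when asked to install only libssl1.0.0, aptitude's resolver first tries solutions that leave un-requested packages at their current versions, and removing the blocker is one such solution; naming both packages removes that constraint.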
snmpd upgrade failure
I just upgraded a sid box on my network. It is a container running on openvz. During the upgrade, snmpd failed to upgrade, and I wanted to check here before I file a bug. I was running 5.7.2~dfsg-8.1+b1 i386, which was working; however, when I upgraded to 5.7.2.1~dfsg-3_i386, it broke. Apparently there was a patch in place from a year ago (http://sourceforge.net/p/net-snmp/bugs/2449/), which may have been omitted in the latest version. apt gives me:

rm: cannot remove '/run/snmpd.pid': No such file or directory
invoke-rc.d: initscript snmpd, action stop failed.
dpkg: error processing package snmpd (--purge):
 subprocess installed pre-removal script returned error exit status 1
Starting SNMP services::pcilib: Cannot open /proc/bus/pci
pcilib: Cannot find any working access method.
invoke-rc.d: initscript snmpd, action start failed.

Has anyone else seen this behavior?

Thanks,
--b
Re: Heartbleed
On Thu, Apr 17, 2014 at 12:15 PM, Robert Holtzman hol...@cox.net wrote:

>> Or Apple, who sacrifices your security by wordsmithing. According to
>> them, they don't get malware, their computers just have unwanted
>> programs.
> Not ever being an Apple user, I hadn't heard that before. When I read
> your post, I fell off the chair laughing. One more reason why I doubt I
> will ever use an Apple computer or anything else.

I had pretty much the same reaction. I was listening to Tracy Holtz on the Techie Geek podcast, and he was relating the story. He used to run a PC repair shop, and a customer brought his or her Mac in to have it cleaned. They had apparently taken it to the Genius Bar more than once, and were told there was no virus. So they took it to Tracy, who cleaned it and then billed it back to Apple. The regional director of marketing was the one who told him that...

That said, I have a (work) iPhone 4 that I absolutely loathe because of the walled garden. I have a Nokia N900 that I use for media playback, and I have a set of bluetooth headphones, which I have paired with both. I turned off the HFP and HSP (hands-free and headset profiles) on the N900, but in order to turn off the multimedia profiles on the iPhone, I have to buy a $5.99 app. While I can afford that, I refuse to buy an app for a work phone, and I especially refuse to buy an app to do something that took me about 6 seconds in vim on the N900...
Re: Heartbleed (was ... Re: My fellow (Debian) Linux users ...)
On Thu, Apr 17, 2014 at 3:36 AM, ken geb...@mousecar.com wrote:

> Steve brings up a very good point, one often overlooked in our zeal for
> getting so much FOSS for absolutely no cost. Since we're all given the
> source code, we're all in part responsible for it and for improving it.
> This ethic should be visited not only on lists like this one, but
> certainly also in CIS classes and definitely in business and governmental
> administration courses as well.

While I can agree in principle with this, in practice it's not that black and white. Let's look at a real-world example: cars. I, like most on this list, have owned many in my life, can drive them, and can even do routine maintenance on them, e.g. brakes, oil changes, changing belts, even changing the odd water pump. But a car is a complex system. There are many computers and moving parts that have to work (more or less) in unison for the car to operate properly. There are trained mechanics who know how they tick.

Similarly, software such as openssl is a complex beast. Very few people are going to be able to review it, let alone code for it. The two most dire warnings in the crypto code biz are a) never implement your own cryptosystem, because there are a million ways to do it, and 999,997 of them are wrong, and b) peer review is your friend. But just as I would probably prefer a certified mechanic to rebuild the engine in most modern cars, I would hope that the guys writing the code have a helluva lot more expertise than I do and are checking up behind each other. Plus, like OpenBSD, have mechanisms in place to minimize damage when things do go awry.

> And right now there is github, where over the past couple weeks I've
> noticed quite a few projects -- in fact, the majority of them -- started
> by one person but with no other contributors. A significant contribution
> can be as small as improving documentation. As Steve points out, without
> more involvement from more people, we're probably headed for repeated
> such calamities.
Well, you are free not to use those. I judge this on a case-by-case basis. For instance, I'm not likely to be an early adopter of Joe's super-secret foolproof cryptosystem with one dev and a handful of commits, but I might just think about using, say, the pitivi video editor at an early beta. Going back to the car analogy: I said above I would want a certified mechanic to rebuild the engine in a modern car, but I have no problem going to my neighbor and having him change the brake pads and rotors, or even doing that myself.
Re: Heartbleed
On Wed, Apr 16, 2014 at 5:43 PM, Slavko li...@slavino.sk wrote:

> Ahoj,
> I am talking about encryption and F/OSS in general, and I have my
> privacy in mind. There exist a lot of people in today's world who say
> that they have nothing to hide.

*Everybody* has something to hide. Everyone. Don't believe me? Offer to put a public webcam in their bathroom. :D The problem is that the people who want to know every single thing about everyone are the same ones making the rules as to whether or not you have anything to hide.

> I expect that critical applications (openssl, gpg, ssh, gnutls, etc.)
> will not contain these mistakes, and if something similar happens again
> (because yes -- mistakes happen), then discovering these mistakes will
> not take years, but days or weeks... Is it my mistake that I cannot help
> with this? Am I expecting a lot? Need I switch to proprietary software
> (yes, I know, that is no solution)?

You could, but then you end up in a situation where a corporate entity will sacrifice your security for their bottom line, for their next quarterly earnings statement. Look at MS, who finally fixed a years-old bug in XP two months before its end of life... Or Apple, who sacrifices your security by wordsmithing. According to them, they don't get malware, their computers just have unwanted programs.
Re: Can't patch Heartbleed bug?
I don't believe that Wheezy was vulnerable to Heartbleed. It was only the 1.0.1f (committed 31 Dec 2011) that incorporated the vulnerable heartbeat feature. My wheezy box has 1.0.1e:

ii  libssl1.0.0:i386  1.0.1e-2+deb7u6  i386  SSL shared libraries
ii  openssl           1.0.1e-2+deb7u6  i386  Secure Socket Layer (SSL) binary and related cryptographic tools

So you shouldn't have anything to worry about.

HTH,
--b

On Thu, Apr 10, 2014 at 8:56 AM, Dr. Jennifer Nussbaum bg271...@yahoo.com wrote:

> I'm running Debian Wheezy 7.4 on a server in Amazon's EC2, that I
> installed recently from the official Debian AMI. I haven't made any
> changes to the package infrastructure. I'm trying to fix the Heartbleed
> bug, but my system seems to think everything is up to date.
>
> My /etc/apt/sources.list has:
>
> deb http://cloudfront.debian.net/debian wheezy main
> deb-src http://cloudfront.debian.net/debian wheezy main
> deb http://cloudfront.debian.net/debian wheezy-updates main
> deb-src http://cloudfront.debian.net/debian wheezy-updates main
>
> I run sudo apt-get update, and things get pulled down. But when I run
> sudo apt-get upgrade, I get:
>
> $ sudo apt-get upgrade
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
>
> My openssl is not up-to-date:
>
> $ openssl version -a
> OpenSSL 1.0.1e 11 Feb 2013
> built on: Sat Feb 1 22:14:33 UTC 2014
> platform: debian-amd64
> [...]
>
> I've waited a day in case there's some issue with the mirrors not
> getting updated, but this still happens. What's the right way to make my
> system safe? All the instructions I've seen say apt-get update; apt-get
> upgrade will do it, but not in my case!
>
> Thanks! Jen
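A side note on the quoted sources.list: it contains no entry for the Debian security archive, which is where DSA fixes (including the openssl one) are published, so apt-get upgrade has nothing newer to offer. A sketch of the missing lines (the standard wheezy security entries; verify the suite name against your release):

```
deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main
```

After adding them, apt-get update followed by apt-get upgrade should offer the patched openssl packages.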
Re: Can't patch Heartbleed bug?
On Thu, Apr 10, 2014 at 9:54 AM, Florian Ernst florian_er...@gmx.net wrote:

> This is not accurate, OpenSSL 1.0.1 through 1.0.1f (inclusive) are
> vulnerable. Please see https://www.debian.org/security/2014/dsa-2896 as
> well as http://heartbleed.com/

Thanks Flo,

That's one of the problems with stories like this: there is a lot of misinformation out there. I started reading on Bruce Schneier's site, and bounced off several sites from there. I guess I either read wrong or hit some misinformation.

Also, with the extensive list of apps that need to be restarted, unless you have an overriding reason not to, I would recommend that you reboot instead of trying to cherry-pick apps to restart. (The "nuke it from orbit. It's the only way to be sure." approach. :) ) Debian did a good job of finding most of the apps that depend on openssl, but I know they missed at least one: puppet.

--b
Re: DynDNS no longer free.
On Tue, Apr 8, 2014 at 5:38 PM, Tom Furie t...@furie.org.uk wrote:

> On Tue, Apr 08, 2014 at 03:51:01PM -0500, Hugo Vanwoerkom wrote:
>> DynDNS just announced that their free hostname program in 30 days will
>> no longer be gratis. I use that with ddclient to update the IP address
>> for my blog. Are there other free alternatives?
> This news disappointed me too, given that I have been using their
> service since the available domains were dyndns.org and two others that
> I don't remember. There are some free alternatives which I am currently
> investigating.

I am disappointed too.

> Unfortunately most of those I have looked at seem to require a Windows
> binary download, or use a web-based interface which would require manual
> intervention whenever my IP changes (rather defeating the purpose, I
> think).

Over lo these many years, I have run ez-ipupdate on my perimeter to keep my dynamic hostname in sync. So I pulled up the description, which says:

Currently supported are: ez-ip (http://www.EZ-IP.Net/), Penguinpowered (http://www.penguinpowered.com/), DHS (http://members.dhs.org/), dynDNS (http://members.dyndns.org/), ODS (http://www.ods.org/), TZO (http://www.tzo.com/), EasyDNS (http://members.easydns.com/), Justlinux (http://www.justlinux.com), Dyns (http://www.dyns.cx), HN (http://dup.hn.org/), ZoneEdit (http://www.zoneedit.com/) and Hurricane Electric's IPv6 Tunnel Broker (http://ipv6tb.he.net/).

That should at least give you some choices...

--b

> Cheers, Tom
> --
> But you'll notice Perl has a goto. -- Larry Wall in 199710211624.jaa17...@wall.org
Re: DynDNS no longer free.
On Tue, Apr 8, 2014 at 7:03 PM, Brad Alexander stor...@gmail.com wrote:

> Over lo these many years, I have run ez-ipupdate on my perimeter to keep
> my dynamic hostname in sync. So I pulled up the description, which says:
> Currently supported are: ez-ip, Penguinpowered, DHS, dynDNS, ODS, TZO,
> EasyDNS, Justlinux, Dyns, HN, ZoneEdit and Hurricane Electric's IPv6
> Tunnel Broker.

I apologize. I should have vetted these before posting them. Best as I can tell, ez-ip, penguinpowered, and hn seem to be gone; dhs, ods, and easydns are no longer free; tzo got acquired by dyndns (thus falling under the "30 days left" clause); and zoneedit (verisign) is also not free. dyns claims to have a free service, but requires a minimum of a 5 Euro donation.

So again, I apologize. I thought the ez-ipupdate list would be more current.

Regards,
--b
Re: Whole System Encryption, LVM Extended Partition
On Sat, Mar 29, 2014 at 7:49 PM, Patrick Bartek bartek...@yahoo.com wrote:

> Did a couple of trial installs of Wheezy in VirtualBox in anticipation
> of the real thing on an as yet to be purchased notebook, and noticed
> something puzzling with the Guided-Encrypted-LVM partitioning option.
> (I've never done encryption on my systems before.) The installer used a
> classic Extended partition, i.e. sda5, instead of a Primary one on which
> to place the LVMs: /, swap, /home. /boot was a Primary, as expected.
> Seems like an unneeded use of a logical partition layer on which to
> place another layer of logical partitions. Any valid reason for doing
> this? Not that I have found.

What you propose is exactly how I do mine. I have a roughly 512MB /boot on sda1 and the rest of the drive on sda2, which contains my encrypted partition, within which I put my LVM.

> I'd prefer just two Primary partitions: /boot, and the balance of the
> drive for the encrypted LVM partitions. Any reasons for not doing it
> that way?

It has worked great for many years for me. I've been running this config or one similar to it (I used to have a separate swap partition, but at the last nuke-and-pave I figured putting swap inside the LVM works better, since you only have to encrypt one container). I've been running luks-encrypted partitions since, oh, 2005 or 2006, I think. It's been a while.

I don't know about your use cases, but here is something that you might be interested in: http://blog.neutrino.es/2011/unlocking-a-luks-encrypted-root-partition-remotely-via-ssh/ This can be fairly easily set up, but protect the script (an encrypted thumb drive works), as your encryption passphrase is contained within it. If you are dealing with a remote server, you may wish to consider it.

HTH,
--b

> Thanks. B
Re: dirvish still a good choice?
On Fri, Mar 28, 2014 at 7:22 PM, Peter Michaux petermich...@gmail.com wrote:

> Hi,
> I see in The Debian Administrator's Handbook [1] that Dirvish is
> recommended for backups. Looking at the Dirvish website [2] it seems
> that the project has been inactive since 2008. Perhaps Dirvish is such a
> simple layer over top of rsync that it's been stable and hasn't needed
> attention in all this time.

I have been using backuppc for many years now and am completely happy with it. It is perfect if you a) have more than one box to back up, and b) have a box that you can dedicate (or even semi-dedicate) to backups. I am not at all familiar with Dirvish; however, the Debian Administrator's Handbook came out in 2012 or 2013, so it postdates the last release of Dirvish.

> Is Dirvish still a good choice in 2014?

I say give it a shot. If it works for you and doesn't have any obvious issues on bugs.d.o or any obvious security holes, use it in good health. If not, consider another solution.

HTH,
--b
rubygems
I have kind of a weird one. I have rubygems installed on a sid server, but it is at version 1.3.7, from squeeze. I don't have squeeze in my sources.list, so it should be upgrading to 1.8.x... but it tells me that 1.3.7 is the latest version. I only have wheezy, squeeze, and sid in my sources.list. Ideas on why it's not being upgraded, or how to get it to upgrade properly? I can get the debs online and manually dpkg -i them, but that's the ugly approach, and may or may not fix my base problem.

Thoughts?
--b
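A first diagnostic for a package stuck at an old version (a sketch; the policy output below is a fabricated sample, since the real values depend on the live apt database):

```shell
# On the affected box you would run:
#   apt-cache policy rubygems           # Installed/Candidate per suite
#   dpkg --get-selections | grep hold   # held packages never upgrade
# Fabricated sample output for illustration:
policy='rubygems:
  Installed: 1.3.7-3
  Candidate: 1.3.7-3'

# If Candidate equals Installed even with sid in sources.list, apt simply
# sees no newer real package -- e.g. when newer suites ship the name only
# as a transitional or virtual package provided by ruby itself.
echo "$policy" | awk '/Candidate:/ {print $2}'
```

The pin-priority section of apt-cache policy's full output would also show whether a preferences entry is keeping the old version in place.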
Re: Security question concerning jail or virtualization
On Thu, Mar 13, 2014 at 11:39 PM, shawn wilson ag4ve...@gmail.com wrote:

> Well, Linux has LXC, which is supposed to be equivalent to jails (also
> see docker). But use whatever suits you.

As are the older-school OpenVZ and Linux-VServer technologies.

> I don't know what's current for breaking out of VMs. It might be good to
> pay attention to who is using the most entropy and make sure you don't
> run out. Most VMs use processor VT to isolate things (I don't think any
> 'jail' does this).

The main difference between the jail/container technology and real VMs is that containers share the host node's kernel, while full virtualization involves representing, to some degree, everything about a physical machine, e.g. BIOS, kernel, etc.

> I think most providers use OpenStack (a suite of technologies). YMMV
>
> On Mar 13, 2014 11:06 PM, Martin Braun yellowgoldm...@gmail.com wrote:
>> Hi
>> I have recently experienced a server being hacked due to a security
>> problem with a PHP application that made it possible for the hacker to
>> gain a web shell.

It sounds like perhaps you should investigate a web application test suite. Whether this was running on a physical machine, a VM, or a container, it would not have changed the result of your php app getting hacked.

>> Due to this experience I would like to know what the best way to limit
>> such problems is, especially when hosting web servers for users who may
>> or may not install insecure applications on the web server.

Auditing your security is probably your best bet. As I said above, maybe some web app testing tools; run scans against your server regularly with Nessus or OpenVAS; plus the security best practices... good password hygiene, bastion hosts (only one type of app on a machine), turning off/uninstalling unneeded apps, especially those with a network presence, etc.

>> What do the big hosters do? What do they use?

They hire staffs of sysadmins and security folks. :)

>> The solution can't be too complicated to maintain, and I would prefer
>> each user being completely separated from the main OS and from other
>> users.

Depends on what you are trying to protect and what you are trying to defend against.

--b
gcc and associated pkgs
What versions of gcc is it safe to remove? I have gcc 4.{1..8} installed on a box, and I'm fairly sure I can get rid of at least 4.1 through 4.6. Also, what associated packages should be removed with them? Should I get rid of the equivalent versions of gcc, gcc-base, and cpp? Anything else?

Thanks,
--b
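One cautious way to answer this on the box itself (a sketch; the package names and sample data are illustrative, not from the original post): simulate the removal and read what apt would drag along, since runtime libraries such as libgcc1 typically depend on the newest gcc-X-base package.

```shell
# Simulate first -- nothing is actually removed with -s:
#   apt-get -s remove gcc-4.1 cpp-4.1 gcc-4.1-base
# and inspect the proposed actions before committing. The newest
# gcc-X-base usually cannot go, because libgcc1/libstdc++ depend on it.
#
# Picking the newest installed version from a list (sample data here;
# on a real system: dpkg -l 'gcc-4.*' | awk '/^ii/ {print $2}'):
installed='gcc-4.1
gcc-4.6
gcc-4.8'
keep=$(printf '%s\n' "$installed" | sort -V | tail -n1)
echo "keep $keep, consider removing the rest"
```

Repeating the simulation per version (gcc-4.2, cpp-4.2, and so on) shows exactly which old compilers have no remaining reverse dependencies.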
Handbrake crashing
Has anyone seen this behavior in handbrake? I fired it up on both my sid workstation and my sid laptop, and as soon as I click the source and select the cdrom (or sr0 or dvd), it begins scanning the DVD, then the entire app crashes. There are no errors in the logs, and even when I started it from the command line, there were no error messages. I checked bugs.d.o and didn't find anything resembling this bug. The DVD plays back fine in both mplayer and vlc.

Any ideas?
--b
Re: Handbrake crashing
Thanks.

On Wed, Feb 5, 2014 at 7:42 AM, berenger.mo...@neutralite.org wrote:

> You said that you had no message, even when started in a terminal? Not
> even "segmentation fault"? That's strange, and nobody but the author
> could help you at this point, I guess.

That is correct. Not so much as a "by your leave". :) Nothing in syslog, daemon.log, debug, nothing.

> But anyway, you could find more information by installing the dbg
> package and giving gdb some work. It could work. Do a check about
> official dependencies, and verify that they are installed, too.
> Sometimes Debian's maintainers forget some (I have had that situation
> in the past, but in that case you should have a message when you run it
> from a console, except if the devs have redirected the output somewhere
> into the void. Already seen that, too.). You could also try not using
> the experimental package (you're running sid, but this tool seems to be
> present in testing too) for the tool itself and/or its dependencies.

I have found the problem. It appears it is not handbrake, but rather libdvdnav4. In the upgrade from 4.2.0 to 4.2.1, some symbols were removed, specifically dvdnav_dup and dvdnav_free_dup. It is listed in bug 735760 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=735760). I downgraded libdvdnav4:amd64 from 4.2.1-2 to 4.2.0+20130225-4, and it works with the current version of handbrake.

--b
todo list software
At the risk of offending someone's tender sensibilities, can anyone recommend a good todo-list application? I run KDE on sid, and kmail/kontact/etc. seem to be in a bit of a mess. I did a search for todo-list software on Debian this morning, and a lot of what I came up with hasn't been developed since 2003 or 2006. I was just wondering what others here might use.

--b
Re: How can I secure a Debian installation?
On Tue, Jan 28, 2014 at 12:41 AM, Scott Ferguson scott.ferguson.debian.u...@gmail.com wrote:

> Keep updated, subscribe to the security list, read and follow the fine
> manual:
> https://www.debian.org/doc/manuals/securing-debian-howto/

Another suggestion I would make would be to regularly scan with one or more vulnerability scanners, such as the (free) Nessus home feed (http://www.tenable.com/products/nessus/select-your-operating-system), OpenVAS (http://openvas.org/), or some other scanner.

--b
Re: Confused about dist-upgrade
On Thu, Jan 23, 2014 at 4:31 PM, Robin rc.rattusrat...@gmail.com wrote:

> Were those users using Debian stable? I use Sid, so I usually
> dist-upgrade as long as it isn't going to obviously affect my system,
> i.e. removing applications I want to keep.

My process on sid is:

apt-get update
apt-get upgrade
apt-get dist-upgrade

I then check dist-upgrade to make sure nothing critical is being uninstalled. If it is, I abandon that step. The reason I end up doing it this way is that a lot of things, like the kernel, only seem to be upgraded during the dist-upgrade.

--b
Preserving LVM across builds
Hey,

I have a question that I thought I would post here because I have never done this before. I have a buddy who has a system that is in desperate need of a rebuild. It is truly a Franken-box, with 4 hard drives (2x80GB, 1x160GB, and 1x250GB), and has both an Ubuntu build and a Mint build on it. He wants to consolidate it into a single Debian build.

The 250GB drive is an LVM PV with a single VG and two LVs. Unfortunately, he doesn't have sufficient drive space to move the data off that drive. My question is what needs to be done (or if it is possible) for him to unplug the drive with the LVM, install Debian on one or more of the remaining drives, then re-incorporate the drive into the new Debian install? Is it possible? And what is the best approach to doing so?

Thanks,
--b
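For what it's worth, a sketch of the reattach step (VG/LV names and the UUID are made up for illustration): LVM metadata lives on the PV itself, so a freshly installed Debian should detect the old VG as soon as the drive is plugged back in, provided the installer was never pointed at that drive.

```shell
# Hypothetical session after the fresh install, with the 250GB drive
# reattached (names vg_data/lv_media are examples):
#   vgscan                       # scans all PVs, finds the old VG
#   vgchange -ay vg_data         # activates its logical volumes
#   blkid /dev/vg_data/lv_media  # read the filesystem UUID
# Then add a mount by UUID to /etc/fstab (UUID below is fabricated):
uuid=0a1b2c3d-4e5f-6789-abcd-ef0123456789
fstab_line=$(printf 'UUID=%s /data ext4 defaults 0 2' "$uuid")
echo "$fstab_line"
```

One caveat worth checking before activation: if the old VG happens to share a name with a VG the new install creates, rename one with vgrename first, since two active VGs cannot share a name.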
Re: How to install memoizable?
Thanks Pierre. I was looking through bug reports, I just hadn't gotten to ruby-locale. :)

--b

On Thu, Dec 26, 2013 at 3:54 AM, Pierre Etchemaïté pe-gm...@concept-micro.com wrote:

> Hi Brad,
> Brad Alexander storm16 at gmail.com writes:
>> I tried to dist-upgrade my workstation tonight, and apt-listbugs failed
>> for me:
>> [...]
> The only recent Ruby-related package upgrade on my system was
> ruby-locale, and indeed, for the time being, I fixed the issue by
> downgrading this package to version 2.0.9 using
> http://snapshot.debian.org/package/ruby-locale/2.0.9-1
>
> Pierre.
Adding an SSD
What is the best approach for adding an SSD to an existing system? This is on my desktop, with a 750GB spinning HD, and I am adding a 120GB Kingston SSDNow 300. Is the backup/nuke-and-pave the best or most reliable approach from a Debian perspective, or is there a way to partition the SSD and transfer the existing contents of the filesystems on the spinning HD to the SSD without overwriting things like the UUIDs of the partitions on the SSD? What are best practices now that SSDs (and the kernel's handling of SSDs) have theoretically gotten better over the last couple of years? I have paid peripheral attention to the whole SSD discussion, but not enough to be an expert. Then a coworker made me a deal I couldn't pass up, so I bought one. I've been reading articles for a while now, but a lot of them are from 2012 or before, and I'm wondering if they are out of date, and if so, how far.

Finally, I plan to run encrypted partitions, with LVM containers within. From what I have seen in my reading, this is not a problem for SSDs: the encryption layer sits below the filesystem, and it doesn't write to the drive any more than regular writes would. So the plan is, due to practical necessity, to have two encrypted volumes, with separate LVM containers within them. On the SSD, the system partitions, like /, /usr, /var, /tmp, /usr/local, etc. On the 750GB drive, /data, ~/.PlayOnLinux, /opt. I'm not sure which way to go with /home. There is plenty of room on the SSD for it, but I am trying to walk the line between the speed of the SSD and beating it up. Any practical experience or advice from those who have done this would be appreciated.

Thanks,
--b
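For the transfer-without-reinstall route, a sketch of the usual rsync migration (device names, mount points, and the mount options are assumptions to verify against your setup; note that with dm-crypt, TRIM additionally needs allow-discards enabled in /etc/crypttab before a discard/fstrim setup does anything):

```shell
# Hypothetical migration after partitioning, LUKS, LVM, and mkfs on the
# SSD, with old and new roots mounted side by side:
#   rsync -aHAXx --numeric-ids /mnt/old-root/ /mnt/new-root/
# (-x stays on one filesystem; repeat per filesystem being moved)
# Then point /etc/fstab at the new filesystem's UUID (fabricated here):
ssd_uuid=1b2c3d4e-5f60-7182-93a4-b5c6d7e8f901
printf 'UUID=%s / ext4 noatime,errors=remount-ro 0 1\n' "$ssd_uuid"
# Finally, from a chroot into the new root:
#   update-initramfs -u && update-grub && grub-install /dev/sdX
```

Because the new filesystems get their own UUIDs, nothing on the SSD is overwritten; only fstab, crypttab, and the bootloader need to be told about the new identifiers.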
How to install memoizable?
I tried to dist-upgrade my workstation tonight, and apt-listbugs failed for me:

After this operation, 167 MB of additional disk space will be used.
Do you want to continue? [Y/n]
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- locale/util/memoizable (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/vendor_ruby/gettext/class_info.rb:3:in `top (required)'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/vendor_ruby/gettext/text_domain_manager.rb:13:in `top (required)'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/vendor_ruby/gettext.rb:19:in `top (required)'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/sbin/apt-listbugs:294:in `main'
E: Sub-process /usr/sbin/apt-listbugs apt returned an error code (1)
E: Failure running script /usr/sbin/apt-listbugs apt

How do I get this required gem? I tried gem install memoizable, however, it failed as well:

gem install memoizable
Fetching: atomic-1.1.14.gem (100%)
Building native extensions. This could take a while...
ERROR: Error installing memoizable:
ERROR: Failed to build gem native extension.
/usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from extconf.rb:13:in `main'
Gem files will remain installed in /var/lib/gems/1.9.1/gems/atomic-1.1.14 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/atomic-1.1.14/ext/gem_make.out

gem_make.out shows it couldn't load a file, mkmf. This has happened only since my last dist-upgrade. Is something missing here? Thanks, --b
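On the gem-build failure specifically: mkmf ships with Ruby's development files, which Debian splits out of the base interpreter package. A hedged sketch follows; the package names are assumptions for the Ruby 1.9.1 era, and since the apt-listbugs breakage itself was a ruby-locale regression (see the reply above in this thread), installing the gem may not be needed at all. Wrapped in a function so nothing runs until called.

```shell
# Hedged sketch: "cannot load such file -- mkmf" while building a
# native gem usually means Ruby's development files are missing; on
# Debian of that era they lived in a separate -dev package (package
# names assumed, adjust for your Ruby version).
install_gem_build_deps() {
    set -e
    apt-get install ruby1.9.1-dev build-essential  # mkmf plus a compiler
    ruby -e 'require "mkmf"'                       # sanity check: loads now?
}
```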
Re: Deadline for jessie init system choice
On Sun, Dec 15, 2013 at 1:19 PM, Brian a...@cityscape.co.uk wrote: On Sun 15 Dec 2013 at 14:03:36 +, Tom H wrote: The goal to have native systemd support in every package with sysv scripts (if accepted) and a decision on a new init system may be related, but only the first is linked to the time of the freeze. My 'couple of years' might have been spurious but, considering the stated goal might only be realised a day before the freeze (or not at all), the merit of putting a focus on a decision date as pre-freeze isn't clear.

Personally, I don't see why they feel the need to change. I'm not a big fan of systemd, with its combined bin and sbin directory trees and its binary log format. And if I am honest, it seems to put entirely too much control in systemd's hands, and, if I may be so bold, it gets away from the Unix philosophy. Instead of a bunch of small apps that each do one thing extraordinarily well (e.g. grep, sed, awk, sort, uniq, etc.), we now have a large overseer application which seems to take the one-ring-to-rule-them-all approach. (And yes, I may be generalizing here, since I am just starting my exploration of systemd, but I'm not enamored with what I have seen so far.)

What's wrong with sysvinit? It's not broken. Hell, the BSDs and Slackware still use a BSD-style startup. I feel like Debian is looking for a solution to a problem that doesn't really exist.

Even if the decision was made in February/March/April would this imply going into Jessie with a new init system is a realistic possibility?

I'm not even going into the debate about the political pros and cons of going with systemd vs. upstart. Just suffice it to say that there are an awful lot of distros that depend on Debian for this to be a decision to be rushed into... It's not like sysvinit is going to turn into a pumpkin... Just my 2 cents. --b
Re: Shutting down lessens computer life............
On Mon, Dec 9, 2013 at 8:21 PM, Steven Rosenberg stevenhrosenb...@gmail.com wrote: I don't see well-used laptops lasting longer than 5 years. Something's bound to go wrong. I still have an old PII Toshiba that works. Now I haven't booted it up in a couple of years, but everything was still functional.
Re: Shutting down lessens computer life............
On Mon, Dec 9, 2013 at 3:41 PM, Charlie aries...@skymesh.com.au wrote: On Mon, 9 Dec 2013 15:27:15 +0100 Gian Uberto Lauri sent: I know that shutting down the machine saves electricity, but heating and cooling is the mechanical stress that hits the non-moving components of your computer; computers that turn off less often live longer. I wonder if the above is right? I've seen it written somewhere before? Maybe it only applies to desktops? Both these are switched off sometimes after only 15 minutes powered up, depending on the charge in the solar batteries. But mostly on for at least 8 hours in 24, and switched on and off no fewer than 6 times during that period. I remember reading a report in the mid-90s stating that one of the biggest life-shortening effects of powering on and off was the heating and cooling of the hard drive bearings. Now, that said, I do not know how much change has occurred in hard drive bearings, though I would have to guess that modern hard drives do not get as hot.
Re: Best Mail Client - Was: [closed] A rookie's query: Want to about Debian and the related
On Tue, Dec 3, 2013 at 6:05 PM, AP worldwithoutfen...@gmail.com wrote: On Tuesday, December 03, 2013 12:46:05 PM Brad Alexander wrote: I like kmail's interfaces. It's just the backend encryption that has a problem. For whatever reason, it won't let me decrypt and add my s/mime certificate on my installation at work, and at home, it uses gpg Can you please let me know what that means -- backend encryption, I meant. If it has encryption, that would surely be better, in my view. I have two use cases. 1) My home machine: I use kmail with gpg keys to sign and encrypt emails. Unfortunately, when I sign+encrypt (I have not tried just signing) to people with whom I use encryption, I have had complaints that the signature verification fails. They are invariably using thunderbird/icedove. Apparently, this is because kmail rewrites the headers (e.g. http://mail.kde.org/pipermail/kdepim-bugs/2013-August/087109.html) 2) On my work laptop: We use SSL certificates for signing/encryption. We got them from Entrust as a pkcs12 file. I have, as yet, been unable to import them into kmail. Note that there are a number of still-open bugs in Debian against kmail and kleopatra: kmail: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=524759 kleopatra: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=594114 I'm not sure what in the combination of kmail, gnupg-agent, and kleopatra (and who knows what else) is causing my kmail problems. --b
Re: Best Mail Client - Was: [closed] A rookie's query: Want to about Debian and the related
On Tue, Dec 3, 2013 at 8:42 AM, Ralf Mardorf ralf.mard...@alice-dsl.net wrote: On Tue, 2013-12-03 at 18:19 +0530, AP wrote: close this thread now Then it's time for me to post this link: http://dot.kde.org/2004/11/02/kontactkmail-awarded-best-mail-client KMail was an award winner, for being the best GUI MUA. I disagree ;); for me there is still no best or less-best. I had this in mind when you started the thread :p, but I managed to resist mentioning it. I like kmail's interfaces. It's just the backend encryption that has a problem. For whatever reason, it won't let me decrypt and add my s/mime certificate on my installation at work, and at home it uses gpg, but that doesn't work out for folks using icedove. Icedove itself isn't attractive either; it seems to suck CPU and memory.
Re: Squeeze dpkg --get-selections usable in any way in Wheezy?
On Tue, Dec 3, 2013 at 5:39 PM, Neal Murphy neal.p.mur...@alum.wpi.edu wrote: On Tuesday, December 03, 2013 05:26:37 PM Lisi Reisz wrote: I have had another failed upgrade. Before I tried to upgrade, I ran dpkg --get-selections and saved the result in a file. I am obviously going to have to install Wheezy from scratch. Is there any way I can make use of Squeeze's package list to give the owner of the box the same applications/packages as she had in Squeeze, mutatis mutandis? This topic was recently discussed, I think within the last 2-3 weeks. ... Ah, 11/4: Re: Installing same packages in a Squeeze installation in a new Wheezy installation. Excerpt from a thread summary I posted: The theory is that --get-selections will list all installed pkgs in a form that can be used by --set-selections to (re)install them. So, on the existing system:

# Mount a thumb drive; change 'sdg' as needed
mount /dev/sdg1 /mnt
# Save the list of pkgs
dpkg --get-selections > /mnt/current_installed_pkgs.txt
# Umount, then unplug the drive
umount /mnt

And on the new system, first netinstall a basic system. Then:

# Mount the thumb drive; change 'sdd' as needed
mount /dev/sdd1 /mnt
# Set the list of pkgs to install
dpkg --set-selections < /mnt/current_installed_pkgs.txt
# Unmount and remove the thumb drive
umount /mnt
# Start the 'upgrade'
apt-get dselect-upgrade

When done, the new system should have the same pkgs as the old system. It won't be identical, but it'll be close. Neal beat me to the punch, Lisi. One point I would like to make is that it is probably not a bad idea to capture this from all of your machines as a starting point, to have a pool of machine types. For instance, being a believer in bastion hosts, I have a separate firewall, backup machine, fileserver, workstations, laptop, and containers (OpenVZ) for, for instance, a mediawiki box, puppet master, etc. Each machine runs a script (installed by puppet) at 4am that does a dpkg --get-selections and writes it to the filesystem.
This is backed up daily. This way, if I am building a new box of a type, I have a base config from which to work. --b
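The nightly capture described above could be as small as the following sketch. The output directory and naming scheme are illustrative assumptions, not the actual script from the thread; schedule it from cron (e.g. a 4am crontab entry) as described.

```shell
# Hedged sketch of the nightly package-list snapshot described above.
# The output path is a hypothetical choice; adjust to taste. Function
# wrapper so nothing runs until invoked.
snapshot_selections() {
    outdir=/var/backups/pkg-selections          # hypothetical location
    mkdir -p "$outdir"
    # One dated file per host, ready for dpkg --set-selections later.
    dpkg --get-selections > "$outdir/$(hostname)-$(date +%F).txt"
}
```

Restoring onto a fresh netinstall is then dpkg --set-selections < file followed by apt-get dselect-upgrade, exactly as in Neal's summary.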
Re: MIT discovered issue with gcc
On Sat, Nov 23, 2013 at 6:18 AM, Michael Tautschnig m...@debian.org wrote: This looks very serious indeed, but a quick search of Debian mailing lists didn't show anything being acknowledged for this issue; should Debian users be concerned? Probably not more than before, but as much as always: you are using code that hasn't been proved to be correct. But with open-source software at least you know what code you are using, and which bugs are being found. What I have told people in presentations is that the only truly secure computer is one that is turned off, unplugged, packed in concrete, and fired into the sun. Any program at a level not very much above Hello World in the language of your choice is likely to have bugs. I mean, you would have to swear off all software, turn off your computers, get rid of your cell phone, etc. At this point, I'm not quite willing to go that far. As Michael said, it's something to be aware of, but not something to keep you awake at night worrying. --b
top posting
I'm just curious why so many people get so upset about top posting. To my mind, as threads get longer, those keeping up with the thread would not want to scroll through messages that they have already read. I know that I don't. If they are commenting inline, that is fine, but I think that scrolling to the bottom of each and every message is more of a pain than if a respondent posts at the top of the message. And please forgive me for starting this. Thoughts?
Re: tv watching apps
Thanks for the links, guys. So now on to the troubleshooting part. Since both tvtime and vlc have the problem, I am thinking it is not the applications themselves. The problem is that I am getting video, but not audio, in both tvtime and vlc. So I am trying to determine where the problem lies. She can get audio, for instance, in iceweasel video, and she can play music in her audio app, but not with the TV capture card. So it's either in the sound system or the capture card audio has failed. So where do I start to look in the sound system? I have had a love-hate relationship with pulseaudio, so I don't want to blame it as a knee-jerk reaction. I'm hoping for some reasoned troubleshooting advice wrt pulse. :) thanks, --b On Fri, Nov 8, 2013 at 5:34 AM, Darac Marjal mailingl...@darac.org.uk wrote: On Thu, Nov 07, 2013 at 09:14:32PM -0500, Brad Alexander wrote: I have a Hauppauge (bt878) card in my wife's machine. It is connected to cable, and she uses tvtime to watch tv in a window on her computer while she works. Unfortunately, every now and again, she will lose audio on it, and I have the damnedest time getting it to work again. I don't know if it is tvtime itself or pulse audio that is the problem. When tvtime is messed up, I can hear a low hum from the speakers, and when I mute tvtime, it goes quiet. Since the last release was in Nov 2005, I thought I might try to find something more modern that might play more nicely with pulse. However, all of the sites I have hit thus far have had really, really old apps, or the sites are just gone. I have found apps such as kdetv, kwintv, lintv, etc, but all are old and either dead or unmaintained. Does anyone have any recommendations? MythTV and/or XBMC are very capable TV solutions, but may be overkill for what you want. They are basically full-screen Personal Video Recorder solutions (that is, they turn your computer into a TiVo/Sky-Plus type box). If you're after watching TV in a window while getting on with other work, they can be fiddly to work with.
But if you're interested in turning the monitor into a TV and sitting back to watch something, then they're good at that. Thanks, --b
Re: top posting
On Wed, Nov 20, 2013 at 6:59 PM, David Guntner da...@guntner.com wrote: Brad Alexander grabbed a keyboard and wrote: I'm just curious why so many people get so upset about top posting. To my mind, as threads get longer, those keeping up with the thread would not want to scroll through messages that they have already read. I know that I don't. If they are commenting inline, that is fine, but I think that scrolling to the bottom of each and every message is more of a pain than if a respondent posts at the top of the message. https://wiki.debian.org/FAQsFromDebianUser#What_is_top-posting_.28and_why_shouldn.27t_I_do_it.29.3F People shouldn't be bottom posting the way you describe, either. Long-standing (literally decades-old) conventions for E-Mail have used the quote-and-reply style for maintaining the flow of a conversation. This is especially important on a mailing list, where many people can contribute to a given conversation. Actually, I can see the point of posting inline, however, leave it to google and other mail apps to go and ruin it. In the gmail web interface, when you reply to an email or even a thread, you get the text entry box, with the message you are responding to hidden by a ... icon. Thus, if you are not paying attention or are trying to respond quickly, it is easily overlooked. I posted in a thread earlier, and was surprised when someone started their response with don't top post. I didn't even realize I had, thanks again to google. It's also a part of that long-standing convention that as a conversation grows larger, you're also supposed to trim out the parts of the text which no longer pertain to what you're replying to (keeping in enough for relevance while getting rid of stuff that is no longer part of the discussion). That is also, sadly, something that a lot of people don't get. But it's not as disruptive to the conversation as suddenly throwing comments on top of an existing conversation flow instead of interspersed with the rest of them. 
Top posting is even more disruptive when it's thrown in at the top of an existing conversation where people have been quoting and adding their comments among the quoted text. It's just plain lazy and is disrespectful of the other people in the conversation. All right, I can understand and respect that. Given that this topic really isn't specific to Debian, it's probably best if you take the conversation regarding it to the off-topic list: http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic --Dave
Re: Automatic installs
Sorry. Replied privately instead of to the list... Way back in the mists of time, around the time of the squeeze release, I asked here and it was recommended to use apt rather than aptitude... I'm guessing the best practice has changed...? On Tue, Nov 19, 2013 at 3:50 AM, Andrei POPESCU andreimpope...@gmail.com wrote: On Du, 17 nov 13, 16:39:10, Neal Murphy wrote: Actually, if efficiency was important, the --purge option would accept a regex. Or there'd be a --purgex option. Behold the power of aptitude:

aptitude purge ~c
aptitude purge ~o

... Kind regards, Andrei -- http://wiki.debian.org/FAQsFromDebianUser Offtopic discussions among Debian users and developers: http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic http://nuvreauspam.ro/gpg-transition.txt
Re: Automatic installs
Excellent. Thanks, Andrei, On Tue, Nov 19, 2013 at 7:10 PM, Andrei POPESCU andreimpope...@gmail.com wrote: On Ma, 19 nov 13, 17:28:27, Brad Alexander wrote: Sorry. Replied privately instead of to the list... And I'll elaborate on my short reply as promised. Way back in the mists of time, around the time of the squeeze release, I asked here and it was recommended to use apt rather than aptitude... I'm guessing the best practice has changed...? Each has its strengths and weaknesses and I use both: - on my sid laptop I have an aptitude in interactive mode open all the time, to keep the system updated (almost daily), to look up package information, to check out new packages, to remove obsolete ones and other general maintenance of the system I've tried the interactive mode, but it reminds me too much of dselect. :) I started using Debian around 1999 (slink?), and it was pre- or early days of apt. I could never get through the install, because I had a mental block against dselect. Try as I might, I can't get past the resemblance of aptitude's interactive mode. - for a quick search by name or description I use 'apt-cache search', because it's faster I use apt-cache and apt-file together on both my workstation and laptop. - for complex searches (and possibly actions on the set of packages a search would return) aptitude is better - for maintenance of stable systems I prefer apt-get (update upgrade dist-upgrade) because it's fast and simple. Agreed. This is the only way I upgrade. I run a sid desktop and a sid laptop. However, I don't upgrade nearly as often. I run apticron to keep an eye on critical packages, either function or urgency. - for quickly installing a package I prefer apt-get (faster) except for my sid system where aptitude is already open - etc. Recently aptitude's dependency resolver also couldn't come up with reasonable solutions for some transitions in sid, so I used apt-get instead. I have seen this too.
I'm not a fan of the dependency resolution in aptitude. Generally, it gives me a number of non-optimal solutions. I'm not sure what logic gets used, but I have seen cases where it wants to uninstall major portions of the system because a package needs to be upgraded. For a dist-upgrade you should use whatever is advised in the Release Notes for that release, regardless of your usual preferences. Actually, nowadays, I end up having to dist-upgrade to get new kernels, etc. Which kind of defeats the *dist* part of dist-upgrade. Regards, --b
Re: testing wants to install systemd
That's interesting. I am on a sid system, and I haven't noticed systemd on my system. I have the support libraries:

$ dpkg -l | grep systemd
ii  libsystemd-daemon0:amd64  204-5  amd64  systemd utility library
ii  libsystemd-login0:amd64   204-5  amd64  systemd login utility library

I also checked a testing box, and it has the same libs. I also, just out of curiosity, checked apticron on both boxes, but neither of them has systemd on the to-be-installed list. Maybe a dependency on something previously installed? --b On Tue, Nov 19, 2013 at 7:19 PM, Rob Owens row...@ptd.net wrote: I run a testing system that I depend on to get work done on a daily basis. I noticed today that a dist-upgrade wanted to install systemd. I've never used systemd -- is there anything to fear? For those who have installed it, does the system handle the switch from the old init scripts, or is there a lot of manual intervention and time required? -Rob
Audio from TV capture card
All right, I'm out of ideas. In my wife's sid machine, she has a Hauppauge BT878-class TV capture card connected to a cable box. She uses tvtime to watch television when she is on the computer. About a week ago, she suddenly had no audio from the card. She turned it off one evening before going to bed, and the next morning, she turned it on and had no audio. I have looked at it in kmix, alsamixer, the kde sound settings, and, I think, even the pavucontrol app (it was one of the pulseaudio apps, can't remember which one -- pulseaudio and I have had a contentious relationship at best). Nothing. No changes were made between when she turned tvtime off and when she turned it back on the next morning. I also tried using vlc to connect to the capture card, and still no audio. Can anyone suggest where the problem might lie?
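For anyone picking this thread up: the usual first checks when a single capture source goes silent while everything else plays are the read-only ones below. This assumes an ALSA-plus-PulseAudio stack as described; the card number 0 is a guess to be corrected against /proc/asound/cards, and nothing here changes any state.

```shell
# Hedged sketch: read-only checks for a silent bt878 capture input.
# The card index (-c 0) is hypothetical; check /proc/asound/cards first.
check_capture_audio() {
    cat /proc/asound/cards                         # is the bt878 card still enumerated?
    amixer -c 0 scontents | grep -A2 -i capture    # muted or zeroed capture channel?
    pactl list short sources                       # does PulseAudio still see the source?
    arecord -l                                     # ALSA's view of capture devices
}
```

If the mixer levels and the card enumeration both look sane, the next suspicion would fall on the cable/loopback path between the tuner card and the sound card rather than on the software stack.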
Automatic installs
Have a question about automatic dependencies. I just updated my sid machine, as more and more of KDE 4.11 was uploaded this week. Well, when I did, I got the following during the dist-upgrade:

Calculating upgrade... The following packages were automatically installed and are no longer required: akonadiconsole akregator amor ark avogadro-data blinken blogilo bomber bovo cantor cervisia crda cvs cvsservice dnsmasq-base dragonplayer easy-rsa gnugo ...snip... step svgpart sweeper texlive-latex-base texlive-latex-base-doc translate-toolkit umbrello usb-modeswitch usb-modeswitch-data valgrind valgrind-dbg vpnc wireless-regdb wpasupplicant Use 'apt-get autoremove' to remove them.

I am deathly afraid to apt-get autoremove, because I see a lot of things in there that I use, such as akregator, gwenview, kwalletmanager, network-manager, etc. I did a quick check using apt-file, thinking that maybe the package name for the app might have changed:

$ apt-file search /usr/bin/akregator
akregator: /usr/bin/akregator
akregator: /usr/bin/akregatorstorageexporter
kdepim-dbg: /usr/lib/debug/usr/bin/akregator
kdepim-dbg: /usr/lib/debug/usr/bin/akregatorstorageexporter

But they didn't, as far as I can tell. So is this for real, and if I apt-get autoremove, it will gut my system, or am I missing some detail and it's all good? Thanks, --b
Re: Automatic installs
Hi Bob, (every time I see your avatar, it makes me want to go back and finish up my pilot's license. :) ) Thanks for your advice. I think the package manager got confused, but I was able to apt-get install kde-full. It added about half a dozen packages, but it also brought the ones I was having issues with back into the fold. Thanks for pushing me in the right direction. --b On Fri, Nov 15, 2013 at 6:46 PM, Bob Proulx b...@proulx.com wrote: Brad Alexander wrote: Calculating upgrade... The following packages were automatically installed and are no longer required: akonadiconsole akregator amor ark avogadro-data blinken blogilo bomber bovo cantor cervisia crda cvs cvsservice dnsmasq-base dragonplayer easy-rsa gnugo ...snip... step svgpart sweeper texlive-latex-base texlive-latex-base-doc translate-toolkit umbrello usb-modeswitch usb-modeswitch-data valgrind valgrind-dbg vpnc wireless-regdb wpasupplicant Use 'apt-get autoremove' to remove them. Did you install these with KDE and then along the way remove the kde meta package? I am deathly afraid to apt-get autoremove, because I see a lot of things in there that I use, such as akregator, gwenview, kwalletmanager, Then in that case mark them as being something you want to keep around. The easy way is simply to fire the install command on them. apt-get install akregator gwenview kwalletmanager ... That will say that they are up to date and then also say that it is marking them as manually installed. There is also the 'apt-mark manual foo' command to mark foo as manually installed too. Either way is fine. For this apt-get install is one less thing to remember. So is this for real, and if I apt-get autoremove, it will gut my system, or am I missing some detail and it's all good? As currently known if you run apt-get autoremove it will remove those packages. But if you are using those packages then don't do that. :-) Instead simply mark them as being manually installed.
Mark a few of the top level ones and then run autoremove, answer 'n'o. Then pick another top level package and mark it. Then run autoremove again and answer 'n'o again. Repeat until you have marked to keep all of the packages that you want to keep. Almost certainly along the way you will find some package that you don't want and will decide to let it go. The automatically installed list is just a helper to help you maintain your system. But it is a simple thing and doesn't know about all cases such as the removal of a kde meta package. You are certainly encouraged to use judgement and drive it the way you want. Bob
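Bob's mark-then-reverify loop, condensed into a sketch. The package names are just the examples already mentioned in the thread, and the autoremove step is only simulated, so nothing is removed.

```shell
# Sketch of the keep-marking loop described above. Package names are
# the examples from the thread; nothing destructive runs, since
# autoremove is only simulated (-s).
mark_keepers() {
    apt-mark manual akregator gwenview kwalletmanager network-manager
    apt-get -s autoremove          # -s: simulate, just show what would go
}
```

Using apt-get -s autoremove shows the would-be removals without touching anything, which makes the "repeat until the list looks right" loop less nerve-wracking than answering 'n' each time.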
Re: free-software phone: neo900
I agree. I hope they succeed. As a long-time N900 owner, I would love to upgrade my existing unit with a new motherboard and a new, more powerful processor. On Thu, Nov 7, 2013 at 1:05 AM, Pete Ley peteley11...@gmail.com wrote: green greenfreedo...@gmail.com writes: Something that might be of interest to Debian users: the neo900, at http://neo900.org, is intended to be a successor of the Nokia N900, with significantly improved specifications and features, as well as full free software support (excluding PowerVR 3D acceleration). It is even (as of this writing) planned to have Debian GNU/Linux as the bundled OS. Comments? This sounds pretty cool. I hope they pull it off in the end, but I might wait for it to drop in price. 6-850 EUR is a bit hefty for my pockets. I'd like to see some kind of reduced-specs version that cost a little less. For instance, I don't need a barometer in my phone. I could also do without the front facing camera or even the 5MP if it meant a price reduction. And with a highly configurable system like Debian GNU/Linux, I wouldn't mind having similar specs to the original N900 for a smaller price tag. I fully commend the project for trying to advance free software in the mobile world; that said, part of the idea for me is options. :) /mytwocents
tv watching apps
I have a Hauppauge (bt878) card in my wife's machine. It is connected to cable, and she uses tvtime to watch tv in a window on her computer while she works. Unfortunately, every now and again, she will lose audio on it, and I have the damnedest time getting it to work again. I don't know if it is tvtime itself or pulseaudio that is the problem. When tvtime is messed up, I can hear a low hum from the speakers, and when I mute tvtime, it goes quiet. Since the last release was in Nov 2005, I thought I might try to find something more modern that might play more nicely with pulse. However, all of the sites I have hit thus far have had really, really old apps, or the sites are just gone. I have found apps such as kdetv, kwintv, lintv, etc, but all are old and either dead or unmaintained. Does anyone have any recommendations? Thanks, --b
[OT] Ping mystery
I have sort of a weird one here. On my network, I have my firewall, which has 3 interfaces: eth0 to the internal network, eth1 to the DMZ with a wireless access point hanging off of it, and eth2 out to the interwebs. My workstation is on the internal network, and I have a Nokia N900 on the wifi. I saw in the backup reports that the Nokia failed to back up because of long ping times. So I pinged it, and while every ping came back and the time= looked normal, it was running at about 1/3 speed. I tried a combination of pings and found the following: * Pings from my workstation to the Nokia by hostname are slow; * Pings from the firewall are normal; * Pings from the access point are normal; * Pings from the workstation by IP are normal. Now, I thought it might be an issue in my DNS. It wasn't. The DNS responds normally with 1msec query times. What's more, I also have a Nokia N810 on the access point, and it does not exhibit these symptoms. Anyone got any suggestions on what the cause may be? Right now, everything is working fine, I can ssh/scp to it, etc. But I'd like to know what is causing it. Thanks, --b
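Some read-only probes that usually separate a resolver problem from a network one in this "slow by hostname, fast by IP" pattern. The hostname below is a placeholder, not the actual machine name from the post.

```shell
# Hedged sketch: read-only checks for "ping by name is slow, ping by
# IP is fine". HOST is a hypothetical placeholder for the Nokia's name.
diagnose_slow_ping() {
    HOST=n900.example.lan                 # placeholder hostname
    getent hosts "$HOST"                  # what the libc resolver (nsswitch) returns
    time getent hosts "$HOST"             # is the lookup itself slow?
    ping -c3 -n "$HOST"                   # -n: suppress reverse lookups on replies
}
```

If ping -n is fast while plain ping is slow, the delay is in reverse DNS on each reply rather than on the network path; if getent itself stalls, the place to look is nsswitch.conf ordering (e.g. mdns/avahi entries) rather than the DNS server, which would match the observation that the DNS server answers in 1msec.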
Kleopatra fails to decrypt certificate on import
I am running a sid KDE desktop on my machine, and recently received an Entrust certificate. I was able to decrypt the certificate in Iceweasel, however, I use kmail as my mail client. When attempting to import the certificate to kmail, I open the identity, and search for external certificates, which opens Kleopatra. I click on Import Certificates, and click the .p12 file. It prompts for my passphrase (the same one I use in iceweasel), and then says, An error occurred while trying to import the certificate certificatename.p12: Decryption failed What gives? Thanks, --b
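One avenue worth trying: Kleopatra delegates S/MIME work to gpgsm, and importing the .p12 on the command line often yields a more specific error than the GUI's generic "Decryption failed". The filename is a placeholder, and this assumes a gpgsm build with PKCS#12 import support.

```shell
# Hedged sketch: import the PKCS#12 bundle via gpgsm directly to get a
# more informative error than Kleopatra's dialog. Filename is a
# placeholder from the post.
import_p12() {
    gpgsm --import certificatename.p12    # prompts for the passphrase
    gpgsm --list-secret-keys              # did the key actually land?
}
```

Running it from a terminal also surfaces gpg-agent/pinentry problems (a wrong or missing pinentry can masquerade as a decryption failure in the GUI).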
Re: sluggish iceweasel
I have seen iceweasel on my workstation continually grab more and more memory and/or CPU. I suspect it is one add-on, or more likely a combination of add-ons, causing a memory leak. I have reduced them to the minimum I need to feel comfortable (adblock plus, request policy, tab mix plus, etc.), and I generally run a st00pid number of tabs... Of course, I have a fairly beefy machine, so I can go for some period without having to restart. On Wed, Oct 23, 2013 at 11:27 AM, Lisi Reisz lisi.re...@gmail.com wrote: On Wednesday 23 October 2013 14:07:21 Stephen Powell wrote: I have been using the iceweasel web browser for years; but in the past several weeks using an up-to-date jessie system, iceweasel has become very sluggish. Even the simplest operations, like scrolling the screen, have become so sluggish that iceweasel has become almost unusable. I just tried switching to Epiphany, not known for lightning speed, but Epiphany is quite snappy compared to iceweasel now. Is it just me? Or has someone else noticed this too? Is there relief in sight? Not continuously, but repeatedly. Closing it and restarting it usually helps. For a while. I too have deserted to other browsers, but not for long. Iceweasel is the only browser which I know how to view with no style, and I need no style to be able to read many modern sites. So I just groan and restart it. Lisi
JSON::XS error
Hi all, I'm trying to install and experiment with Thruk on my nagios server, but am running into a perl error, which is why I am posting here. I asked on the thruk irc channel, but that went nowhere. In any case, this is beyond my pitiful perl-foo, so I figured I'd ask. This is on an i386 sid container (openvz) running on proxmox-ve (wheezy). When I try to install the thruk package (thruk_1.76-3_debian8_i386.deb), I get the following error message:

Perl API version v5.14.0 of JSON::XS does not match v5.18.0 at /usr/lib/thruk/perl5/i486-linux-gnu-thread-multi-64int/XSLoader.pm line 92.
Compilation failed in require at /usr/share/thruk/lib/Thruk/Utils/CLI.pm line 19.
BEGIN failed--compilation aborted at /usr/share/thruk/lib/Thruk/Utils/CLI.pm line 19.
Compilation failed in require at /usr/bin/thruk line 77.

where line 19 of /usr/share/thruk/lib/Thruk/Utils/CLI.pm is:

use JSON::XS qw/encode_json decode_json/;

I have two JSON/XS.pm files on the system, one from the thruk package and the other from libjson-xs-perl. Can anyone suggest how to get past this error? Thanks, --b
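For what it's worth, the mismatch can be confirmed from the command line; which XS.pm wins depends on @INC order. These are read-only perl one-liners, with the interpretation hedged against the paths shown in the error above.

```shell
# Hedged sketch: confirm which JSON::XS perl actually loads and the
# search order that picked it. Read-only; paths come from the error text.
check_json_xs() {
    perl -MJSON::XS -e 'print $INC{"JSON/XS.pm"}, "\n"'   # which copy loaded
    perl -V:version                                        # the running perl
    perl -e 'print join("\n", @INC), "\n"'                 # module search order
}
```

Since the failing path sits under /usr/lib/thruk/perl5, Thruk's bundled (perl 5.14-built) copy appears to shadow Debian's libjson-xs-perl, so as Florian notes downthread, a vendor rebuild against 5.18 is the real fix rather than anything on the local system.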
Re: JSON::XS error
Okay. So how do you rebuild the module? On Wed, Oct 9, 2013 at 6:22 AM, Florian Ernst florian_er...@gmx.net wrote: Hello all, On Wed, Oct 09, 2013 at 05:16:27AM -0400, Brad Alexander wrote: [...] This is on an i386 sid container (openvz) running on proxmox-ve (wheezy). When I try to install the thruk package (thruk_1.76-3_debian8_i386.deb), I get the following error message: Perl API version v5.14.0 of JSON::XS does not match v5.18.0 at [...] I have two copies of JSON/XS.pm on the system, one from the thruk package and the other from libjson-xs-perl. On my test system these both merely differ in perldoc. They don't need to be updated. Can anyone suggest how to get past this error? Consol needs to update the packages they provide for Jessie for Perl API v5.18.0. Most probably a rebuild will suffice, but that needs to be checked. I'm afraid there isn't much you can do at the moment but wait / prod Consol. Oh, and I'd refrain from needlessly installing CPAN packages. HTH, Flo -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20131009102208.ga26...@fernst.no-ip.org
Re: difference Debian, solaris, freebsd
Solaris is not open source, it was created by Sun Microsystems, and it is now owned by Oracle...And all the implied baggage that entails. Oracle is not terribly friendly to open source or free software, hence their stance on OpenOffice.org, and mysql. They allowed OOO to languish to the point of driving developers away from it, which is why it was forked into LibreOffice. They finally washed their hands of it and gave it to the Apache foundation. They have gone out of their way to obfuscate security patches in mysql, and their first action with Solaris was to eliminate the free versions, causing another fork. There are, as someone stated, projects like Illumos, which are forks of the last free version of Solaris. In operation, Solaris has always been slower than Linux, even on native Sparc hardware. Many things in the OS are either crufty non-GNU tools, such as tar, which lacks many of the options that GNU tar has (though there are sites like sunfreeware.com), or they are different for the sake of being different. Like other commercial unixes, they had to do things differently to make them unique, so patching is much more painful than a Debian or RedHat or FreeBSD box. FreeBSD has, arguably, a better package system in the ports tree. Ports is/can be configured to do source-based installs of applications. It also has ZFS, which is arguably the best filesystem available, as long as you have tons of memory. I don't have a lot of experience with FreeBSD, though I am starting to experiment with it. FreeBSD is also open source, though not GPL. It uses the BSD license, which basically states that you can do anything you want to with the software. Personally, I would either stick with Linux or try FreeBSD. And I managed to do this entire email without calling it Slowaris :) On Thu, Aug 29, 2013 at 9:15 PM, Rob Owens row...@ptd.net wrote: On Thu, Aug 29, 2013 at 06:15:32PM +0500, Muhammad Yousuf Khan wrote: what are the major differences btw the three OS. 
Debian, Solaris, FreeBSD. I know some commands change and stuff, but architecture-wise? Like Unix is proprietary and FreeBSD is not, blah blah. But why should one choose Debian or FreeBSD over the others? I am a big fan of Debian and I have been using it for years; I have no doubt about its stability and performance, it is rock solid. Then what is the reason people might be willing to use Debian over FreeBSD, and vice versa, since both are free and stable (FreeBSD being a Unix type)? All major server applications like samba, postfix etc. are available in both. I can't really compare Debian to Solaris or Freebsd because I don't have much experience with them. But one reason I chose Debian over other Linux distributions is because of the number of packages available, which means I don't have to compile much software, if any. I suspect Debian has more packages available than Solaris or Freebsd do, but I'm not sure. -Rob
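To make the ports remark above concrete, a source-based install on FreeBSD looks like this (the port path is illustrative; run as root on a FreeBSD box, not on Debian):

```shell
# Build and install a port from source, choosing options interactively:
cd /usr/ports/net/samba36   # pick the port's category/name
make config                 # select compile-time options
make install clean          # fetch source, build, install, tidy up

# Binary packages are also available if you don't want to compile:
pkg install samba36
```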
Re: 32-bit problems with nvidia packages
Hi Hans, Is it possible that you have some version creep? I had this on my multiarch sid system. There was a bug which made all of the VTs disappear. The quick fix was to upgrade a few of the packages (including nvidia-glx, iirc) to the version from experimental. Well, a few weeks ago, I upgraded and it broke several things, including my cisco anyconnect client, which I only had in 32bit. Downgrading everything to the same version (304.88 in sid, which I believe is supposed to be a long-term support release of that driver) fixed things quite handily. HTH, --b On Tue, Aug 13, 2013 at 11:11 AM, Hans-J. Ullrich hans.ullr...@loop.de wrote: Hello list, I need to use some 32-bit applications on my amd64 system. These are 3d-accelerated. But it looks like there is a bug in the packages. When I for example start googleearth, it starts, but I cannot see the planet. This problem exists since the change to multiarch. However, when I install the installer from the Nvidia site (NVidia-bla*.run), and say to install the 32-bit libs, too, during that install process, everything is working fine. I could do so and not use the debian packages, but I think you might want to get this fixed. Sadly the latest Nvidia modules cannot be built with kernel 3.10 (I mean now those from the NVidia-bla*.run installer). The debian nvidia-kernel module of course can be built (using dkms). I think I did not miss any libs, these are installed: dpkg --get-selections | grep nvidia glx-alternative-nvidia install libgl1-nvidia-glx:amd64 install libgl1-nvidia-glx:i386 install libxvmcnvidia1:amd64 install nvidia-alternative install nvidia-driver install nvidia-glx install nvidia-installer-cleanup install nvidia-kernel-common install nvidia-kernel-dkms install nvidia-support install nvidia-vdpau-driver:amd64 install xserver-xorg-video-nvidia install And of course my system is multiarch. Just got problems with the nvidia packages. 
There is already a bug report related to this, but I can't remember the number, sorry. Can someone else either report if he has managed to run it on a 64-bit system, and how he did it? Or confirm my problem somehow? Would be nice, so I can look into what the reason for this behavior is. Thanks and best regards Hans -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/201308131711.26957.hans.ullr...@loop.de
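The downgrade-to-one-version approach described above can be done with apt's pkg=version syntax; the package list below is illustrative, so adjust it to whatever `dpkg -l '*nvidia*'` shows on the system:

```shell
# Pull every piece of the NVIDIA stack back to the same sid version:
apt-get install nvidia-glx=304.88-1 nvidia-kernel-dkms=304.88-1 \
    libgl1-nvidia-glx:amd64=304.88-1 libgl1-nvidia-glx:i386=304.88-1 \
    xserver-xorg-video-nvidia=304.88-1

# Optionally hold them so the next upgrade doesn't mix versions again:
apt-mark hold nvidia-glx nvidia-kernel-dkms xserver-xorg-video-nvidia
```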
Re: certificates error with FF23
I am seeing the same issue on my internal domain. Since these certificates are not directly exposed to the internet, I can't point the ssltest site to my certificate. Is there some way to get firefox to divulge what is giving it heartburn, or else an openssl incantation that can be made to do so? Thanks, --b On Fri, Aug 9, 2013 at 3:06 PM, Vincenzo Ampolo vincenzo.amp...@gmail.com wrote: Thank you David. I'll try to do the same. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/loom.20130809t210620...@post.gmane.org
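For reference, openssl can dump what an internal server actually presents, which at least exposes the chain, key size, and signature algorithm (weak signatures and short keys are commonly reported reasons for newer Firefox releases to balk). The host name below is a placeholder:

```shell
# Show the full certificate chain the internal server sends:
openssl s_client -connect host.internal.example:443 -showcerts </dev/null

# Decode the leaf certificate; look at key size and signature algorithm:
openssl s_client -connect host.internal.example:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text \
    | egrep 'Signature Algorithm|Public-Key'
```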
roundcube-plugins
Is anyone running roundcube? I upgraded this past weekend (08/04), and ever since then, I have been getting these messages to /var/log/dpkg.log every 30 minutes: 2013-08-08 07:32:33 startup packages remove 2013-08-08 07:32:33 status installed roundcube-plugins-extra:all 0.7-20120110 2013-08-08 07:32:33 remove roundcube-plugins-extra:all 0.7-20120110 none 2013-08-08 07:32:33 status half-configured roundcube-plugins-extra:all 0.7-20120110 2013-08-08 07:32:33 status half-installed roundcube-plugins-extra:all 0.7-20120110 2013-08-08 07:32:33 status config-files roundcube-plugins-extra:all 0.7-20120110 2013-08-08 07:32:33 status config-files roundcube-plugins-extra:all 0.7-20120110 2013-08-08 07:32:34 startup archives unpack 2013-08-08 07:32:35 install roundcube-plugins:all 0.9.2-2 0.9.2-2 2013-08-08 07:32:35 status half-installed roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:35 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:35 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 startup packages configure 2013-08-08 07:32:37 configure roundcube-plugins:all 0.9.2-2 none 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:37 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:38 status unpacked roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:38 status half-configured roundcube-plugins:all 0.9.2-2 2013-08-08 07:32:38 status installed roundcube-plugins:all 0.9.2-2 I can't find why it 
is happening with roundcube-plugins. It showed similar behavior with roundcube-plugins-extra, but in spite of the bug report saying that both could be installed together, it uninstalled -extra when I installed -plugins. Any ideas? thanks, --b
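A dpkg run every 30 minutes on the dot points at cron or a configuration-management agent rather than at roundcube itself. These checks (plain grep, nothing package-specific) may help locate the trigger:

```shell
# Anything scheduled that calls apt or dpkg?
grep -rl -e dpkg -e apt-get /etc/cron.d /etc/cron.hourly /var/spool/cron 2>/dev/null
crontab -l

# A config-management agent is the usual suspect for periodic installs:
dpkg -l puppet chef cfengine2 2>/dev/null | grep '^ii'
```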
Re: roundcube-plugins
That is the usual behavior, but in my case, installing one uninstalls the other. According to Bug 714135 (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=714135), this was marked as fixed, but I still get the install-uninstall behavior. And I am still getting it trying to install and uninstall whichever one is installed. On Thu, Aug 8, 2013 at 9:21 AM, Darac Marjal mailingl...@darac.org.uk wrote: On Thu, Aug 08, 2013 at 09:16:46AM -0400, Brad Alexander wrote: Is anyone running roundcube? I upgraded this past weekend (08/04), and ever since then, I have been getting these messages to /var/log/dpkg.log every 30 minutes: Both roundcube-plugins (0.9) and roundcube-plugins-extra (0.7) provide the zipdownload plugin. If you try to install both, you'll get a file conflict. You can either wait for roundcube-plugins-extra 0.9 to arrive or you can force the overwrite of roundcube-plugins. I can't find why it is happening with roundcube-plugins. It showed similar behavior with roundcube-plugins-extra, but in spite of the bug report saying that both could be installed together, it uninstalled -extra when I installed -plugins. Any ideas? thanks, --b
Re: roundcube-plugins
Okay, so any idea why dpkg is doing the half-installed/unpacked/configured issue every 30 minutes? On Thu, Aug 8, 2013 at 9:50 AM, Darac Marjal mailingl...@darac.org.uk wrote: On Thu, Aug 08, 2013 at 09:36:43AM -0400, Brad Alexander wrote: That is the usual behavior, but in my case, installing one uninstalls the other. According to [1]Bug 714135, this was marked as fixed, but I still get the install-uninstall behavior. Ah, yes. Roundcube-plugins 0.9.2-2 breaks *and replaces* roundcube-plugins-extra = 0.7-20120110. This means the two are incompatible. So you don't get a file-level conflict any more, you get a package-level conflict instead. That is, r-p 0.9.2-1 and r-p-e 0.7-20120110 were co-installable, but you'd get a Trying to overwrite ... which is also in package ... error. Now, you CAN'T install both at the same time. This is fine for the main roundcube packages: installing r-p will remove the old r-p-e package beforehand. But if you want any of the OTHER plugins from r-p-e (personally I want the fail2ban plugin and don't much care about the zipdownload plugin), then you need to wait for an updated r-p-e package to become available. I don't know if there IS going to be an updated roundcube-plugins-extra, but for now, I'm holding back at 0.9.2-1 (having forced the overwrite). But, hey, that's Sid for you :) And I am still getting it trying to install and uninstall whichever one is installed. On Thu, Aug 8, 2013 at 9:21 AM, Darac Marjal [2] mailingl...@darac.org.uk wrote: On Thu, Aug 08, 2013 at 09:16:46AM -0400, Brad Alexander wrote: Is anyone running roundcube? I upgraded this past weekend (08/04), and ever since then, I have been getting these messages to /var/log/dpkg.log every 30 minutes: Both roundcube-plugins (0.9) and roundcube-plugins-extra (0.7) provide the zipdownload plugin. If you try to install both, you'll get a file conflict. You can either wait for roundcube-plugins-extra 0.9 to arrive or you can force the overwrite of roundcube-plugins. 
I can't find why it is happening with roundcube-plugins. It showed similar with roundcube-plugins-extra, but in spite of the bug report saying that both could be installed together, it uninstalled -extra when I installed -plugins. Any ideas? thanks, --b References Visible links 1. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=714135 2. mailto:mailingl...@darac.org.uk
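If you land on a combination that works, apt can keep it from being churned on the next upgrade. The version number is the one from this thread; holding is a stopgap, not a fix for the underlying conflict:

```shell
# Install the known-good version explicitly, then freeze it:
apt-get install roundcube-plugins=0.9.2-1
apt-mark hold roundcube-plugins
# (equivalent: echo "roundcube-plugins hold" | dpkg --set-selections)
```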
Re: tntnet
It could be tntnet: # apt-cache search tntnet tntnet - modular, multithreaded web application server for C++ Having said that, while I am not a developer, I have never run tntnet on any of my boxes. --b On Tue, Jul 30, 2013 at 5:37 PM, Wayne Topa linux...@gmail.com wrote: On 07/30/2013 04:55 PM, ChadDavis wrote: I've noticed the tntnet is running on my box. I'm on wheezy. I'd like to turn it off, at the least. But I wonder why it's fired up in the first place. I didn't install it, unless by accident. How might I determine if something else is using it? tntnet? Was that supposed to read telnet? If you meant telnet, see the man page for apt-cache and check for depends and rdepends. If you did mean tntnet I don't think it came from Debian so I would be careful using matches around that box. ;_) -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/51f83226.2070...@gmail.com
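If tntnet really is running, these standard checks should show who started it and what pulled it in (nothing here is tntnet-specific):

```shell
# Is it running, and on which port?
ps aux | grep '[t]ntnet'
netstat -tlnp | grep tntnet      # or: ss -tlnp

# What installed it or depends on it?
aptitude why tntnet
apt-cache rdepends --installed tntnet

# Stop it and keep it from starting at boot (wheezy/sysvinit):
service tntnet stop
update-rc.d tntnet disable
```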
libGL.so.1 and nvidia drivers
I'm using the nvidia drivers from the repos on a sid machine. They were just upgraded to the long-term 319.32-1 from the repos. However, I have cisco's anyconnect, and when I launch it, I get $ /opt/cisco/anyconnect/bin/vpnui /opt/cisco/anyconnect/bin/vpnui: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory However, the library is installed (part of libgl1-nvidia-glx), even though it does not appear in ldconfig -v, and through a whole series of /etc/alternatives symlinks, /usr/lib/x86_64-linux-gnu/libGL.so.1 -> /etc/alternatives/glx--libGL.so.1-x86_64-linux-gnu /etc/alternatives/glx--libGL.so.1-x86_64-linux-gnu -> /usr/lib/x86_64-linux-gnu/nvidia/libGL.so.1 /usr/lib/x86_64-linux-gnu/nvidia/libGL.so.1 -> /etc/alternatives/nvidia--libGL.so.1-x86_64-linux-gnu /etc/alternatives/nvidia--libGL.so.1-x86_64-linux-gnu -> /usr/lib/x86_64-linux-gnu/nvidia/current/libGL.so.1 /usr/lib/x86_64-linux-gnu/nvidia/current/libGL.so.1 -> libGL.so.319.32 -rw-r--r-- 1 root root 1144680 Jun 19 17:55 /usr/lib/x86_64-linux-gnu/nvidia/current/libGL.so.319.32 So what is it I'm missing here? Why is libGL.so.1 not in ldconfig? ldconfig -v | grep GL /sbin/ldconfig.real: Can't stat /lib/i486-linux-gnu: No such file or directory /sbin/ldconfig.real: Can't stat /usr/lib/i486-linux-gnu: No such file or directory /sbin/ldconfig.real: Path `/lib/x86_64-linux-gnu' given more than once /sbin/ldconfig.real: Path `/usr/lib/x86_64-linux-gnu' given more than once libEGL.so.1 -> libEGL.so.1.0.0 libGLEWmx.so.1.7 -> libGLEWmx.so.1.7.0 libGLEW.so.1.7 -> libGLEW.so.1.7.0 libGLU.so.1 -> libGLU.so.1.3.1 libQtOpenGL.so.4 -> libQtOpenGL.so.4.8.5 libEGL.so.1 -> libEGL.so.1.0.0 I need to get the vpn client running. Can someone shed some light? Thanks, --b
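Since the alternatives chain itself looks intact, the next things worth checking are whether the dynamic linker can resolve the library for this particular binary, and whether the cache is simply stale; everything below is standard glibc tooling:

```shell
# Does the loader find libGL for the binary in question?
ldd /opt/cisco/anyconnect/bin/vpnui | grep -i libgl

# If vpnui turns out to be a 32-bit binary, it needs the i386 libGL,
# not the amd64 copy:
file /opt/cisco/anyconnect/bin/vpnui

# Is the directory holding the symlink on the linker's search path?
cat /etc/ld.so.conf /etc/ld.so.conf.d/*.conf

# Rebuild the cache and check whether libGL.so.1 is in it now:
ldconfig
ldconfig -p | grep 'libGL\.so\.1'
```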
Re: Backup/Restore software?
I back up 20 or so hosts and have about the same story as Gary. As with any backup solution, I do spot-check backups on occasion, just to make sure that in your moment of need, the files are really there. :) I use the default location of /var/lib/backuppc as my default location for my file store as well. I did have it pointed elsewhere, but there are enough extra scripts out there that get heartburn if it is not there, that it is easier to mount the pool in the default location than to go in and edit every script. Another thing I do is to change my config.pl to use a normal user for backups, then give sudo access. I'm not fond of the backup box having unfettered root access to everything on my network. --b On Sat, Jul 13, 2013 at 2:01 PM, Gary Roach gary719_li...@verizon.netwrote: On 07/12/2013 07:52 AM, Rob Owens wrote: - Original Message - From: David Guntnerda...@guntner.com I've been religiously backing up my Windows machine for years with a program called Acronis True Image. It works well, lets me backup my system to a second hard drive in the computer, and will do a weekly full backup and daily incremental backups, cleaning up older backup chains and so on. My Linux machine (Debian 6.0.7 at the moment, but planning on updating to Wheezy soon), on the other hand, has gone far too long without any real backup protection. I'd like to rectify that if I can. :-) Is there a Linux backup package that will do pretty much what I described above? I want to be able to set it and forget it so it just runs every night on its own and that way I have about a week or two's worth of backups to fall back on. I need it to be able to do a full restore in case of a disaster as well as being able to restore selected files/directories in case of a oh why did I rm *that*? moment. :-) I use backuppc and I really like it. 
It's especially good when you're backing up multiple machines, because it does file pooling -- it will only save 1 copy of a file, no matter how many machines or directories it appears in. (It uses hard links to achieve this). It also does compression. While it's not super-easy to set up, it's got a web interface for managing everything. You just need to learn a little bit about how it works, because there are lots of options. In another post you stated that you wanted to be able to restore an entire system. Just keep in mind that there are things like /dev, /proc, and the mbr that you will need to work around. It's not quite as simple as backing up everything and then restoring everything. My preferred recovery method is to install the OS from scratch, then install all of my packages using dpkg --set-selections mypackagelist (see this page: http://askubuntu.com/questions/101931/restoring-all-data-and-dependencies-from-dpkg-set-selections ) Then restore all my config files, /usr/local, /home, and so-on. -Rob I'm surprised that it took so long for someone to mention backuppc. I've been using it for some time and the biggest problem is forgetting it's there. I set mine up to backup 3 systems, all debian wheezy, and used rsyncd as the transport agent. You can include your windows system as well. One of the trickiest parts is telling backuppc where to store the data. The best scheme is to set up backuppc to store the backup data in /var/backuppc and then mount your backup disk to /var/backuppc. If you do this don't do what I did and forget to exclude /var/backuppc from the backup list. You get very strange results. Gary R. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? 
Contact listmas...@lists.debian.org Archive: http://lists.debian.org/51e19606.5080...@verizon.net
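The "normal user plus sudo" arrangement mentioned above can be sketched as follows; the user name is illustrative, and $Conf{RsyncClientCmd} is the relevant knob in a BackupPC 3.x config.pl (adapt to taste):

```shell
# On each client: let the backup user run only rsync as root.
# Contents of /etc/sudoers.d/backuppc (mode 0440):
#   backup ALL=(root) NOPASSWD: /usr/bin/rsync

# On the BackupPC server, in the per-host config.pl, log in as the
# unprivileged user and invoke rsync via sudo instead of sshing in
# as root:
#   $Conf{RsyncClientCmd} = '$sshPath -q -x -l backup $host sudo $rsyncPath $argList+';
```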
Encrypted drive
I'm going to be adding a 3TB drive to my fileserver, but I want to use LUKS encryption. The fileserver is kinda long in the tooth, a 2.8GHz P4 with 2GB of RAM. The machine has two drive slots, currently with an 80GB and a 1.5TB drive. What I was considering is using a 16GB thumb drive for the OS, then encrypting and using the 1.5TB and 3TB for swap, /var, /tmp, and the rest for data. Toward that end, I have a few questions. * Is using a thumb drive for / and /usr a bad idea? Would it be better to set up the 1.5 TB with two VGs, one for the OS and one for data? * Writing random data to the hard drive is going to take a *lot* of time. I built a P4 box a while back with a 500GB drive, and as I recall, it took somewhere around 40 hours to write random data to that drive...A 3TB will take something over a week by my estimation. I was considering processing it on my Quad-core AMD machine, but will that be any faster? I'm guessing it will, since the bus speed is faster. * If you have already randomized the drive and written encrypted data to it (e.g. on the faster machine), is there a way to tell the installer that you want an encrypted partition, but don't write random data to it? Any thoughts would be welcome. Thanks, --b
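On the randomize-the-disk question: reading /dev/urandom is usually the bottleneck, not the disk. A common trick is to write zeros through a throwaway dm-crypt plain mapping, so the kernel's AES produces the "random" stream at near-disk speed. The device name is illustrative, and this destroys everything on it:

```shell
# Map the raw disk with a one-off random key:
cryptsetup open --type plain --cipher aes-xts-plain64 \
    --key-file /dev/urandom /dev/sdX wipe

# Zeros in; ciphertext (indistinguishable from random) lands on disk:
dd if=/dev/zero of=/dev/mapper/wipe bs=1M

# Tear the mapping down; the key is discarded, the data unrecoverable:
cryptsetup close wipe
```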
Re: Debian/Linux tutorials.
If you are looking for some more targeted but generic tutorials: The Geek Stuff: http://www.thegeekstuff.com nixcraft: http://www.cyberciti.biz/faq/ HowtoGeek: http://www.howtogeek.com Howto Forge: http://www.howtoforge.com There is a wealth of information on those sites about Linux, other *nixes, and the tools that drive them. --b On Thu, Jun 6, 2013 at 3:04 PM, Lisi Reisz lisi.re...@gmail.com wrote: On Thursday 06 June 2013 19:10:24 Tixy wrote: On Thu, 2013-06-06 at 14:11 +0300, atar wrote: I wanted to know please where can I enrich my knowledge about Linux at general and especially about Debian The Debian Handbook may be useful. It's available on a running Debian system by installing the 'debian-handbook' package, or online as a free download at http://debian-handbook.info/ There are also the IBM Linux tutorials: http://www.ibm.com/developerworks/linux/ Lisi -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/201306062004.11747.lisi.re...@gmail.com
Installing libgtk2.0-0:i386 on amd64 system
Hello, As the subject says, I'm having problems installing libgtk2.0-0:i386 on my amd64 sid system. The problem is that I am using the cisco anyconnect client, which requires the i386 version, but when I attempt to install it (the amd64 version is installed), I get a bunch of dependency issues: # apt-get install libgtk2.0-0:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libgtk2.0-0:i386 : Depends: libcups2:i386 (= 1.4.0) but it is not going to be installed Depends: libgssapi-krb5-2:i386 (= 1.6.dfsg.2) but it is not going to be installed Depends: libk5crypto3:i386 (= 1.6.dfsg.2) but it is not going to be installed Depends: libkrb5-3:i386 (= 1.6.dfsg.2) but it is not going to be installed Recommends: hicolor-icon-theme:i386 but it is not installable E: Unable to correct problems, you have held broken packages. No packages are held on this machine. I tried downloading the package and installing through dpkg -i (though I didn't follow through with it), but it wanted to uninstall several packages: The following packages will be REMOVED: krb5-multidev krb5-user libcups2-dev libgtk2.0-0:i386 libkadm5clnt-mit7 libkadm5clnt-mit8 libkadm5srv-mit7 libkadm5srv-mit8 libkdb5-4 libkrb5-dev the -dev packages are not so much of an issue, but we use Kerberos at work, so I can't afford to have that removed. Got suggestions on how to get this lib to work? thanks, --b
[Solved] Installing libgtk2.0-0:i386 on amd64 system
Was able to get this going by pulling the package from experimental... Thanks all, --b On Thu, Jun 6, 2013 at 8:26 AM, Brad Alexander stor...@gmail.com wrote: Hello, As the subject says, I'm having problems installing libgtk2.0-0:i386 on my amd64 sid system. The problem is that I am using the cisco anyconnect client, which requires the i386 version, but when I attempt to install at one (the amd64 version is installed), I get a bunch of dependency issues: # apt-get install libgtk2.0-0:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libgtk2.0-0:i386 : Depends: libcups2:i386 (= 1.4.0) but it is not going to be installed Depends: libgssapi-krb5-2:i386 (= 1.6.dfsg.2) but it is not going to be installed Depends: libk5crypto3:i386 (= 1.6.dfsg.2) but it is not going to be installed Depends: libkrb5-3:i386 (= 1.6.dfsg.2) but it is not going to be installed Recommends: hicolor-icon-theme:i386 but it is not installable E: Unable to correct problems, you have held broken packages. No packages are held on thei machine. I tried downloading the package and installing through dpkg -i (though I didn't follow through with it), but it wanted to uninstall several packages: The following packages will be REMOVED: krb5-multidev krb5-user libcups2-dev libgtk2.0-0:i386 libkadm5clnt-mit7 libkadm5clnt-mit8 libkadm5srv-mit7 libkadm5srv-mit8 libkdb5-4 libkrb5-dev the -dev packages are not so much of an issue, but we use Kerberos at work, so I can't afford to have that removed. Got suggestions on how to get this lib to work? thanks, --b
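For anyone else hitting the same dependency wall: before reaching for experimental, it is worth confirming the baseline multiarch setup, i.e. that the foreign architecture is enabled and its package lists are current:

```shell
dpkg --print-foreign-architectures   # should list i386
dpkg --add-architecture i386         # if it doesn't
apt-get update
apt-get install libgtk2.0-0:i386
```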
roundcube blank screen after apache upgrade
Hi all, I am having a problem with my roundcube installation since I upgraded my sid box 2 days ago. It's probably a simple fix, but I'm hoping some web guru can help me find it. Apache2 was upgraded from 2.2.3 to 2.4.4, and apparently there were a goodly number of changes. I suspect this is another of those, but after some searching, I have been unable to find the cause. The first issue that I learned about was the change from /etc/apache2/conf.d (which is where the roundcube config lived) to the conf-[available|enabled] scheme similar to mods and sites. I figured this out because after the upgrade, the site was giving me a 404 error, could not find /roundcube. Once I found this problem, I started getting 403 Forbidden errors. The root cause of this turned out to be the change of syntax in the config files from Allow from all/Deny from all to Require all granted/Require all denied. I fixed these, so now, I'm getting a blank page, and nothing in the apache2 error log. Has anyone else upgraded to 2.4.4 and is running roundcube that may have an idea of the problem? Thanks, --b
Re: roundcube blank screen after apache upgrade
On Sat, Jun 1, 2013 at 6:20 PM, Jochen Spieker m...@well-adjusted.de wrote: Brad Alexander: I am having a problem with my roundcube installation since I upgraded my sid box 2 days ago. It's probably a simple fix, but I'm hoping some web guru can help me find it. Apache2 was upgraded from 2.2.3 to 2.4.4, and apparently there were a goodly number of changes. Yes. Apache 2.4 has an almost completely new config file format. I imagine the Apache config provided by Roundcube (you installed roundcube using apt?) has not been adapted yet. Probably not, since the version in jessie/sid is 0.7.2. Latest on the site is 0.9.1, and even experimental only has 0.8.6. There is a bug on this, #669804...They said they were going to wait until after the release of apache 2.4, so hopefully it will be soon. Once I found this problem, I started getting 403 Forbidden errors. The root cause of this turned out to be the change of syntax in the config files from Allow from all/Deny from all to Require all granted/Require all denied. I fixed these, so now, I'm getting a blank page, and nothing in the apache2 error log. Did you increase Roundcube's debug level in /etc/roundcube/main.inc.php? I did. Still nothing in the roundcube/errors log file nor in the apache2/error.log. --b J. -- I wish I had been aware enough to enjoy my time as a toddler. [Agree] [Disagree] http://www.slowlydownward.com/NODATA/data_enter2.html
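For anyone else hitting this after the 2.2 to 2.4 upgrade, the access-control migration and the new conf-enabled mechanism can be sketched like this (assuming the roundcube package's Apache snippet has been moved into /etc/apache2/conf-available/, as described above):

```shell
# Apache 2.2 directives:   Order allow,deny / Allow from all
# become in Apache 2.4:    Require all granted
# (or enable mod_access_compat to keep the old syntax temporarily):
a2enmod access_compat

# conf.d/ snippets now live in conf-available/ and must be enabled:
a2enconf roundcube
apache2ctl configtest && service apache2 reload

# A blank page with an empty Apache error log usually means PHP died
# quietly; check the PHP and roundcube logs too:
tail -n 50 /var/log/roundcube/errors /var/log/apache2/error.log
```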
Re: Kernel 3.8 + nvidia?
I was beaten to the punch. Before I could roll back to 304.88, 319.17 came out. I upgraded to it, and am now running with nvidia on 3.8.12. --b On Thu, May 16, 2013 at 7:47 PM, Brad Alexander stor...@gmail.com wrote: Had a chance to look into this this evening. Apparently the nvidia-smi nvidia-settings nvidia-kernel-source libnvidia-ml1:amd64 packages are still at 304.88 rather than 313.30. I guess I'm going to try rolling everything back to 304.88... On Thu, May 16, 2013 at 4:54 AM, Brad Alexander stor...@gmail.com wrote: Well, I upgraded, which installed a newer version of the nvidia drivers (313-30). So I gave them a spin on 3.8-1. Booted up on it, and did a dpkg-reconfigure nvidia-kernel-dkms. It built fine, but at the end, a message popped up saying nvidia: Running module version sanity check. Error! Module version 313.30 for nvidia.ko is not newer than what is already found in kernel 3.8-1-amd64 (313.30). You may override by specifying --force. And when I try to insert the module, I get: # modprobe nvidia [ 229.364270] Module len 8728526 truncated ERROR: could not insert nvidia: Exec format error Any ideas? --b On Wed, May 15, 2013 at 11:14 AM, Brad Alexander stor...@gmail.com wrote: In my case, possibly my upgrades from experimental. I think I have seen a couple of other threads where pulling previous updates from experimental has led to problems once the freeze ended. I'm going to downgrade to the nvidia drivers in sid and try 3.8 again. On Wed, May 15, 2013 at 2:46 AM, Mark Allums m...@allums.com wrote: On a Dell Precision M4600 laptop, I installed a clean Wheezy release, installed nvidia drivers along with the dkms packages that go with it. Because of network driver trouble, I took the kernel from sid. After that, everything works fine. I mean, video is fine. Networking still a problem. 
Here is my version information: $ dpkg -l | grep nvidia ii glx-alternative-nvidia 0.2.2 amd64 allows the selection of NVIDIA as GLX provider ii libgl1-nvidia-alternatives 304.88-1 amd64 transition libGL.so* diversions to glx-alternative-nvidia ii libgl1-nvidia-glx:amd64 304.88-1 amd64 NVIDIA binary OpenGL libraries ii libglx-nvidia-alternatives 304.88-1 amd64 transition libgl.so diversions to glx-alternative-nvidia ii libxvmcnvidia1:amd64 304.88-1 amd64 NVIDIA binary XvMC library ii nvidia-alternative 304.88-1 amd64 allows the selection of NVIDIA as GLX provider ii nvidia-glx 304.88-1 amd64 NVIDIA metapackage ii nvidia-installer-cleanup 20120630+3 amd64 Cleanup after driver installation with the nvidia-installer ii nvidia-kernel-common 20120630+3 amd64 NVIDIA binary kernel module support files ii nvidia-kernel-dkms 304.88-1 amd64 NVIDIA binary kernel module DKMS source ii nvidia-settings 304.88-1 amd64 Tool for configuring the NVIDIA graphics driver ii nvidia-support 20120630+3 amd64 NVIDIA binary graphics driver support files ii nvidia-vdpau-driver:amd64 304.88-1 amd64 NVIDIA vdpau driver ii xserver-xorg-video-nvidia 304.88-1 amd64 NVIDIA binary Xorg driver linux-image-3.8-1-amd64 3.8.12-1 amd64 Linux 3.8 for 64-bit PCs ii linux-image-amd64 3.8+47 amd64 Linux for 64-bit PCs (meta-package) Thanks. That's good to know. Only some folks are affected. I wonder why?
Re: Kernel 3.8 + nvidia?
Well, I upgraded, which installed a newer version of the nvidia drivers (313-30). So I gave them a spin on 3.8-1. Booted up on it, and did a dpkg-reconfigure nvidia-kernel-dkms. It built fine, but at the end, a message popped up saying nvidia: Running module version sanity check. Error! Module version 313.30 for nvidia.ko is not newer than what is already found in kernel 3.8-1-amd64 (313.30). You may override by specifying --force. And when I try to insert the module, I get: # modprobe nvidia [ 229.364270] Module len 8728526 truncated ERROR: could not insert nvidia: Exec format error Any ideas? --b On Wed, May 15, 2013 at 11:14 AM, Brad Alexander stor...@gmail.com wrote: In my case, possibly my upgrades from experimental. I think I have seen a couple of other threads where pulling previous updates from experimental has led to problems once the freeze ended. I'm going to downgrade to the nvidia drivers in sid and try 3.8 again. On Wed, May 15, 2013 at 2:46 AM, Mark Allums m...@allums.com wrote: On a Dell Precision M4600 laptop, I installed a clean Wheezy release, installed nvidia drivers along with the dkms packages that go with it. Because of network driver trouble, I took the kernel from sid. After that, everything works fine. I mean, video is fine. Networking still a problem. 
Re: Kernel 3.8 + nvidia?
Had a chance to look into this this evening. Apparently the nvidia-smi, nvidia-settings, nvidia-kernel-source, and libnvidia-ml1:amd64 packages are still at 304.88 rather than 313.30. I guess I'm going to try rolling everything back to 304.88...

On Thu, May 16, 2013 at 4:54 AM, Brad Alexander stor...@gmail.com wrote:
> Well, I upgraded, which installed a newer version of the nvidia drivers (313.30). So I gave them a spin on 3.8-1. Booted up on it, and did a dpkg-reconfigure nvidia-kernel-dkms. It built fine, but at the end a message popped up saying:
>
> nvidia: Running module version sanity check.
> Error! Module version 313.30 for nvidia.ko
> is not newer than what is already found in kernel 3.8-1-amd64 (313.30).
> You may override by specifying --force.
>
> And when I try to insert the module, I get:
>
> # modprobe nvidia
> [  229.364270] Module len 8728526 truncated
> ERROR: could not insert nvidia: Exec format error
>
> Any ideas? --b

On Wed, May 15, 2013 at 11:14 AM, Brad Alexander stor...@gmail.com wrote:
> In my case, possibly my upgrades from experimental. I think I have seen a couple of other threads where pulling previous updates from experimental has led to problems once the freeze ended. I'm going to downgrade to the nvidia drivers in sid and try 3.8 again.

On Wed, May 15, 2013 at 2:46 AM, Mark Allums m...@allums.com wrote:
> On a Dell Precision M4600 laptop, I installed a clean Wheezy release, installed nvidia drivers along with the dkms packages that go with them. Because of network driver trouble, I took the kernel from sid. After that, everything works fine. I mean, video is fine. Networking still a problem.
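For anyone hitting the same mixed-version state, a quick way to spot the stragglers (a sketch, not from the original thread) is to ask dpkg which nvidia-related packages are out of step:

```shell
# List installed nvidia-related packages whose version is NOT 313.30,
# i.e. the packages still sitting at 304.88. Run on the affected box.
dpkg-query -W -f '${Package} ${Version}\n' '*nvidia*' | grep -v '313\.30'
```

Any line this prints is a candidate for either upgrading to 313.30 or rolling the rest of the stack back down to match it.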
Re: Kernel 3.8 + nvidia?
In my case, possibly my upgrades from experimental. I think I have seen a couple of other threads where pulling previous updates from experimental has led to problems once the freeze ended. I'm going to downgrade to the nvidia drivers in sid and try 3.8 again.

On Wed, May 15, 2013 at 2:46 AM, Mark Allums m...@allums.com wrote:
> On a Dell Precision M4600 laptop, I installed a clean Wheezy release, installed nvidia drivers along with the dkms packages that go with them. Because of network driver trouble, I took the kernel from sid. After that, everything works fine. I mean, video is fine. Networking still a problem.
>
> Here is my version information: [same dpkg -l | grep nvidia listing quoted earlier in the thread; everything at 304.88-1]
>
> Thanks. That's good to know. Only some folks are affected. I wonder why?
Re: audit security
It depends on what you are looking for. You could set up Nessus (or nmap or something similar) to run active scans. Nessus has a (free) home feed, as well as a scheduling option. Another front end for it would be Seccubus (http://seccubus.com/).

Something lighter would be tiger, which emails you differences/changes. From the Debian description: Debian's TIGER incorporates new checks primarily oriented towards the Debian distribution, including md5sums checks of installed files, location of files not belonging to packages, checks of security advisories, and analysis of local listening processes.

You might also check out chkrootkit and rkhunter. --b

On Tue, May 14, 2013 at 3:25 PM, Pol Hallen de...@fuckaround.org wrote:
> Apart from subscribing to debian-security-announce?
> Hi, and thanks for your reply. I already subscribed to security-announce. The difference is that portaudit shows me only security holes from installed packages, in an email from each server. I've several servers and I can't remember which services (daemons, etc.) are installed. Pol

-- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/51928f9c.2080...@fuckaround.org
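As a sketch of the "lighter" route, a nightly cron entry could drive chkrootkit and rkhunter on each server (the schedule and file name here are hypothetical; the package names and flags are the standard Debian ones; tiger mails its own reports once installed):

```
# /etc/cron.d/nightly-audit  -- hypothetical schedule, adjust to taste.
# Output from these runs goes to root by the usual cron mail mechanism.
# m  h  dom mon dow  user  command
  30 2  *   *   *    root  /usr/sbin/chkrootkit -q
  45 2  *   *   *    root  /usr/bin/rkhunter --cronjob --report-warnings-only
```

With several servers, pointing root's mail alias at one mailbox gives the per-server email summary Pol is asking for.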
Re: Kernel 3.8 + nvidia?
Thanks, Mark, I'm glad it wasn't just me. I was lucky. I managed to boot back into 3.2.0, which is where I am now. Had to reinstall the nvidia driver, but all is good for now. --b

On Thu, May 9, 2013 at 10:36 PM, Mark Allums m...@allums.com wrote:
> I upgraded yesterday, and also installed 3.8. I was wondering, has anyone else run into issues with the nvidia drivers with this kernel? I realize I have a kind of franken-driver situation:
> [same dpkg -l | grep nvidia listing as in the original post: the 313.30-1 driver packages mixed with 304.88-2 support packages]
> ...but that was a result of the driver bug a couple of months ago which made the VTs go away, and the fix was to use the nvidia driver from experimental. Has anyone seen any issues with 3.8, or is it my driver craziness? Thanks, --b

The 3.8 kernel, designated -trunk in experimental, worked well. When they moved it to sid, stuff broke for a lot of people, including people who were already running it. It's gone from experimental now, so we can't easily roll back. You can consider reinstalling the nvidia driver, but it didn't work for me.

Is nvidia really the problem, or is it the usb driver? The usb functionality broke for me, and I can't log in using lightdm, because my keyboard and mouse are usb. ssh is not running, so logins from the network are out. Hitting the power button for a soft shutdown reset the computer instead of performing a shutdown, and I lost a RAID partition out of the deal.

I put that machine aside and used the enforced downtime as a reason to put together a new machine, and I will postpone upgrading to 3.8 until this gets straightened out. So I am urging everyone to exercise caution going to 3.8 for a while.
Sid: Kernel 3.8 + nvidia?
I upgraded yesterday, and also installed 3.8. I was wondering, has anyone else run into issues with the nvidia drivers with this kernel? I realize I have a kind of franken-driver situation:

ii  glx-alternative-nvidia           0.3.0       amd64  allows the selection of NVIDIA as GLX provider
ii  libgl1-nvidia-alternatives       304.88-2    amd64  transition libGL.so* diversions to glx-alternative-nvidia
ii  libgl1-nvidia-alternatives-ia32  304.88-2    amd64  simplifies replacing MESA libGL with GPU vendor libraries (32-bit)
ii  libgl1-nvidia-glx:amd64          313.30-1    amd64  NVIDIA binary OpenGL libraries
ii  libgl1-nvidia-glx:i386           313.30-1    i386   NVIDIA binary OpenGL libraries
ii  libglx-nvidia-alternatives       304.88-2    amd64  transition libgl.so diversions to glx-alternative-nvidia
ii  libnvidia-ml1:amd64              304.88-2    amd64  NVIDIA management library (NVML) runtime library
rc  libxvmcnvidia1:amd64             304.84-1    amd64  NVIDIA binary XvMC library
ii  nvidia-alternative               313.30-1    amd64  allows the selection of NVIDIA as GLX provider
ii  nvidia-glx                       313.30-1    amd64  NVIDIA metapackage
ii  nvidia-installer-cleanup         20130505+1  amd64  cleanup after driver installation with the nvidia-installer
ii  nvidia-kernel-common             20130505+1  amd64  NVIDIA binary kernel module support files
ii  nvidia-kernel-dkms               313.30-1    amd64  NVIDIA binary kernel module DKMS source
ii  nvidia-kernel-source             304.88-2    amd64  NVIDIA binary kernel module source
ii  nvidia-settings                  304.88-1    amd64  Tool for configuring the NVIDIA graphics driver
ii  nvidia-smi                       304.88-2    amd64  NVIDIA System Management Interface
ii  nvidia-support                   20130505+1  amd64  NVIDIA binary graphics driver support files
ii  nvidia-vdpau-driver:amd64        313.30-1    amd64  NVIDIA vdpau driver
ii  xserver-xorg-video-nvidia        313.30-1    amd64  NVIDIA binary Xorg driver

...but that was a result of the driver bug a couple of months ago which made the VTs go away, and the fix was to use the nvidia driver from experimental. Has anyone seen any issues with 3.8, or is it my driver craziness? Thanks, --b
Re: network problems
Not a solution, but might I also suggest installing/running apt-listbugs? It will go through the BTS during an upgrade. From the man page: apt-listbugs is a tool which retrieves bug reports from the Debian Bug Tracking System and lists them. In particular, it is intended to be invoked before each upgrade by apt, or other similar package managers, in order to check whether the upgrade/installation is safe. It's a nice way to thumbnail what might get broken in the upgrade, and you can hold packages and restart the upgrade. --b

On Wed, May 8, 2013 at 1:01 PM, Frank McCormick debianl...@videotron.ca wrote:
> On 05/08/2013 12:40 PM, Bob Proulx wrote:
>> Frank McCormick wrote:
>>> Booted up my Sid partition this morning and found the network failed to initialize. A message during boot said something to the effect auto lo had been declared twice in /etc/network/interfaces and then that the same file was unreadable.
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=707052
>> There were two different bugs. Here is the entire set: //snip//
>> Yes. Now that Wheezy has released, Sid is once again very Unstable. The last year that Sid has been frozen has made Sid relatively stable. But now the floodgates are open again and everyone is pushing changes, sometimes untested changes, into Sid again. For the next few months it will be exceptionally rough there as a year's worth of pending disruptive changes are pushed through. If you are using Sid then you must be able to track problems in the BTS and be able to use snapshot.debian.org to recover previous versions of packages and return to them as bugs appear. You might consider using Testing Jessie instead as it will be somewhat insulated from the thrash in Unstable Sid. Let the calamity begin! Bob
>
> Sounds like a good idea. I like running bleeding-edge software, but not if I have to spend a lot of time on the BTS :) Can I just make a simple change in my sources list to testing and wait for everything to catch up? I have run Sid for years and it *seems* that it wasn't this disruptive? (See my other message about apt-get wanting to purge gedit and rhythmbox etc.) -- Cheers, Frank
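Frank's "simple change" really is a one-liner; here is a sketch that rewrites the suite names on a scratch copy of sources.list so the edit can be eyeballed first (point SOURCES at /etc/apt/sources.list and run as root to do it for real; the mirror URL is just an example):

```shell
# Rewrite every sid/unstable suite name to testing in a sources.list.
SOURCES=$(mktemp)
printf 'deb http://ftp.debian.org/debian sid main\ndeb-src http://ftp.debian.org/debian sid main\n' > "$SOURCES"
# \b keeps the match on whole suite names only (GNU sed)
sed -i 's/\bsid\b/testing/g; s/\bunstable\b/testing/g' "$SOURCES"
cat "$SOURCES"
```

After the rewrite, apt-get update && apt-get dist-upgrade pulls the box over to testing; expect a large batch of changes the first time.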
Re: Debian full disk encryption
Concur with Bob. I have been using encrypted drives for many years now. The only differences for me are that I have a separate swap partition, but then you end up having to encrypt that separately, and cannot take advantage of putting swap inside of the LVM. My next encrypted build will remedy this. :) Also, I have never built on an SSD... but the procedure is sound. --b

On Sun, May 5, 2013 at 9:07 PM, Bob Proulx b...@proulx.com wrote:
> John Thoe wrote:
>> I am trying to set up full disk encryption for Debian. There are a lot of options available and I cannot choose which one to use. For starters, I am using a laptop with an SSD, so I read that using LUKS is not a good option since it disables TRIM. Anyway, I came across this video on YouTube: http://www.youtube.com/watch?v=Z9pn2PYbDdA that explains how to configure full disk encryption. Can anyone confirm if this is a good way of doing it? If not, can you point me to some good documentation for my case? There are too many and I can't decide which one to pick. I am running Wheezy.
> That video describes exactly how I always set up encrypted laptops, except I use lower-case names for the logical volume names. Been working for me for many years now. I can't really comment about trim support. Until recent kernels it wasn't supported. Prior to Wheezy there wasn't enough support to enable it. With Wheezy everything should be in Stable. And Wheezy has only released today. Prior to this time my encrypted SSD machines have been running Squeeze without trim support and have been working quite well regardless. Although I am sure that properly working trim support would enhance it with faster performance. Bob
Re: Debian in the sunshine?
I have had pretty good luck using my Nokia N900 in the sunshine. I have been known to read using FBReader on it, as well as using other apps. Now my preference is white text on a black background, but that seems to work out pretty well... --b

On Mon, May 6, 2013 at 12:07 PM, Hendrik Boom hend...@topoi.pooq.com wrote:
> I'm a long-time user of Debian, and also have an e-paper ebook reader. It occurs to me that something like a Debian laptop with an e-ink screen would be extremely useful for, say, sitting on a sunny back porch in the summer and programming -- a situation where the normal luminous screens are a complete washout. No, I don't expect to be able to watch videos on an e-ink screen. But the kind of (non)dynamism that occurs when editing code in emacs should be easily achievable. Anybody know of any hardware that could be used (or, if necessary, abused) for this purpose? Preferably with bog-standard Debian? -- hendrik
upgrading mysql - mariadb...A cleaner way?
Hi,

Over the past couple of weeks, I have converted a couple of boxes from mysql to mariadb, citing concerns with Oracle's irresponsible handling of mysql. (There was another no-authentication exploit fixed in the massive patch cluster last week.) I have come up with a method of upgrading from mysql 5.5 to mariadb 5.5, but it's kinda clunky and ugly, and I was wondering if anyone had a better way. This is the procedure I have developed over the two or three times I have done (or tried) it.

1. Set up /etc/apt/sources.list.d/mariadb.list:

   # MariaDB 5.5 repository list - created 2013-04-08 22:43 UTC
   # http://mariadb.org/mariadb/repositories/
   deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main
   deb-src http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main

   Note that this is from the mariadb site, as are the debs.

2. Set up /etc/apt/preferences.d/mariadb:

   Package: *
   Pin: origin ftp.osuosl.org
   Pin-Priority: 1000

   I'm sure the pinning is probably not optimal, but it worked in this case...

3. You then have to download the following debs:

   libmariadbclient18_5.5.30-mariadb1~wheezy_i386.deb
   libmariadbd-dev_5.5.30-mariadb1~wheezy_i386.deb
   libmysqlclient18_5.5.30-mariadb1~wheezy_i386.deb
   mariadb-common_5.5.30-mariadb1~wheezy_all.deb

   These debs have to be installed using dpkg, because for whatever reason the libmysqlclient18 deb from mariadb will not install via apt-get install, and I generally try not to force anything. So I download and dpkg -i them, then I am able to apt-get install the rest, in this particular case: libmariadbclient-dev libmariadbclient18 libmariadbd-dev libmysqlclient18 mariadb-client-5.5 mariadb-client-core-5.5 mariadb-common mariadb-server-5.5 mariadb-server-core-5.5

So does anyone see any way to make my installs less clunky? Thanks, --b
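Steps 1 and 2 at least can be collapsed into one small script. This sketch writes both files into a scratch directory so the result can be inspected first; on the real machine, set APT_DIR to /etc/apt and run it as root:

```shell
# Write the MariaDB repo list and the pin file in one go.
# APT_DIR points at a throwaway directory here for safety.
APT_DIR=$(mktemp -d)
mkdir -p "$APT_DIR/sources.list.d" "$APT_DIR/preferences.d"
cat > "$APT_DIR/sources.list.d/mariadb.list" <<'EOF'
# MariaDB 5.5 repository list - http://mariadb.org/mariadb/repositories/
deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main
deb-src http://ftp.osuosl.org/pub/mariadb/repo/5.5/debian wheezy main
EOF
cat > "$APT_DIR/preferences.d/mariadb" <<'EOF'
Package: *
Pin: origin ftp.osuosl.org
Pin-Priority: 1000
EOF
```

That still leaves the manual dpkg -i of the four debs from step 3, which seems to be the genuinely clunky part of the procedure.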
Re: what's your Debian uptime?
I have a box at work that has an uptime of: 12:32:00 up 1971 days, 18:32, 1 user, load average: 1.00, 1.00, 1.00

On Fri, Apr 26, 2013 at 12:10 PM, David Parker dpar...@utica.edu wrote:
> I have a box running Etch that hasn't been rebooted in 1,589 days:
>
> irp:~# uptime
>  12:09:06 up 1589 days, 18:23, 1 user, load average: 0.00, 0.01, 0.03
>
> I swear this is real. :-)
>
> On Sun, Apr 21, 2013 at 6:32 PM, Vincent Lefevre vinc...@vinc17.net wrote:
>> On 2013-04-20 19:24:00 -0600, Bob Proulx wrote:
>>> Vincent Lefevre wrote:
>>>> That's theory. In practice, old machines get no longer supported... I submitted a bug report (and a patch), but AFAIK the bug has never been fixed. I upgraded everything except the kernel, without being sure I could boot it again (udev incompatibilities...). That's why the machine was no longer rebooted.
>>> And if you get into a situation where the machine reboots whether you desire it or not? Power, cosmic ray hit, dead cooling fan, other?
>> It was a laptop, so power wasn't a problem. A hardware failure would have meant that the machine was probably dead anyway (at the last boot the laptop was already more than 8 years old). This is actually what happened a few months ago: strange noises from the disk and I/O errors...
>>> It happens. Even with UPS mains and redundant power supplies. Hardware doesn't last forever. Will it boot? If so then great. If not then you have a nasty problem to sort out and the machine is down until you do. I would rather know about it on my schedule rather than its schedule.
>> Even if there were a software problem, I wouldn't have wasted my time trying to fix it for a machine that was almost no longer used (mainly just for portability testing), in particular if the machine couldn't boot.
>>
>> -- Vincent Lefèvre vinc...@vinc17.net - Web: http://www.vinc17.net/ 100% accessible validated (X)HTML - Blog: http://www.vinc17.net/blog/ Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

-- Dave Parker, Systems Administrator, Utica College Integrated Information Technology Services, (315) 792-3229, Registered Linux User #408177
Re: Don't do that!
That is interesting. I have a similar setup on my workstation:

/dev/sda2  ext4  964532  59380  856156  7%  /boot

with the rest of the filesystems in an encrypted LVM container. I built (rebuilt) this machine a couple of years ago and have never had an issue... to include power failures where the machine did not power down gracefully. Could it have been a problem with your SSD, e.g. a bad spot, or could the initramfs have been corrupted on write? Do you have other kernels installed? (I usually keep, at a minimum, the current one and the last one.) --b

On Tue, Apr 23, 2013 at 11:52 AM, Hans-J. Ullrich hans.ullr...@loop.de wrote:
> Today I learnt this: Do NOT use ext4 for the /boot partition, where your kernel resides. I did this on my EEEPC to speed up boot, and today I got at boot the error message: initrd.img corrupt. My EEEPC has an SSD inside, and /usr, /home and /var are encrypted partitions. It took me hours and hours to fix this. First I tried ext2fs, with no success. I could run Trinity Rescue Kit from an SD card, and I created a chroot, but not everything was possible to do in the chroot. After lots of tries I got the solution:
>
> 1. I backed up all the content of /boot to another drive.
> 2. Booted with a live image and formatted /boot to ext2.
> 3. Restored /boot.
> 4. Edited /etc/fstab, removed the UUID of /boot, and removed discard,noatime.
> 5. Now I could boot again.
> 6. From the running system started update-initramfs -u.
> 7. Did dpkg-reconfigure linux-base, so I got the UUID in all necessary config files again.
> 8. For making all sure, did update-grub.
> 9. Final test: rebooted again, everything was OK.
>
> So NEVER, NEVER, NEVER use ext4 for /boot! Don't do it! (If I had read the manual, I would have known: ext4 and grub is still in an experimental state.) Best regards, Hans
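Hans's recovery steps can be sketched as a dry-run script. Every command is echoed rather than executed, since the device name and backup path are placeholders you would substitute for your own; drop the echo in run() to execute for real:

```shell
# Dry-run sketch of the /boot recovery procedure (steps 2-8 above).
# /dev/sdXN and /mnt/boot-backup are placeholders, not real paths.
run() { echo "+ $*"; }                   # swap echo for real execution
run mkfs.ext2 /dev/sdXN                  # reformat /boot as ext2
run cp -a /mnt/boot-backup/. /boot/      # restore the backed-up /boot
run update-initramfs -u                  # rebuild the initramfs
run dpkg-reconfigure linux-base          # refresh UUIDs in config files
run update-grub                          # regenerate grub.cfg
```

The fstab edit (step 4) stays manual, since which mount options to drop depends on the existing entry.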
Re: Don't do that!
That's really odd. I know that there used to be warnings about ext4 years ago, but I don't recall seeing them as far back as the squeeze release.

On Tue, Apr 23, 2013 at 2:38 PM, Hans-J. Ullrich hans.ullr...@loop.de wrote:
> On Tuesday, 23 April 2013, Brad Alexander wrote:
>> That is interesting. I have a similar setup on my workstation: /dev/sda2 ext4 964532 59380 856156 7% /boot, with the rest of the filesystems in an encrypted LVM container. I built (rebuilt) this machine a couple of years ago and have never had an issue... to include power failures where the machine did not power down gracefully. Could it have been a problem with your SSD, e.g. a bad spot, or could the initramfs have been corrupted on write?
> The initramfs was corrupted because of the filesystem.
>> Do you have other kernels installed? (I usually keep, at a minimum, the current one and the last one.)
> Yeah, my fault. I had two kernels, but some weeks ago I deleted one. However, with ext4 everything went fine -- until today. Happy hacking, Hans
Re: what's your Debian uptime?
I agree with Hans. For instance, I had a sid box back in the day which was my dhcp server (an old laptop). It was behind a firewall and not accessible from the internet. (I know, no security is 100%, but I have defense in depth.) Plus, I too had built a minimal kernel. In any case, my record is somewhere around 700 days, just short of 2 years. Then we had a power outage that burned through the UPS and the laptop battery...

On Wed, Apr 17, 2013 at 4:43 PM, Hans-J. Ullrich hans.ullr...@loop.de wrote:
> It is interesting. Whenever someone tells of a big uptime, the argument is: Your server cannot be secure! You have an old kernel! You MUST install/update the newest kernel and of course reboot. But this is not correct. For which reasons is a new kernel necessary?
>
> 1. If there are extreme changes in the environment (unsupported new hardware or major software changes)
> 2. Security issues
>
> But a kernel can stay a very, very long time. On machines where you do not change hardware or software (i.e. new filesystems like btrfs), an old kernel will work perfectly. Security issues which affect modules, but not the kernel itself, may not create the need for a new kernel. When people like me and others on this list are using a very small kernel, with minimal modules, and the security issues affect modules which are neither built nor installed, then there is no need to install a new kernel.
>
> So it is wrong to conclude: Hey, your uptime is high, therefore your host is insecure due to an old kernel. To say so is a big mistake! Just to clear things up. :) Anyway, let's have fun at hacking. Best regards, Hans
Re: What's Up With Debian.org?
On Mon, Apr 1, 2013 at 3:03 PM, David Guntner dav...@akamail.net wrote:
> Bret Busby grabbed a keyboard and wrote:
>> The problem is that the Debian web site appears to have been breached and thence compromised.
> And what evidence do you have that the website has been breached/compromised by anyone other than the Debian website maintainers themselves? There's a LONG history on the net of people mucking with their own websites (among doing other things) as part of an April Fools gag. I strongly suspect that's what has happened there.

Indeed. Like this: http://itsfoss.com/linus-torvalds-to-join-microsoft/

> Is that supposed to be funny?

For those of us with a sense of humor, yes. :-) Bret apparently has never been online on April 1st before. A *lot* of tech sites pull April Fools jokes. Slashdot used to be notorious for it, especially when Rob Malda was at the helm...
Laptop display smaller when not connected to external monitor
I'm having a bit of a strange problem. I'm running sid with KDE 4.8.4, and just received a new laptop at work. I've noticed something kind of odd that hopefully, someone can help with. The laptop in question is a Dell Latitude E6520, with a resolution of 1920x1080. When I have it at work, hooked up to the docking station with an external monitor (dual-monitor setup), the Konsoles in the laptop LCD are a normal size, for an approximately 80x24 display at an 8 point font. However, when the laptop is not hooked up to the docking station/2nd monitor, everything seems smaller. xdpyinfo and the KDE size and orientation display in system settings both say that the resolution is the same, however, I have to reduce the font in the Konsoles to a 6-point font to get the same screen real estate in the terminal. Is there a possible fix for this to keep the same resolution with and without the docking station? Thanks, --b
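A hedged first diagnostic (standard X utilities, nothing KDE-specific): since the reported resolution is unchanged, the likely suspect is the DPI X computes from the screen geometry, which can differ docked vs. undocked. Comparing the two states would confirm it:

```shell
# Run once while docked and once while undocked, then compare.
xdpyinfo | grep -B1 resolution    # screen dimensions and dots-per-inch
xrandr --query | head -n 5        # connected outputs and current modes
# Possible workaround (an assumption, untested here): pin the DPI so it
# cannot change between configurations, e.g.:
#   xrandr --dpi 96
```

KDE's own font settings also allow forcing a fixed DPI, which would have the same effect as the xrandr pin.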
newer kernels from experimental?
While it isn't quite getting long in the tooth, sid is still sporting the 3.2.x kernel. Now as I recall, Greg KH said that this would be the next long term support kernel, but I would like to play with some of the newer features from the later 3.x kernels from experimental, like f2fs and btrfs. I was wondering if anyone is running any of them, and if they are stable enough for day-to-day use. Thanks, --b
Fwd: newer kernels from experimental?
Sorry. Didn't check the reply field.

-- Forwarded message --
From: Brad Alexander stor...@gmail.com
Date: Wed, Mar 13, 2013 at 9:22 PM
Subject: Re: newer kernels from experimental?
To: g...@dalefamily.org

On Wed, Mar 13, 2013 at 9:12 PM, Gary Dale garyd...@rogers.com wrote:
> I wouldn't use any of the newer file systems until they've been around in use for a couple of years. You can use btrfs now, and I've heard that it's quite reliable, but it depends on how much you value the new features versus the risk of losing your files.

It's more for experimentation and familiarization. I'd like to play with ZFS as well.

> How good are your backups? How much time can you spare to restore a corrupted file system? How much do you need the new features?

I have a backuppc box in the basement. Daily incrementals, weekly fulls. Restoring a filesystem is actually quite simple.

> On the last point, some of the new file systems offer intriguing features, but in real life, I'm not likely to really need them. Nor do I see any real problems with Ext4 that would make me want to switch. Your situation may be different.

I'm not looking to switch filesystems, per se, but rather to play around with them. I have disk space to burn to experiment... I just don't want to jump to a crashy kernel (though honestly, for the past few years, that has been the exception rather than the rule -- years ago, I used to hand-roll all of my own kernels; now they work great out of the box). --b
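For anyone wanting to try this, one conventional way to cherry-pick a kernel from experimental without tracking the whole suite is a low apt pin plus a targeted install (the priority value here is a suggestion, not something from the thread):

```
# /etc/apt/preferences.d/experimental  (sketch; also requires an
# experimental line in sources.list). A priority below 100 means nothing
# is pulled from experimental unless explicitly requested with -t.
Package: *
Pin: release a=experimental
Pin-Priority: 90
```

With that in place, apt-get update followed by apt-get -t experimental install of the 3.8 -trunk image pulls just the kernel, while everything else keeps coming from sid.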
Re: 10 top myths of debian
On Fri, Mar 1, 2013 at 10:44 PM, Miles Fidelman mfidel...@meetinghouse.net wrote: Good point. And when you start talking security to the point of serious testing and configuration control, I believe there are very few distributions that are on the DoD approved product list. I've tried to stay out of this thread, but I feel that Ihave to comment. The first point I need to make is that security is a mindset. Anything can be made secure if you are willing to work at it and accept the trade-offs that are required to be that way. Just like a training Olympian cannot eat bacon sandwiches and still be ready to compete, an administrator or user cannot just install any piece of software whenever they want and hope it will stay secure. Take, for instance, Java... I have an adage that usability times security is a constant. The only truly secure system is one that is unplugged from the network, powered off, packed in concrete, then fired into the sun...But at that point, it isn't very usable, is it? However, if you *are* willing to work for it, you can secure anything. In 1995, the NSA granted Windows NT 3.5 an Orange Book C2 security certification, C2 being Controlled Access Protection. Now the caveats to this: * The tested machine had no network connectivity; * The tested machine had no floppy drive; * The tested machine had no CD-ROM; So the box could only run what was installed on it, and had no contact with the outside world. Now, having said that, you can take a page from MS here. Run the absolute minimal software needed to accomplish the task at hand. In this respect, servers are generally (and I *am* generalizing here) than workstations, because of the required functionality of workstations -- video drivers, games, internet communications, etc. Take, for instance, my workstation versus my firewall. My workstation has 2850 packages installed, while my firewall has a total of 369. That's nearly an order of magnitude more software that can be attack vectors. 
Additionally, especially if you have a home network, you can run security tools, both host-based tools on your machines and network scanners. Nessus (www.tenable.com) has a free Home Feed, you can run nmap against your machines, etc. Get to know how they behave; then you will notice when things change. Finally, keep regular backups. If the worst does happen, you can recover from it.

Security isn't a fire-and-forget solution. It is a constantly changing threat environment. You can't say "I installed distro/OS abc, I'm secure now." There are always people out there who want access to your machine, for varying reasons. --b

On the BSD side, OpenBSD (despite the name) focuses on security, and has a pretty good reputation for being pretty secure. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. Yogi Berra -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/513175ae.5080...@meetinghouse.net
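The workstation-versus-firewall package comparison above is easy to make concrete. Here is a minimal sketch (the file contents are made up for illustration; in practice you would generate each list with dpkg --get-selections on the respective host) that diffs two package lists to show the extra attack surface the larger install carries:

```shell
# Hypothetical package lists -- on real hosts, generate these with
# "dpkg --get-selections | awk '{print $1}' | sort" on each machine.
w=/tmp/workstation.list
f=/tmp/firewall.list
printf 'bash\ncoreutils\nnmap\nxserver-xorg\n' > "$w"
printf 'bash\ncoreutils\n' > "$f"

# Packages on the workstation but not on the firewall: each one is
# extra software that can serve as an attack vector.
extra=$(comm -23 "$w" "$f")
echo "$extra"
```

comm expects sorted input, which dpkg --get-selections already produces; the same diff also makes a handy checklist when trimming a new server build down to essentials.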
Failed deb build
Hi all, Hopefully someone here can help me. I haven't rolled a deb in years (a testament to Debian's repos), but now I need to. A user needs a newer version of torque. The repos have 2.4.16, and he needs 4.0.2. So, having not built one in forever, I did a ./configure; make and it compiled cleanly. I did an apt-get source for 2.4.16, pulled the 4.0.2 sources, then untarred the debian/ directory into the 4.0.2 sources. I had to delete the patches directory. Once done, I ran dch -i and updated the version, then did a debuild -us -uc. It compiled for a while, then just before building the binaries, I got

make[5]: Leaving directory `/home/storm/torque/torque-4.0.2/debian/build/without-x/src/lib'
make[4]: Leaving directory `/home/storm/torque/torque-4.0.2/debian/build/without-x/src/lib'
make[3]: Leaving directory `/home/storm/torque/torque-4.0.2/debian/build/without-x/src/lib'
make[2]: Leaving directory `/home/storm/torque/torque-4.0.2/debian/build/without-x/src'
make[1]: Leaving directory `/home/storm/torque/torque-4.0.2/debian/build/without-x'
rm -rf /home/storm/torque/torque-4.0.2/debian/torque-client/usr/lib
chrpath --delete `ls /home/storm/torque/torque-4.0.2/debian/torque-client/usr/bin/* | egrep -v '/(nqs2pbs|xpbs)' `
chrpath --delete /home/storm/torque/torque-4.0.2/debian/torque-client/usr/sbin/pbs_iff
open: No such file or directory
elf_open: Invalid argument
make: *** [install] Error 1
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2
debuild: fatal error at line 1350: dpkg-buildpackage -rfakeroot -D -us -uc -sa -i -I failed

One other thing I have noticed is that in /home/storm/torque/torque-4.0.2, it creates the following symlink: torque-4.0.2 -> /home/storm/torque/torque-4.0.2, which is something I don't recall seeing before. I'm not sure where to look to find what the error is telling me. Can someone point me in the right direction?
Thanks, --b
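One way to narrow this down is to rerun the failing chrpath step by hand. A minimal sketch -- the path comes from the build log above, and the idea that upstream 4.0.2 no longer installs pbs_iff at the location the 2.4.16 packaging expects is my guess, not confirmed:

```shell
# The build log shows chrpath dying with "open: No such file or directory"
# on this path, so first check whether the file was ever installed there.
dest=debian/torque-client/usr/sbin/pbs_iff
if [ -e "$dest" ]; then
    msg=$(chrpath --list "$dest")   # inspect its rpath before deleting it
else
    msg="missing: $dest -- upstream 4.0.2 may install it elsewhere; update debian/rules"
fi
echo "$msg"
```

If the file really is absent from the staged tree, the old debian/rules for 2.4.16 is simply pointing at the wrong install path for the new upstream layout.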
Re: Update hanged, now stuck in loop
Did you try apt-get -f install? I've used that as well, making careful note of what it is about to do (so I could reinstall anything it uninstalls to get itself sorted out). You could also try it with the --dry-run option. --b

On Thu, Feb 7, 2013 at 1:54 PM, Hans-J. Ullrich hans.ullr...@loop.de wrote: Hi folks, during the last upgrade, it got stuck while updating the package virtuoso-opensource-6.1. Now I am stuck in a loop. I cannot deinstall the package, as it wants to configure itself and then hangs, nor can I reinstall it, as I run into the same loop again. I also tried to download it again, but that was not possible either; my system believes it is already downloaded. However, I downloaded the package manually via FTP, so I have it here on my hard drive. When I want to install it, these messages appear:

LANG=C dpkg -i virtuoso-opensource-6.1_6.1.4+dfsg1-5_amd64.deb
(Reading database ... 420102 files and directories currently installed.)
Preparing to replace virtuoso-opensource-6.1 6.1.4+dfsg1-5 (using virtuoso-opensource-6.1_6.1.4+dfsg1-5_amd64.deb) ...
odbcinst: DSN removed (if it existed at all). ODBC_SYSTEM_DSN was used as the search path.
dpkg: warning: subprocess old pre-removal script returned error exit status 10
dpkg: trying script from the new package instead ...
odbcinst: DSN removed (if it existed at all). ODBC_SYSTEM_DSN was used as the search path.
dpkg: error processing virtuoso-opensource-6.1_6.1.4+dfsg1-5_amd64.deb (--install):
 subprocess new pre-removal script returned error exit status 10
* Starting Virtuoso OpenSource Edition 6.1 virtuoso-opensource-6.1 [ OK ]

and then it hangs. Any ideas? Or is this a bug? I am running debian/wheezy, amd64. Best regards Hans
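When a maintainer script itself is what hangs or fails, the usual escape is to move the offending script aside so dpkg can finish. This is a sketch only -- the package name comes from the thread, the live commands are left commented out, and you should read the script before disabling it, since it may do cleanup you need to perform by hand:

```shell
# Maintainer scripts live under /var/lib/dpkg/info/; the log above shows
# the pre-removal (prerm) script exiting with status 10.
pkg=virtuoso-opensource-6.1
prerm=/var/lib/dpkg/info/$pkg.prerm

# mv "$prerm" "$prerm.bak"   # disable the failing pre-removal script (as root)
# dpkg --remove "$pkg"       # removal can now complete
# apt-get -f install         # let apt clean up anything left half-configured

echo "would disable $prerm, then dpkg --remove $pkg"
```

After that, reinstalling the package from the manually downloaded .deb should run the fresh maintainer scripts rather than looping on the broken state.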
Re: [OT] computer security (online) training
On Thu, Feb 7, 2013 at 7:48 PM, Harry Putnam rea...@newsguy.com wrote: Sorry for taking advantage of the list a bit, but as happens pretty often, this list is more likely to provide useful info on the subject. I want to begin some training in computer security... training I can do online... and hopefully a hands-on approach.

Something I ran across, but haven't had a chance to play with yet, is http://exploit-exercises.com/ From the website: exploit-exercises.com provides a variety of virtual machines, documentation and challenges that can be used to learn about a variety of computer security issues such as privilege escalation, vulnerability analysis, exploit development, debugging, reverse engineering, and general cyber security issues. --b

I'm already an old man at 66 but would like to learn enough to get some kind of job involving comp security. I don't really need a job so far as having an income, since I'm already retired from the field construction boilermaker trade with a decent pension, but I have lots of interest in security and have found through my life that there is nothing like having a job in a field to really make you learn the ropes. Cutting to the chase... after googling around I see there are many, many computer security training sites and companies. I need a little guidance in paring them down, with my main criterion being hands-on training. So, any advice on this would be most welcome. Anyone who thinks it should go off list is welcome to email me: reader AT newsguy DOT com
[Possibly OT] Hauppauge WinTV card lost sound
I have a Hauppauge 44801 WinTV-GO in my wife's computer, plugged in to cable, so she can watch TV through tvtime on her sid machine. This past week, sound stopped in the tvtime app. Sound works in everything else, but not through the card. I tried plugging the speakers directly in to the line out on the TV card, and got silence. I've adjusted the sound levels in aumix. The odd thing is that this happened with an older version of the same card a few months ago, so I ordered a refurbished one to replace it, and now it is happening again with the replacement. Is there anything more I can do to troubleshoot the sound issues? Could this be an issue with PulseAudio? I have the following pulse packages installed:

dpkg -l | grep pulse
ii  gstreamer0.10-pulseaudio:amd64  0.10.31-3+nmu1  amd64  GStreamer plugin for PulseAudio
ii  libpulse-mainloop-glib0:amd64   2.0-6           amd64  PulseAudio client libraries (glib support)
ii  libpulse0:amd64                 2.0-6           amd64  PulseAudio client libraries
ii  pulseaudio                      2.0-6           amd64  PulseAudio sound server
ii  pulseaudio-module-x11           2.0-6           amd64  X11 module for PulseAudio sound server
ii  pulseaudio-utils                2.0-6           amd64  Command line tools for the PulseAudio sound server
ii  vlc-plugin-pulse                1:2.0.5-dmo1    amd64  PulseAudio plugin for VLC

Thanks, --b
Re: [Possibly OT] Hauppauge WinTV card lost sound
I know it is bad form to respond to one's own post, but I have swapped the Hauppauge card and have exactly the same symptoms. I suspect it is something with pulse. Suggestions?

On Sun, Feb 3, 2013 at 12:01 PM, Brad Alexander stor...@gmail.com wrote: I have a Hauppauge 44801 WinTV-GO in my wife's computer, plugged in to cable, so she can watch TV through tvtime on her sid machine. This past week, sound stopped in the tvtime app. Sound works in everything else, but not through the card. I tried plugging the speakers directly in to the line out on the TV card, and got silence. I've adjusted the sound levels in aumix. The odd thing is that this happened with an older version of the same card a few months ago, so I ordered a refurbished one to replace it, and now it is happening again with the replacement. Is there anything more I can do to troubleshoot the sound issues? Could this be an issue with PulseAudio? I have the following pulse packages installed: [pulse package list quoted in the original message above] Thanks, --b
Re: Backing up system customization: Is Debian packaging better than Remastersys?
On Fri, Feb 1, 2013 at 4:35 PM, Linux-Fan ma_sys...@web.de wrote: On 01/30/2013 11:29 AM, Andrei POPESCU wrote: On Ma, 29 ian 13, 11:20:42, Linux-Fan wrote: On 01/28/2013 11:02 PM, Andrei POPESCU wrote: On Du, 27 ian 13, 19:12:40, Martin Steigerwald wrote: Well, for only 4 systems puppet might be a bit off. I´d suggest starting with puppet not before at least 10 systems. The initial setup is definitely not trivial, but afterwards you sit back and relax ;) This is also what I expect from Debian packaging: Work once, enjoy later. While you can certainly (ab)use custom Debian packages for your needs, I strongly believe in the right tool for the right job, which I think puppet is much closer to. Kind regards, Andrei The perfect solution would be to use both then: Debian packaging for the software and Puppet for the configuration. [1] supports this idea, but I guess that I will abuse packaging for the configuration (at least until I find out that it does not work or is difficult to maintain) in order not to have to learn two complex systems. [1] http://serverfault.com/questions/215545/deploy-our-own-software-using-puppet

While it may be overkill/overengineered, here are the things I do on my home network -- 16 Debian machines, both i386 and amd64, running everything from stable to testing to unstable, and a gamut of software (workstations, firewall, wiki, monitoring, etc.).

puppet - I have an ever-growing list of modules. The nice part about puppet is that you can do a base build, install puppet, do all of the cert-signing, run puppet agent -t, and watch all your goodies install. It's almost magical. :)

sysdata script - I also have a script that runs nightly that captures the package list (using dpkg --get-selections, as someone mentioned), drive layout, debconf database, and autoinstalled packages. This is saved to /var/backups/hostname-<day of week>.gz, so it keeps a week's worth and cleans up after itself.
Another nice thing this does is give me a pool of package lists, so that I can create generic machine types -- workstation, wiki, firewall, etc. -- then tailor the package list as needed.

etckeeper - A package that places a git repo under /etc and captures changes to /etc config files, mainly by puppet and apt, which both have hook files to implement changes. The truly paranoid could combine/clone all of the git repos onto one machine, but I am content using the next item for that.

backups - I use backuppc to back up my systems fully/incrementally. I am using rsync, though you can also use tar or smb as needed.

Between all of these, I would have to work to lose an entire machine. :) HTH, --b
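A minimal sketch of such a nightly capture script -- the exact commands and layout are my reconstruction of what the post describes, not the author's actual script, and while the author writes to /var/backups, this sketch defaults to /tmp so it can run unprivileged:

```shell
#!/bin/sh
# Capture package selections, auto-installed marks, and disk layout,
# rotated by weekday so a week's worth is kept automatically.
dest=${DEST:-/tmp}                 # the author uses /var/backups
host=$(uname -n)
out="$dest/$host-$(date +%a)"
{
    echo "== package selections =="
    dpkg --get-selections
    echo "== auto-installed packages =="
    apt-mark showauto
    echo "== disk layout =="
    df -h
} > "$out" 2>/dev/null
gzip -f "$out"                     # leaves e.g. /tmp/myhost-Tue.gz
```

Dropped into /etc/cron.daily/ (and pointed at /var/backups), each weekday's file silently overwrites last week's, which is the self-cleaning behavior the post mentions.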
multi-arch and wine?
Keeping on the gaming theme, I have a question. Many months ago, I installed playonlinux and the wine64-unstable libs in order to play Starcraft II. Well, I haven't played it in a longish while, apparently since before multiarch. So I fired up playonlinux, tried to run Starcraft, and got the following popup from playonlinux:

This is the wine64-bin helper package, which does not provide wine itself, but instead exists solely to provide the following information about enabling multiarch on your system in order to be able to install and run the 32-bit wine packages. The following commands should be issued as root or via sudo in order to enable multiarch (the last command installs 32-bit wine): # dpkg --add-architecture i386 # apt-get update # apt-get install wine-bin:i386 Be very careful as spaces matter above. Note that this package (wine64-bin) will be removed in the process. For more information on the multiarch conversion, see: http://wiki.debian.org/Multiarch/HOWTO

However, when I tried to install wine-bin:i386, apt tells me: Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: wine-bin:i386 : Depends: libwine-bin:i386 (= 1.4.1-4) but it is not going to be installed

and an apt-cache search for it indeed comes up empty. However, searching on the unstable page on packages.d.o, I see both libwine-bin and libwine-bin-unstable, and what's more, they are only for i386, kfreebsd-i386 and powerpc. Should I just apt-get install wine-bin (or wine-bin-unstable), or did I miss something with the architectures? (dpkg --print-architecture says amd64, and dpkg --print-foreign-architectures says i386.)
Thanks, --b
Re: sshfp records
Rookie mistake from messing with this too late at night. Apparently it only works with fully qualified domain names (therefore working more like dig than host): $ ssh -o VerifyHostKeyDNS=yes user@host The authenticity of host 'host (192.168.1.52)' can't be established. RSA key fingerprint is 6d:fd:09:59:e2:32:b8:3f:4e:ff:51:1f:58:5a:14:3a. No matching host key fingerprint found in DNS. $ ssh -o VerifyHostKeyDNS=yes u...@host.example.com The authenticity of host 'host.example.com (192.168.1.52)' can't be established. RSA key fingerprint is 6d:fd:09:59:e2:32:b8:3f:4e:ff:51:1f:58:5a:14:3a. Matching host key fingerprint found in DNS. Not sure how I'm going to work around this. I may just dispense with sshfp records for the time being, unless something jumps out at me. --b On Tue, Jan 22, 2013 at 1:20 PM, Bob Proulx b...@proulx.com wrote: Brad Alexander wrote: Has anyone worked with sshfp records for openssh? No. But I do have a suggestion. I generated sshfp records: host IN SSHFP 1 1 5490056a2208c8ad2cf869f5c06470450c8a017a host IN SSHFP 2 1 18aef47bc01264709f25ac9daebed236b45b6b45 but when I ssh into the host (after deleting the records from .ssh/known_hosts), I get: $ ssh -o VerifyHostKeyDNS=yes user@host The authenticity of host 'janeway (192.168.224.52)' can't be established. RSA key fingerprint is 6d:fd:09:59:e2:32:b8:3f:4e:ff:51:1f:58:5a:14:3a. No matching host key fingerprint found in DNS. Are you sure you want to continue connecting (yes/no)? Anyone got any idea why the key fingerprints aren't matching up? Add more verbosity to the command. For example I see: $ ssh -v -o VerifyHostKeyDNS=yes example.com debug1: Server host key: RSA 1e:c8:2d:20:c7:dc:9b:10:1d:5b:85:bd:4c:95:9a:43 DNS lookup error: name does not exist The authenticity of host 'example.com (192.0.43.10)' can't be established. That DNS lookup error: name does not exist tells me in that I do not have sshfp records. 
Perhaps with more verbosity (adding -v) you will get a similarly informative message? Bob
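For reference, SSHFP records can be generated directly from a host key with ssh-keygen -r, and (as found above) they should be published under the fully qualified name you actually type at the ssh prompt. A sketch against a throwaway key -- host.example.com is a placeholder, and on a real host you would point -f at the key under /etc/ssh:

```shell
# Generate a throwaway RSA host key so the sketch is self-contained;
# on a real server, use -f /etc/ssh/ssh_host_rsa_key.pub instead.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$dir/ssh_host_rsa_key"

# -r prints zone-file SSHFP resource records for the given (FQDN) name,
# ready to paste into the zone, one line per algorithm/fingerprint type.
recs=$(ssh-keygen -r host.example.com -f "$dir/ssh_host_rsa_key.pub")
echo "$recs"
```

Generating the records this way avoids hand-computing the fingerprints, which sidesteps one common cause of "No matching host key fingerprint found in DNS".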
Re: What are some common problems when using Debian GNU / LINUX?
On Mon, Jan 21, 2013 at 4:27 AM, Anthony Campbell a...@acampbell.org.uk wrote: On 20 Jan 2013, berenger.mo...@neutralite.org wrote: [snip] If you uncheck them all, as I usually do, you start with a system with almost nothing. Even less is not present in such an installation :D I keep a list of all the packages I normally use and then get the same ones when I install on a new computer. (Obviously this doesn't work for your very first install.)

I run a script nightly to capture the package list, debconf database, disk information (df, df -h, and fdisk -l), and autoinstalled packages list for every host in my network. What this allows me to do is keep a pool of generic hosts (firewall, web server, wiki, etc.) to choose from. Then, when I build a new box, I do a base build, copy the most appropriate package list, and do

dpkg --set-selections < package.list
apt-get dselect-upgrade

Then I can customize the new machine. The other thing is that I run puppet, and have a module called essentialpkgs that makes sure certain essentials are on the box. [snip] For normal usage, testing is better, even if the project claims it is not for production environments. More recent kernels and drivers (which means more supported hardware) and updated web browsers are some obvious interesting points here. They are simply the most obvious. [snip] I'd say you are generally better off using Sid. The name Unstable unfortunately gives the impression that it is unsafe, but this is misleading. A quick search for debian unstable vs testing will produce plenty of discussion, mostly favouring Sid. See for example http://www.debian.org/doc/manuals/debian-faq/ch-choosing.en.html and http://raphaelhertzog.com/2010/12/20/5-reasons-why-debian-unstable-does-not-deserve-its-name/

I generally run sid/unstable, unless there is a reason not to. I have a few boxes that are testing or stable, but they are appliances (my Proxmox-VE machines, for example).
But the vast majority of my network runs sid, and has for years. I have only had rare issues, for instance during an ABI change or something major. To combat that, I do staged upgrades. For instance, I will test upgrades on my own workstation first (on the premise that I can fix it more easily than my users'), and I also run apt-listchanges and apt-listbugs, and look for show-stopper changes. --b
sshfp records
Has anyone worked with sshfp records for openssh? I generated sshfp records: host IN SSHFP 1 1 5490056a2208c8ad2cf869f5c06470450c8a017a host IN SSHFP 2 1 18aef47bc01264709f25ac9daebed236b45b6b45 but when I ssh into the host (after deleting the records from .ssh/known_hosts), I get: $ ssh -o VerifyHostKeyDNS=yes user@host The authenticity of host 'janeway (192.168.224.52)' can't be established. RSA key fingerprint is 6d:fd:09:59:e2:32:b8:3f:4e:ff:51:1f:58:5a:14:3a. No matching host key fingerprint found in DNS. Are you sure you want to continue connecting (yes/no)? Anyone got any idea why the key fingerprints aren't matching up? Thanks.
Fwd: Re (2): Running vlc from another machine.
Forwarding back to list... -- Forwarded message -- From: peasth...@shaw.ca Date: Mon, Dec 31, 2012 at 11:50 PM Subject: Re (2): Running vlc from another machine. To: stor...@gmail.com Cc: peasth...@shaw.ca From: Chris Davies chris-use...@roaima.co.uk Date: Tue, 01 Jan 2013 02:11:14 + I'm going to assume (dangerously) that since you're running vlc rather than cvlc you don't really mean console but local X Windows screen. Yes, I should have said, In a local LXTerminal, Don't run telnet, use ssh instead. OK, thanks. DISPLAY=:0 vlc *.WAV It produces sound correctly and a flock of messages.

peter@dalton:~$ DISPLAY=:0 vlc *.WAV
VLC media player 1.1.3 The Luggage (revision exported)
Warning: call to srand(1357015068)
Warning: call to rand()
Blocked: call to unsetenv(DBUS_ACTIVATION_ADDRESS)
Blocked: call to unsetenv(DBUS_ACTIVATION_BUS_TYPE)
Warning: call to signal(13, 0x1)
Blocked: call to setlocale(6, )
Blocked: call to sigaction(17, 0xb71780d4, 0xb7178048)
[repeated srand/rand warnings]

cvlc *.WAV Correct sound again and more messages.

peter@dalton:~$ cvlc *.WAV
VLC media player 1.1.3 The Luggage (revision exported)
Warning: call to srand(1357015446)
Warning: call to rand()
Blocked: call to unsetenv(DBUS_ACTIVATION_ADDRESS)
Blocked: call to unsetenv(DBUS_ACTIVATION_BUS_TYPE)
[0x9e0b51c] inhibit interface error: Failed to connect to the D-Bus session daemon: /usr/bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed.
[0x9e0b51c] main interface error: no suitable interface module
[0x9e08d44] main interface error: no suitable interface module
[0x9d618fc] main libvlc error: interface globalhotkeys,none initialization failed
[0x9e08d44] dummy interface: using the dummy interface module...

Thanks for the help, ... Peter E.
-- Tel +13606390202 Bcc: peasthope at shaw.ca http://carnot.yi.org/ http://members.shaw.ca/peasthope/index.html#Itinerary
Fwd: Re (2): Running vlc from another machine.
Forwarding back to the list... -- Forwarded message -- From: peasth...@shaw.ca Date: Mon, Dec 31, 2012 at 11:54 PM Subject: Re (2): Running vlc from another machine. To: stor...@gmail.com Cc: peasth...@shaw.ca From: Brad Alexander stor...@gmail.com Date: Mon, 31 Dec 2012 22:14:01 -0500 Telnet? Aside from the security concerns, you can set up ssh ... OK, thanks. vlc has some sort of streaming interface. ... Here are a couple of links ... Will read. Thanks for the help, ... Peter E. -- Tel +13606390202 Bcc: peasthope at shaw.ca http://carnot.yi.org/ http://members.shaw.ca/peasthope/index.html#Itinerary
Re: Tools to retrieve images from dead hard drive and/or deleted partitions
On Thu, Dec 27, 2012 at 4:17 PM, berenger.mo...@neutralite.org wrote: 1) I have a hard disk (1 TB, the bigger one) which gives me many errors when I try to read it. This makes it very slow to even read, but I've been able to determine that it contains jpg images with a classic file browser. I have not managed to copy any data to a safer place... I fear I will not even be able to retrieve one photo with my conventional hardware... but maybe some of you will have an idea?

If this drive is giving errors, you might try sticking it in a plastic bag and putting it in the freezer for a couple of hours. That may coax some life out of a dying drive. I used this method on an old drive (back around 2000 or so), and was able to get enough life back into it to pull the data. Here is a Lifehacker article from 2010 which documents it: http://lifehacker.com/5515337/save-a-failed-hard-drive-in-your-freezer-redux --b
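Once the freezer (or anything else) gets the drive responding, the safest next step is to image it immediately rather than browse it, since every read may be its last. GNU ddrescue is the usual tool for this; the sketch below demonstrates the idea with plain dd on a scratch file (the device names in the comments are placeholders):

```shell
# Scratch "disk" so the sketch is runnable; on real hardware if= would be
# the failing device node (e.g. /dev/sdX) and of= an image on a healthy disk.
printf 'precious family photos' > /tmp/dying-disk.img

# conv=noerror keeps reading past errors instead of aborting; for a real
# recovery, prefer:  ddrescue -d /dev/sdX rescued.img rescued.map
dd if=/tmp/dying-disk.img of=/tmp/rescued.img conv=noerror 2>/dev/null

cmp -s /tmp/dying-disk.img /tmp/rescued.img && echo "image copied intact"
```

With an image on stable storage, photo-carving tools can then be run against the copy as many times as needed without stressing the failing drive further.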