Re: sad but true, Linux sucks, a bit
On Wed, 15 Jan 2014, y...@marupa.net wrote:

> These reasons why Linux is not ready for the desktop lists are so stupid.

The whole question of whether Linux is 'ready for the desktop' is specious. This statement presumes that everyone has the same desktop requirements, which they demonstrably do not. I've been using Linux on the desktop continuously since 1994. It is clear that it was ready for my desktop in 1994.

A lot of Linux geeks spent a lot of time worrying about Microsoft's desktop dominance over those years. I would often hear people claim that Linux had to get on to the desktop *now* (1999, 2004, 2007, etc) or it would be locked out *forever*.

I concluded some time in the late 90s that sooner or later a disruptive technology would come along and completely rewrite the rules on computer interfaces, making any current desktop dominance irrelevant. Mobile computing is a sufficiently disruptive technology that it has done this. Note that I did not know *what* the disruptive technology would be, but I was sure there would be one. In particular I used to make the point that 40 years ago the desktop as we know it didn't exist, and I was sure it would not exist in 40 years' time.

FWIW I expect yet another disruptive technology to come along soon. I very much doubt we'll be using a single finger or thumb to type on small mobile screens to get anything done in 10 years' time.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.pracops.com
“To learn who rules over you, simply find out who you are not allowed to criticize.” -- Voltaire
Re: Client daemon for sorting e-mail via IMAP
On Thu, 24 Jan 2013, Lázaro wrote:

> aptitude search sieve|grep mailutils

I realised after I posted that I'd failed to state _why_ I switched to imapfilter rather than continuing to use one of the myriad of delivery filtering solutions available.

Over 15-20 years my ideas on how I want to sort my mail have changed significantly. I could simply resubmit all my email to the MTA and then re-sort it, but this would alter the mail messages, which I don't want to do. What I want today is to be able to sort and re-sort my mail from time to time and leave its contents and headers unchanged. I considered the 'fetchmail -m procmail' option mentioned earlier in the thread but decided to go for something a little more ambitious, which led me to imapfilter. Also, I never really liked Sieve much :)

My plan for the near future: I have old mail that needs to be re-sorted. While I could re-sort everything periodically, this is resource intensive (with a lot of mail) and would be fairly inefficient, as most of the mail would not need to be re-sorted. I'm going to be using an opportunistic approach. Every day (or every few hours) a script will randomly pick a mailbox and re-sort the contents. Thus over time my mail will approach a state of 'full sortedness' :)

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Information is a gas
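The opportunistic re-sort described above could be sketched as a small cron script. Everything here is a hypothetical illustration, not the author's actual setup: the Maildir path, the idea of passing the chosen mailbox to imapfilter via an environment variable, and the `resort.lua` config name are all assumptions.

```shell
#!/bin/sh
# Pick one mailbox at random and re-sort only that one, so the whole
# archive converges on 'full sortedness' over many cron runs.
MAILDIR="${MAILDIR:-$HOME/Maildir}"

# One candidate mailbox per line; shuf picks a single random entry.
box=$(ls -1 "$MAILDIR" 2>/dev/null | shuf -n 1)

# Guarded so a dry run on a box without imapfilter (or a Maildir) is a no-op.
if [ -n "$box" ] && command -v imapfilter >/dev/null 2>&1; then
    echo "re-sorting $box"
    # Hypothetical: a resort.lua that reads $MAILBOX to know what to re-filter.
    MAILBOX="$box" imapfilter -c "$HOME/.imapfilter/resort.lua"
fi
```

Run daily (or every few hours) from cron and, over time, every mailbox gets revisited.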
Re: Client daemon for sorting e-mail via IMAP
On Tue, 22 Jan 2013, David Guntner wrote:

> So it might actually be safer to let it hand the mail off to Postfix (and let *that* handle Procmail) anyway

There are a few options here:

(1) Use maildrop (not to be confused with MailDrop). Like procmail but safer (apparently). Home site: http://www.courier-mta.org/maildrop/

(2) Use a catch-all rule at the end of .procmailrc so that even if mail falls through it goes somewhere other than /dev/null.

(3) Keep a backup of all email. I have my personal MTAs (running Postfix) keep a copy of all email that passes through them. If something somewhere (procmail/maildrop/imapfilter/whatever) drops the ball I can always go to my mail backup account and recover the mail item. Yes, this doubles mail storage requirements, but you know what - disk is cheap[1]. I do this using the Postfix config option always_bcc=.

Note: Anyone implementing a solution like this should take into account privacy laws in relevant jurisdictions (where they are, where the MTA is, etc).

[1] I backup nightly and mail storage is still only a small proportion of the data that gets backed up.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Information is a gas

--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/alpine.deb.2.00.1301230833030.30...@pollux.opentrend.net
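For reference, the backup copy described in option (3) needs only one parameter in Postfix's main.cf; the backup address below is a made-up example.

```
# /etc/postfix/main.cf
# Deliver a copy of every message passing through this MTA to a backup
# account (address is illustrative). Reload Postfix after editing.
always_bcc = mail-backup@example.org
```

Remember the privacy caveat above: this captures all mail the MTA handles, for every user.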
Re: Client daemon for sorting e-mail via IMAP
On Tue, 22 Jan 2013, David Guntner wrote:

>> (2) Use a catch-all rule at the end of .procmailrc so that even if mail falls through it goes somewhere other than /dev/null.
>
> Also mentioned in the manpage I quoted: It doesn't say that the errant filter error sends to /dev/null, but there's a risk a message will end up in an unexpected location.

Yeah, I didn't mean that literally. When procmail drops mail it goes into a black hole from which there is no return, as I'm sure you know. I have straced procmail before but I don't recall what it did with the mail in the end; it may well drop it to /dev/null.

>> (3) Keep a backup of all email. I have my personal MTAs (running Postfix) keep a copy of all email that passes through them. If something somewhere (procmail/maildrop/imapfilter/whatever) drops the ball I can always go to my mail backup account and recover the mail item. Yes this doubles mail storage requirements, but you know what - disk is cheap[1].
>
> If you're telling fetchmail to invoke Procmail or Maildrop directly, aren't you *bypassing* Postfix processing?

The comment I replied to specifically mentioned passing to an MTA. In any case you can potentially do it upstream, which is what I do. I run my own MXs which bcc backup all email before passing it on.

> Also, disk is only cheap if you have the money to spend. Not everyone does. But that's just an aside.

Mail tends to be pretty tiny compared to modern available storage. I have 37GB of personal mail which includes many years of list mail as well as personal mail. I suspect most people don't have that much. If you run large mail installations (as I have) there are all sorts of great tricks to dedupe mail these days. Unless you are talking about very large numbers of users, mail storage requirements are tiny compared to the storage requirements of so many other sorts of data today.

>> Note: Anyone implementing a solution like this should take into account privacy laws in relevant jurisdictions (where they are, where the MTA is, etc).
>
> I'm not entirely sure how privacy laws come into play here, given we're talking about a way to pull in mail from several sources for an individual, *by* that individual running a cron job or whatever. It's

This comment directly followed and related to the use of the always_bcc parameter in Postfix. It will capture all mail passing through that MTA, regardless of sender or recipient. Most MTAs serve a large number of users. That's where the potential privacy concerns come in.

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Information is a gas
Re: Client daemon for sorting e-mail via IMAP
On Tue, 22 Jan 2013, Erwan David wrote:

> I personally use imapfilter for such tasks. But it requires some lua scripting, as it is rather a lua library for accessing and searching imap accounts than a program.

I've been using imapfilter for about a year, after 15+ years of using fetchmail/procmail to deliver and sort mail. Before that my mail was always delivered locally on a Unix box and I didn't have enough of it to warrant filtering :)

Imapfilter itself is great, but Lua is quite different to other programming languages I've used in the past. I've done a bit of work in Lua but have ended up with a (perhaps hacky) solution of having a shell script construct my .imapfilter/config.

I considered 'fetchmail -m procmail' as a solution but didn't select it at the time. I think I'm going to take another look at it though, now that I've had more of a chance to play with the alternatives.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Information is a gas
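A script that constructs an imapfilter config might look something like the sketch below. All details are hypothetical (the account settings, the `rules.txt` format, and the seeded sample rule); `contain_to` and `move_messages` are imapfilter's search/move calls, but consult imapfilter_config(5) for the exact connection options.

```shell
#!/bin/sh
# Rebuild ~/.imapfilter/config (Lua) from a plain "pattern folder" rules
# list, so the sorting rules live in one simple text file.
CFGDIR="${IMAPFILTER_DIR:-$HOME/.imapfilter}"
RULES="$CFGDIR/rules.txt"
OUT="$CFGDIR/config"
mkdir -p "$CFGDIR"

# Seed a sample rule the first time through (illustrative only).
[ -f "$RULES" ] || printf 'debian-user@lists.debian.org lists/debian-user\n' > "$RULES"

# Header: connection details are placeholders.
cat > "$OUT" <<'EOF'
options.timeout = 120
acc = IMAP { server = 'imap.example.org', username = 'rob', ssl = 'auto' }
EOF

# One search-and-move stanza per rule line.
while read -r pattern folder; do
    [ -n "$pattern" ] || continue
    printf "msgs = acc.INBOX:contain_to('%s')\nmsgs:move_messages(acc['%s'])\n" \
        "$pattern" "$folder" >> "$OUT"
done < "$RULES"
echo "wrote $OUT"
```

Editing the rules file and re-running the script then regenerates the Lua config without hand-editing it.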
Re: Client daemon for sorting e-mail via IMAP
On Tue, 22 Jan 2013, David Guntner wrote:

> Actually, in all the (many) years I've been using Procmail, I've never once had it fall through and just discard the message outright. Maybe that happens if you've got a rule that *would* route to /dev/null and the errant test above falls through to it? Beats me. I'm not doubting you, I'm just saying that in my personal experience, it's never happened. Maybe I've just been lucky. :-)

Glad to hear it :) I did have it happen during testing.

>> Mail tends to be pretty tiny compared to modern available storage. I have 37GB of personal mail which includes many years of list mail as well as personal mail. I suspect most people don't have that much.
>
> Agreed. I just tend to balk when someone throws out "disk is cheap" in general terms; I see it bandied about too often by people who don't take into account that not everyone has money to spend. Sorry for the somewhat knee-jerk reaction if that's not what you were doing. :-)

No problem :) It can indeed be used as an excuse instead of fixing a problem.

> I once ran a RS/6000 server running AIX at a university. Disk space was always an issue since students figured disk space was free (and this was back in the 90's when those types of SCSI disk packs cost a bit more than what you can get for a home computer these days). We finally stuck a quota on home directories, and I set up a cron job that would look for mbox files that were bigger than a given size, gzip them up and put the .gz into their home directory - and if they were out of space, they lost their mail. After that, most of them learned pretty quickly to manage their mail better. :-)

Hahah :) Not sure if you've looked at some of the recent innovations, but with SANs doing block-level dedupe, Dovecot coming out with fancy new deduping storage formats, and dbmail doing some interesting stuff also, things are a lot less painful than they once were.

> Sure, I agree with you here. But I was keeping things within the context of the OP's message asking about this type of thing, where the OP stated needing a way to collect mail and filter into folders independent of what MUA was being used, running their own system on a VM. Within that context, there's no privacy concerns. At the time, I didn't realize you were speaking in more general terms about larger multi-user systems. Sorry for the misunderstanding.

Me too. I figured on a list of thousands(?) I had better mention it, lest someone go out and just turn it on, on their MTA :)

I did a bit of googling as a result of this thread and came across 'fdm', which I haven't seen before. It is in Debian and looks very interesting. http://fdm.sourceforge.net/

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Information is a gas
Re: Debian and OSS vs vSphere
On Tue, 28 Feb 2012, Davide Mirtillo wrote:

> I was also wondering if any of you had opinions regarding Proxmox. http://pve.proxmox.com/wiki/Main_Page It seems like a solid solution and it also looks it's gonna be something that works out of the box by just installing it, which is kinda what i was hoping for - yes, i know, i'm lazy :)

Hi Davide. I was just about to send a reply to your other email suggesting you try Proxmox :) It offers OpenVZ and KVM, so it allows you to enjoy using Linux containers or fully virtualised systems. I've used OpenVZ a lot over the years and trialed Proxmox a while back and was quite impressed.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
Free Open Source: The revolution that quietly changed the world
One ought not to believe anything, save that which can be proven by nature and the force of reason -- Frederick II (26 December 1194 – 13 December 1250)
Re: rm -rf is too slow on large files and directory structure (around 30000)
On Thu, 16 Feb 2012, Bilal mk wrote:

> I am using xfs filesystem and also did the fsck. DMA is enabled. Also performed xfs defragmentation (xfs_fsr). But still an issue not only rm -rf but also cp command

Until quite recently XFS was notable for being slow to delete. Others have noted that this is greatly improved in recent kernels, but even with older kernels there is quite a bit of tuning that you can do to improve the delete performance. Your favourite search engine should give you good results. I put down some notes for myself here a while back: http://www.practicalsysadmin.com/wiki/index.php/XFS_optimisation

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Free Open Source: The revolution that quietly changed the world
One ought not to believe anything, save that which can be proven by nature and the force of reason -- Frederick II (26 December 1194 – 13 December 1250)
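As one example of the kind of tuning meant above, XFS metadata-heavy operations (like mass deletes) can benefit from larger in-memory log buffers. The device, mount point and values below are illustrative starting points, not recommendations; benchmark your own workload.

```
# /etc/fstab -- illustrative XFS mount options for metadata-heavy workloads:
/dev/sdb1  /srv/data  xfs  logbufs=8,logbsize=256k,noatime  0  2
```

`logbufs` and `logbsize` control the number and size of the in-memory log buffers; see mount(8) and the XFS documentation before adopting values.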
[OT] Re: GNU/Debian Linux vs. facebook, Twitter and other proprietary social media
On Sun, 5 Feb 2012, Andrei Popescu wrote:

> On Du, 05 feb 12, 12:30:05, Robert Brockway wrote:
>> One of the best pieces of business advice I ever received was this: Don't sell what you like. Sell what people will buy.
>
> While it may be good business advice isn't it hypocrisy to sell a product you don't believe in?

Hi Andrei. I don't see the statement as making any reference to belief at all. The intention of the statement, IMHO, is to recommend assessing and understanding the market quite separately from your own preferences when determining what sort of business you intend to start. Questions over whether it is OK to sell a particular product or not are, IMHO, orthogonal.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Free Open Source: The revolution that quietly changed the world
One ought not to believe anything, save that which can be proven by nature and the force of reason -- Frederick II (26 December 1194 – 13 December 1250)
Re: GNU/Debian Linux vs. facebook, Twitter and other proprietary social media
On Sat, 4 Feb 2012, Nick Lidakis wrote:

> In a nutshell, my wife and I are starting a small business in a family oriented neighborhood. We're serving coffee, espresso and baking fresh bread and pastry on premises. The shop will be located at the northern tip of Manhattan in the vicinity of Inwood Hill Park (last remaining undeveloped, old growth forest in Manhattan), Isham Park and Fort Tryon Park.

Hi Nick. You may wish to contact David J Patrick, who founded LinuxCaffe (http://www.linuxcaffe.ca) in Toronto, Canada many years ago. LinuxCaffe is a successful business providing coffee and light food, running completely on Linux/FOSS. I assisted David with some of the early sysadmin/architectural work for LinuxCaffe.

> Starting on a shoe-string budget during a recession, I plan on using GNU software for as much as I possibly can without devoting too much time to computing. Music Player Daemon for music, possibly sc for basic spreadsheet needs, m0n0wall firewall for Wi-Fi, etc. Also investigating a Linux payroll system, etc. Wife uses a Mac. I have been using Debian since 1996 for all my daily needs, though I still would not call myself a guru. We do not have any social media presence and we like it that way. No facebook, twitter or other accounts. We use basic cell phones. With that said... Everyone has been telling us that we *absolutely* have to be on facebook and twitter, if not more, for our coffee house. They argue that it's free marketing and advertising. That we need facebook to advertise events and

I have to say that a small business should not pass up any legal, ethically sound advertising option. Starting a successful small business is hard enough as it is. My personal opinion is to use Facebook and Twitter as tools to deliver your message. I'd recommend using alternatives such as identi.ca *as well*.

> This does not sit well with me. I've read the tech news concerning facebook's privacy and intellectual property policies. I've recently read about twitter's country based censorship controversy.

Regarding privacy, you should only post as much info as you are happy to provide. It is true that data mining techniques can infer things about you from what you 'like' and even what your friends 'like', but a company facebook page wouldn't be used in the same way as a personal page, so I don't see there would be a lot of data to mine. I follow the rule that I don't send something in unencrypted email (or post it on facebook) unless I'm happy for it to appear on the front page of a large national newspaper.

> Being a neighborhood shop, I was hoping to avoid social media. I want to interact with people in person. But I agree that I need a way to let people know about events or specials. Can I do that with GNU software without selling my soul to Zuckerberg? Do I even need software or could I be smarter about this?

One of the best pieces of business advice I ever received was this: Don't sell what you like. Sell what people will buy.

> P.S. Thinking about social media pisses me off even more because, currently working as a paramedic, I've had co-workers take their phones out while treating patients. And I have yelled at people. These days, seems most people can't go two minutes without sucking on their digital pacifier.

Internet addiction is a problem that has been recognised to exist for decades (it was around long before the Internet went mainstream) but is still not generally taken seriously, IMHO. I treasure the ability to 'unplug' from time to time.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Open Source: The revolution that silently changed the world
Re: OT: Safe to access SSH server from work?
On Thu, 5 May 2011, Rob Owens wrote:

> I hesitate to mention this, because it will start an argument about security through obscurity, but you can run your ssh server on a port other than 22. It really does nothing for security, but it will keep your firewall logs a lot cleaner because it avoids pesky scripts that circulate the internet, trying to brute force ssh servers.

Hi Rob. I'm glad you mentioned that it doesn't do anything for security. Yes, it would keep logs a bit cleaner. I've never[1] changed the ssh port on any host and never been terribly worried about the state of the logs as a result.

Changing the port is only really viable for home servers. It can't reliably be done on any service used by a lot of people, any more than you can do this for any other service. You could of course do this if you are using SRV records (if the client supports it), but then you throw away the obscurity aspect anyway.

The idea of changing the port number for SSH seems to stem from the idea that SSH is somehow more dangerous to run than another service and so needs special treatment. I think this idea comes from the fact that a successful SSH login will give you a shell, and that sounds a bit scary. The thing to remember is that exploits of other network services normally involve the execution of arbitrary code. And what is the arbitrary code that they run? It is often a shell.

Most Linux systems will be using OpenSSH, which comes from the OpenBSD project. It is likely the best audited code on many Linux systems and is thus likely to be less of a threat to system security than running many other services. Treat all network services as a potential threat, whether they are designed to give you a shell or not. Keep the system patched, restrict access to the service to legitimate users if you can, and follow best practice for locking down each service.

[1] I've been using SSH since 1996 or 1997.
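The "restrict access to legitimate users" advice translates into a few sshd_config directives; the user names below are made-up examples.

```
# /etc/ssh/sshd_config -- the port stays at 22; restrict access instead.
PermitRootLogin no
PasswordAuthentication no
AllowUsers rob backup
```

Reload sshd after editing, and keep an existing session open while testing so a mistake doesn't lock you out.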
Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
Re: OT: Safe to access SSH server from work?
On Fri, 6 May 2011, Brian wrote:

> A strong password is no less secure in brute force terms than a key so

Oh yes it is. A strong password may take a very long time to brute force, but that isn't what you said. Breaking an arbitrarily long key pair is regarded as being cryptographically infeasible. That means it isn't practical for anyone to even undertake the attack.

So how long does the key need to be? That changes with time due to advances in computer hardware. Right now attacks against 1024-bit RSA keys may be cryptographically feasible. So use a longer key if you fear you may be subject to a sustained brute force attack[1].

[1] Hint: home users are probably not the targets here :)

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
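Acting on that advice with OpenSSH is a single command; the file name and comment below are arbitrary examples, and a real key should of course get a passphrase.

```shell
# Generate a 4096-bit RSA key pair. -N '' (no passphrase) is for this
# demonstration only; use a passphrase for a key you actually deploy.
ssh-keygen -t rsa -b 4096 -N '' -C 'demo key' -f ./id_rsa_demo

# The public half is a single "ssh-rsa ..." line you append to
# ~/.ssh/authorized_keys on the server.
cat ./id_rsa_demo.pub
```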
Re: when does one change from testing to stable in sources.list
On Fri, 10 Dec 2010, shirish शिरीष wrote:

> Hi all, I know it probably is still a long road but want to know when should one change from testing to stable so that one is living in squeeze and not go into wheezy.

I use the codenames (lenny, squeeze, etc) in sources.list. This way it doesn't matter when they declare squeeze to be stable. If you use 'testing' then you could sail right on past squeeze if you didn't change it at just the right time. If I want the system to dist-upgrade to a later version I have to explicitly edit sources.list, which is just fine.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
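By way of illustration, pinning by codename looks like this (mirror URLs are examples; adjust to a local mirror):

```
# /etc/apt/sources.list -- codename, not 'testing' or 'stable':
deb http://ftp.debian.org/debian squeeze main
deb http://security.debian.org/ squeeze/updates main
```

With these lines the system stays on squeeze through its release as stable, until you deliberately edit the codename and dist-upgrade.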
Re: how to kill a process that is defunct?
On Sun, 21 Nov 2010, François TOURDE wrote:

> The zombie process don't use any resources in general. No need to reboot at this point, because nothing is wrong.

Right. I can't see how the OP's process is a zombie, as a zombie won't consume CPU (or any other resource). It exists solely to hand back the exit code to the parent when it can.

> On some case, nevertheless, I remember that there can be a kernel problem, or a device driver one. The process was killed during a blocking IO, so it is marked as Z until IO finished. If the IO could not stop, or is on a CPU consuming loop, there is no other way than reboot.

I think you'll find such a process would be in state 'D'. Sometimes processes in state D can get stuck and indeed be unkillable.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
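A quick way to check which state a troublesome process is actually in (exact column formatting varies a little between ps versions):

```shell
# List processes in zombie (Z) or uninterruptible sleep (D) state.
# The first character of the STAT column is the process state; the
# header line is kept for readability.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^[ZD]/'
```

A healthy system usually shows nothing beyond the header; a persistent D-state entry points at stuck I/O rather than a zombie.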
Re: minimum number of days between password change
On Wed, 3 Nov 2010, Mark Allums wrote:

> Not a pattern in the hashes. A pattern in the history.

Hi Mark. That's what I meant. The history is made up of hashes and possibly additional information.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
Re: minimum number of days between password change
On Wed, 3 Nov 2010, Mark Allums wrote:

> I know it is the hashes. Everything leaves tracks. It's not the passwords that might be compromised, it's the privacy. I expect this is an example of extreme paranoia, but still... An unrelated example: Incognito mode (AKA, porn mode) of Google Chrome. Forensic researchers have published articles about how much they found out about the user even after they used the secure mode. You can't reverse the hash, but a pattern in the history file might tell someone something you don't want them to know. Granted, you could keep the

If the hash algorithm is worth its salt (pun intended) then there shouldn't be a pattern in the hashes even if there is in the passwords. If the file keeps timestamp information in plaintext, that may reveal information like when the user tends to change their password, which may or may not be useful to an attacker. I think on balance the risk is low though.

The hash log could be subject to a brute force attack. /etc/shadow is also subject to a brute force if someone can get root on the box. This is useful as passwords are often reused across systems, so they could use this to break into other systems. /etc/shadow would deliver current rather than old passwords, so it is far more valuable too.

Personally I don't think much of keeping a record of old password hashes, but for a different reason: they are easily circumvented by the user changing their password several times until they can reuse the old one again. Some organisations have tried to prevent this by limiting how quickly passwords can be changed - the problem with this approach should be obvious :)

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
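The "worth its salt" point can be demonstrated directly with SHA512-crypt (`openssl passwd -6`, available in OpenSSL 1.1.1 and later); the password and salt values are arbitrary examples. The same password hashed under two different salts shares no visible pattern:

```shell
# Hash one password under two different salts (SHA512-crypt).
pw='correct horse battery staple'
h1=$(openssl passwd -6 -salt saltA "$pw")
h2=$(openssl passwd -6 -salt saltB "$pw")
printf '%s\n%s\n' "$h1" "$h2"

# The two hash strings are completely unrelated, so a history of salted
# hashes reveals no pattern even if the underlying passwords have one.
[ "$h1" != "$h2" ] && echo 'no pattern survives the salt'
```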
Re: Disappearing mouse
On Fri, 3 Sep 2010, John A. Sullivan III wrote:

> Has anyone experienced this in Debian? Is there a definitive cause and, more importantly, a real solution? Thanks - John

I had this with an NVidia chipset (don't recall the exact chipset right now). I had to add a SWCursor option to prevent the cursor going invisible from time to time under Lenny. Once it went invisible the X server would need to be restarted to recover it. The cursor was just invisible - it always worked perfectly, if only you could guess where it was. This is an example from xorg.conf:

    Section "Device"
        Identifier "Configured Video Device"
        Driver "vesa"
        Option "SWCursor" "yes"
    EndSection

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Contributing member of Software in the Public Interest (http://spi-inc.org/)
Open Source: The revolution that silently changed the world
Re: How do I back up a running system?
On Fri, 18 Jun 2010, Robert S wrote:

> I have debian running on a headless system. I'd like to back the entire system up. It's difficult with a bootable disk without a monitor (so Clonezilla etc are out). I've tried mondoarchive but it usually bails out before it completes the backup.

Hi Robert. If you're using the XFS filesystem, then xfsdump is guaranteed to provide consistent backups even if used on a read-write mounted filesystem. This has been confirmed by the developers several times in public and is consistent with my own DR tests.

If you're using another filesystem then you could consider using LVM snapshots. Quite a lot of backup utils are LVM snapshot aware and can use this feature if available.

> Are there any suggestions? A simple script would be nice.

I use custom backup scripts in a lot of places. I suppose they would be described as fairly complex. The backup server sshes to the server to be backed up, executes the backup command (xfsdump, tar, whatever) and streams the data over the network via STDOUT. On the other end of the link the data is captured and written to the filesystem as a backup. Backups of Unix filesystems have been done this way for decades (using rsh rather than ssh in the old days, of course).

I go into a little more detail about this in my backup talk: http://www.timetraveller.org/talks/backup_talk.pdf

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Open Source: The revolution that silently changed the world
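The ssh-streaming approach can be sketched as follows. The host name and paths in the remote form are placeholders; the runnable part below just demonstrates the same archiver-writes-to-stdout idea against a local scratch directory.

```shell
#!/bin/sh
# Remote form (what the backup server would run; host/path are placeholders):
#   ssh root@server 'tar -C / -czf - etc' > server-etc.tar.gz
#
# Local demonstration: the archiver writes the archive to stdout and the
# receiving end captures the stream and writes it to disk as the backup.
src=$(mktemp -d)
echo 'hello' > "$src/file.txt"

tar -C "$src" -czf - . > demo-backup.tar.gz

# Verify the captured stream is a valid archive by listing its contents.
tar -tzf demo-backup.tar.gz
```

Substituting xfsdump for tar on an XFS system gives the consistent-while-mounted variant described above.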
Re: How do I fsck an XFS file system in Squeeze
On Thu, 20 May 2010, Mark Allums wrote:

> If not, then a live CD will be needed, something like Knoppix, be sure it has XFS support. Just boot the live CD or DVD, and Bob's your uncle.

I was going to suggest a live cdrom too, but remember that Debian has its own live cdroms. I've been using them in preference to Knoppix for rescuing Debian systems due to binary compatibility problems. I've got a lot of AMD64 Debian systems and Knoppix is i386 only.

Cheers,

Rob

--
Email: rob...@timetraveller.org
Linux counter ID #16440
IRC: Solver (OFTC Freenode)
Web: http://www.practicalsysadmin.com
Open Source: The revolution that silently changed the world
Re: cross-connecting console ports?
On Mon, 17 May 2010, Miles Fidelman wrote: Has anybody done this? Any suggestions on where to start - both re. cabling (USB vs. serial cross-over), and/or software? Hi Miles. Many of us have done this for years and years. You can go with a serial console over rj45 (including bios level tools) like iLO or DRAC or you can get Linux to provide you with a 'software serial console' that will be available from the bootloader (lilo or grub) onwards. A quick Google should turn up howtos on how to configure Grub and friends. I've always used true serial ports to do this although I understand it is possible via usb-serial connectors. You can use any serial terminal app to provide access to the serial port. I prefer minicom but there are lots of options. Keep security in mind when doing this. If someone gets root access[1] to one of the servers then they can 0wn the other one. Don't cross-connect the serial consoles unless the servers are in the same 'security domain'. [1] You can restrict who can talk to minicom for example. Cheers, Rob -- Email: rob...@timetraveller.org Linux counter ID #16440 IRC: Solver (OFTC Freenode) Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
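The 'software serial console' route can be sketched for a Grub2-era Debian box; the device (ttyS0), speed, and getty line below are assumptions to adapt to your cabling:

```
# /etc/default/grub -- let grub and the kernel talk to the first serial
# port at 115200 baud (both are assumptions; match them to your setup)
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"

# /etc/inittab -- a login getty on the same port once the system is up
T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100
```

Run update-grub after editing /etc/default/grub; the other end of the cross-over cable then sees everything from the bootloader menu onwards in minicom or any other terminal app.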
Re: apt-get dist-update failure - can't boot
On Mon, 3 May 2010, Boyd Stephen Smith Jr. wrote: If a full-upgrade (previously known as dist-upgrade) throws errors, the last thing you should do is reboot. You should *fix the errors*; your system may not reboot cleanly until they are resolved. Well said. Rebooting in the middle of a dist-upgrade - I'm surprised it came up at all. Convincing people not to reboot in the face of a problem is a constant struggle. I penned the following thoughts on rebooting: http://practicalsysadmin.com/wiki/index.php/Reboot Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
Re: Linux should not be booting
On Thu, 11 Mar 2010, Carlos Davila wrote: Yet linux still boots. I am using Lenny and grub. Where is the kernel actually stored then? Hi Carlos. This is actually the expected behaviour with lilo. I've seen it myself many times with lilo. This is because lilo maps the blocks of the kernel directly rather than passing through the filesystem. As thib notes, Grub actually has a similar capability so it looks like you're using it. In any case the best way to get rid of a bootloader is to map in another bootloader. MS-Windows used to have an undocumented switch fdisk /mbr which would remap the MBR and erase any copy of lilo or grub present. I don't know if they still have that option. You can use dd to erase the MBR too. Commands to do this can be found with a little RTFM. If you do decide to erase the MBR using dd then make sure you back up all important data first. My preference has always been to get rid of one bootloader by replacing it with another. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
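The dd approach can be sketched safely against a throwaway file image first; point it at a real device (e.g. /dev/sda) only once you are sure, and back up first. Only the first 446 bytes of the MBR hold boot code; bytes 446-511 hold the partition table and boot signature, which this leaves intact:

```shell
#!/bin/sh
# Practice the MBR boot-code wipe on a *file* standing in for a disk.
IMG=${IMG:-/tmp/fake-disk.img}

# Stand-in "disk": 1 MiB of random bytes.
dd if=/dev/urandom of="$IMG" bs=1024 count=1024 2>/dev/null

# Zero only the first 446 bytes (the boot code).  conv=notrunc stops
# dd from truncating the rest of the image, so the "partition table"
# region and everything after it survive.
dd if=/dev/zero of="$IMG" bs=446 count=1 conv=notrunc 2>/dev/null
```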
Re: Linux should not be booting
On Thu, 11 Mar 2010, Tom H wrote: MS-Windows used to have an undocumented switch fdisk /mbr which would remap the MBR and erase any copy of lilo or grub present. I don't know if they still have that option. Undocumented? Yes, it didn't appear in any of their regular help sources, at least not back when I used to touch MS-Windows. The command above works pre-XP. For XP, it is fixmbr and/or fixboot. For Vista and Seven, it is bootrec /fixmbr and/or bootrec /rebuildbcd. Ah excellent. I'll make a mental note. Thanks, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
Re: /boot fs (was Re: /boot partition changes when it should not)
On Mon, 8 Mar 2010, Ron Johnson wrote: grub (and maybe lilo) never used to be able to boot from an xfs partition. Grub can boot from xfs now. Lilo always could. If you install xfs as the root filesystem on older versions of Debian Stable the installer is smart enough to realise that Grub won't cut it, and it installs Lilo. I noticed this behaviour has changed recently but I don't recall exactly when it changed. As for the shiver, I also am confused. A 64MB partition, though, really doesn't need a high-performance fs. ext2 is more than adequate. I certainly have no objections to separating /boot if it makes the bootloader happy. Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
Re: /boot partition changes when it should not
On Tue, 9 Mar 2010, Clive McBarton wrote: Yes, of course. I mean md5sum /dev/sda1. Hi Clive. If you don't mind me asking, why are you doing this? Are you concerned about corruption or someone (with root) compromising your kernel image, or perhaps something else? Also even if /boot was merely a directory on the root filesystem you could still md5sum all the files within it. Indeed aide and tripwire do just that. It's mounted read-only (actually also noatime, although that is implied by ro). The access times cannot change. Nor the other metadata. And in fact they don't: ls -Rl, ls -Rlc, ls -Rlu report no changes in the metadata. So you're wondering what is changing the checksum? The ext2/3 filesystem keeps metadata on mount times, number of mounts, etc. Merely rebooting would be sufficient to update the mount count and therefore completely change the md5sum. If you want to confirm that no files are changing take md5sums of all files and compare them file by file. As with any IDS keep your hash list off the system to avoid potential compromise. I do NO write operation whatsoever on it. It is not allowed to change in ANY way. To the extent that you can assert this. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
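The per-file checksum approach suggested above can be sketched with two small functions; in real use the baseline file should live off the monitored machine, for the same reason an IDS keeps its hash list off-system:

```shell
#!/bin/sh
# Per-file checksum baseline for a tree such as /boot, as an
# alternative to hashing the raw device (which the mount count breaks).

baseline_init() {  # $1 = directory to cover, $2 = baseline file
  find "$1" -type f -exec md5sum {} + > "$2"
}

baseline_check() { # $1 = baseline file; exits 0 only if nothing changed
  md5sum -c --quiet "$1"
}
```

`md5sum -c --quiet` reports only files whose checksums no longer match, so an empty, zero-exit run means the tree is unchanged.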
Re: /boot partition changes when it should not
On Tue, 9 Mar 2010, Clive McBarton wrote: umount /boot; mount /boot; dd_rescue /dev/sda1 /tmp/boot1; umount /boot; mount /boot; dd_rescue /dev/sda1 /tmp/boot2; diff /tmp/boot1 /tmp/boot2 Hi Clive. I've never used diff to compare binary files. Are the md5sums of the two files the same? Result: No change. Hence it does not increment a mount count as long as it is manually unmounted and remounted while the system is up. The filesystem sees no distinction between mounting during boot or mounting any other time. It does increment the mount count. I even went and confirmed this on one of my systems. Same situation - ext3 /boot. Use tune2fs -l on the device to take a look. Malicious modifying of files with a disk editor is exactly the undesired stuff that this whole checksumming is supposed to detect. Why not just use Aide? It's a path of least resistance IMHO and will produce a better overall result. To get an absolute, no write, ever, to the device, the OP will need to figure out how to force read only permissions on the device /dev/sda1, across boots. Phantastic idea! Can it be done? I have not heard about this yet. It would be great. Well that's a big topic in itself. I think you'd need to get in to mandatory access controls to do this in an effective way. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
Re: /boot partition changes when it should not
On Thu, 11 Mar 2010, Robert Brockway wrote: The filesystem sees no distinction between mounting during boot or mounting any other time. It does increment the mount count. I even went and confirmed this on one of my systems. Same situation - ext3 /boot. Hmm I knew I should have read to the end of the thread before replying. As Bob notes in another email, a read-only ext3 filesystem does not increment the mount count held within the filesystem. My original test was on a read-write mounted /boot. I have to say this surprised me also but it does make sense. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com Open Source: The revolution that silently changed the world
Re: Single root filesystem evilness decreasing in 2010? (on workstations) [LONG]
On Thu, 4 Mar 2010, thib wrote: OTOH - I haven't studied XFS - but from the little overviews I read about it, I suppose its allocation groups are a way to scale with this problem (along with other unrelated advantages like parallelism in multithreaded environments). What happens if a filesystem doesn't have anything like it? Filesystems will hit scale problems at some point. As you note AGs in XFS help it to scale a lot but you do need to be careful in selecting the number. Too many and you can become CPU bound. Maybe no-one cares because we currently don't have filesystems big enough to actually see the problem? Some people definitely do. I agree with that, but I know it's because I, personally, *need* to know what's going on, all the time. Some people are OK with letting a program (even such a critical one) do some magic; and without having tested any complex one, I suspect they try to KIS for the user. The problem is that if a backup system breaks you get to keep both pieces :) Failing to understand your backup system is a serious risk: you may find you can't do a DR under the worst case. The problem is, if there's a problem with the backup system itself, then it's going to be a long night. If there's no need for such software, I, again, agree, there's no use to take risks, even if they're minimal. Amanda is a good example. It keeps backup state information at the beginning of the tapes and allows the information to be dumped to a text file easily. I have done a 10TB SAN DR with Amanda and used printed out pages of the tape state information to guide me. It was relatively painless considering the amount of data I was bringing back. Considering your experience, I have to believe you; we can always backup very simply, even very large systems. It's just weird to picture, all these complex backup systems would be useless? (I know, it's not a binary answer, but you know what I mean.) I'm not saying they are useless but organisations do need to take more time considering DR I think. 
Large organisations will have fully operational DR sites and they can afford to run a database for their backup system since they can expect at least one of their sites to be operational at any given time. I have known people who run a copy of the backup DB on a laptop which is supposedly kept offsite. These laptops likely come on site occasionally and they are a prime candidate for bitrot. Anything that gets between me and data restoration makes me nervous :) And for those people who think that off-site/off-line backups aren't needed anymore because you can just replicate data across the network, I'll give you 5 minutes to find the flaw in that plan :) I guess I'm perfectly OK with that, but are we still talking about workstations? :-) I'm talking about servers. There is no substitute for offsite/offline backups and there never will be. This is one of the few topics where I will use absolute statements like this. You can never predict the nature of the failure. If you try to figure out how a failure will occur then you will sooner or later run into a failure of imagination. The only way to guarantee against a single disaster of a certain size is to physically separate the data stores by a sufficient distance and keep the backups offline. No technology can change this fundamental truth since our understanding of the possible failure modes will always be incomplete. My understanding is that the cached column of the output of free(1) is the sum of all pages, clean and dirty. The buffers column would be the Right. It might be nice if free displayed them separately. It would confuse people less, then :) /proc certainly presents the info. Check out the source of 'free' - it is a really simple application. Since there's no cached column for the swapspace, I guess no clean page gets pushed there, although it could be useful if that space is on a significantly faster volume. 
Anyway, the used column should be the total, actual swapspace used, so your comment kind of confuses me. Am I really wrong here? I'd recommend doing some reading. The cached system memory and the swap space displayed by free are really unrelated concepts (at least at the level we're talking about here). If you want to chat on IRC about fun subjects like caching and swap space sometime you can find me as Solver on Freenode or OFTC. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: Single root filesystem evilness decreasing in 2010? (on workstations)
On Sun, 28 Feb 2010, Stan Hoeppner wrote:

swap    4GB     -        may never need it, but u have plenty of disk
/boot   100MB   ext2     safe call, even if grub(2) doesn't need a /boot
/       40GB    ext2/3   journal may eliminate mandatory check interval
/var    up2u    ext2     sequential write/read, journal unnecessary

Hi Stan. Questions about the need for a journal aside, if you run ext2 then you will get a fsck following a crash _whether you need it or not_. Yes you can bypass it but the filesystem will be marked dirty. My recommendation is to not use ext2 unless you have to and then only on small filesystems.

/home   up2u    xfs      best performance for all file sizes and counts

As a long time sysadmin I counsel against mixing filesystems like this unless there is a compelling reason. Using different filesystems drives up management overhead. The tools and procedures to fix different filesystems are different so you're expecting someone to have to deal with both cases in the event of problems. Methods to backup can also vary, etc. This is an example of the classic 'heterogeneous vs homogeneous system' argument applied to filesystems. *You may trust ext4 at this point, but I, and many others don't. xfs beats ext4 in every category, so why bother with ext4? I do rather like xfs and it can be used for every regular filesystem on the box. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: Single root filesystem evilness decreasing in 2010? (on workstations)
On Sun, 28 Feb 2010, thib wrote: Usually I never ask myself whether I should organize my disks into separate filesystems or not. I just think 'how?' and I go with a cool layout without thinking back - LVM lets us correct them easily anyway. I should even say that I believed a single root filesystem on a system was a first sign (you know what I mean ;-). But now I'm about to try a new setup for a Squeeze/Sid *workstation*, and I somehow feel I could be a little more open-minded. I'd like some input on what I covered, and more importantly, on what I may have missed. Maybe someone can point me to some actually useful lists of advantages to partitioning? I find a lot of BS around the net, people often miss the purpose of it. So, what are the advantages I see, and why don't they matter to me anymore? I've been pondering this myself of late. I was going to post to another list (SAGE or SAGE-AU) but you've done such a nice list of advantages/disadvantages that I think I'll piggy back here :) I'm a long-time sysadmin and generally deal with servers but of course people ask my advice on workstations too. * Filesystem corruption containment A corrupt root filesystem is _much_ worse than a corrupt non-root filesystem. As long as the root FS is ok the box will boot, possibly without network access. OTOH a box booted with just the root FS mounted is probably pretty useless. These days if a box has any filesystem problem I'm likely to boot it from a live cdrom to perform recoveries. In the past reasonable alternatives were available too though. Some Unixen in the 80s could boot from tape for example. In the end the final defence against a corrupt filesystem is the backups. * Free space issues Since I'm the only one who uses this machine, I should know if something may go wrong and eat up my entire filesystem (which is quite big for a workstation). Yes, I still monitor them constantly. On servers there is a concern about one part of the filesystem gobbling all the space. 
This has been one of the most compelling reasons to use multiple partitions. Some filesystems such as XFS and ZFS allow you to effectively set quotas on parts of the filesystem. I think we'll see this becoming more common. This takes away a big part of the need for multiple filesystems. * Specific mount options According to the Lenny manpage, mount(8) --bind won't allow me to set specific options to the remounted tree, I wonder if this limitation can possibly be lifted. If not, I think a dummy virtual filesystem would do the trick, but that seems kludgy, doesn't it? Pointers? I guess I could live without it, but I would actually find this quite annoying. This is a good point. I actually hadn't considered this in my list. I'll respond by saying that in general the mount options I use for different filesystems on the same box do not vary much (or at all) in practice. If I want a filesystem marked noatime then I probably want all the filesystems marked noatime. There are exceptions to this of course. * System software replacement Easier to reinstall the system if it's on separate volumes than conf and data? Come on. That's true but the time savings is not terribly great IMHO. The system can be backing up and restoring the data while the human is off doing other stuff. Saves computer time (cheap) but not human time (expensive). For a workstation, I don't need a fast system recovery mechanism, and I want to minimize my backup sizes. I'd rather save a list of selections rather than a big archive of binaries. I recommend backing up all system binaries. It's the only way you can guarantee you will get back to the same system you had before the rebuild. This is most important for servers where even small behavioural changes can impact the system in a big way. See this link for my talk on backups which goes into this issue further: http://www.timetraveller.org/talks/backup_talk.pdf All the info in this talk is being transferred to http://www.practicalsysadmin.com. 
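The per-directory quota idea mentioned above can be sketched with XFS project quotas; the project id, name, path, and limit below are illustrative, and the filesystem must be mounted with the pquota option:

```
# /etc/projects -- map a quota project id to a directory subtree
42:/srv/mail

# /etc/projid -- give the project a human-readable name
mail:42
```

With those two files in place, `xfs_quota -x -c 'project -s mail' /srv` tags the tree as belonging to the project and `xfs_quota -x -c 'limit -p bhard=50g mail' /srv` caps it, giving the space-containment benefit of a separate partition without actually carving one out.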
* Fragmentation optimization One of the most obvious advantages, and usually my main motivation to separate these logs, spools, misc system variable data, temporary directories, personal data, static software and configuration files. This is less of an issue than it used to be. Even ext2 will work towards minimising fragmentation. Several *nix filesystems now allow for online defragmentation (eg, xfs). I expect this problem will completely vanish in the future. * Metadata (i-node) table sizes While this may be a problem now I think it will be less of a problem in the future as some filesystems already allow you to add i-nodes dynamically and this will increasingly be the case. * Block/Volume level operations (dm-crypt, backup, ...) Encryption (with LUKS) in particular should beat any implementation at filesystem level. I don't have any number to back that up, however
Re: Single root filesystem evilness decreasing in 2010? (on workstations)
On Sun, 28 Feb 2010, Clive McBarton wrote: Ignore swap, that's just small stuff, especially with 3GB. You could have 64GB and it would still be not that important. Put it on any partition or file you want. The rule is 1:2 BTW. Hi Clive. I liked the rest of your post but I did want to make one little comment here. The 1:2 rule was true on some versions of *nix in the past. It has never been true on Linux and is not AFAIK true on any modern version of *nix. With improvements in disk capacity outstripping improvements in disk i/o by a significant margin 1:2 would mean that the system would be effectively unusable from thrashing long before you had used half of the allocated swap. I cover this a little more here: http://practicalsysadmin.com/wiki/index.php/Swap_space Cheers, Rob Well, here it is; so, should I do it? If you feel like tinkering and sorting out problems, then yes. If you want to just get your computer running and never think about it again, then no. -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: Single root filesystem evilness decreasing in 2010? (on workstations) [LONG]
On Thu, 4 Mar 2010, thib wrote: If restore speed is really that critical, it should still be possible to generate an image without including the free space - I know virtualization techs are doing it just fine for most filesystems. Maybe we misunderstood each other - saw a different problem. Possibly. I didn't mean to suggest that dd was a good way to backup. I think it is a terrible way to backup[1]. I was talking about dump utilities. I started using dump on Solaris in the mid 90s and really like the approach to backing up that dump utilities offer. On Linux I use xfs a lot and backup with xfsdump in many cases. [1] A long time ago I used to use it to backup MS-Windows systems from Linux but disks grew so much it became infeasible. I recommend backing up all system binaries. It's the only way you can guarantee you will get back to the same system you had before the rebuild. This is most important for servers where even small behavioural changes can impact the system in a big way. So you don't trust Debian stable to be stable? :-) Actually I'd say Debian is best-of-breed when it comes to backporting security patches to retain consistent functionality. Having said that, system binaries represent an ever reducing proportion of total data on a computer system. When I first started with Linux the OS took up about 80% of the available disk space that I had. Today I'd be generous if I said it took up 2%. So even if there is an alternative, backing them up now is hardly onerous and improves the chances of a successful disaster recovery. I cover this more in the backup talk. Thanks a lot; that's a talk full of useful checklists. I'll definitely eat your wiki pages when I have the time. Great. I'm gradually adding more and more info to the site. While this may be a problem now I think it will be less of a problem in the future as some filesystems already allow you to add i-nodes dynamically and this will increasingly be the case. 
I'm not sure I follow you, but that sounds cool. Could you elaborate? Sure. GPFS (a commercial filesystem available for Linux) allows for the addition of i-nodes dynamically. We can expect more and more dynamic changes to filesystems as the science advances. I once nearly ran out of i-nodes on a 20TB GPFS filesystem on a SAN. Being able to dynamically add i-nodes was a huge relief. I didn't even need to unmount the filesystem. Anyway, my preference isn't based on my own experience so I'm not actually using anything like that, but I'm willing to look at and try fsarchiver and see if it can really beat simple ad-hoc scripts for my needs. Or something heavier, just for fun (Bacula?). I'm fairly particular about backup systems. I think most people who design backup systems have never done a DR in the real world. I seem to end up having to do at least one large scale DR per year. I've done two in the last month. I've done several DRs in the multi-TB range. Virtually every DR I've done has had a hardware fault as the underlying cause. In several cases multiple (supposedly independent) systems failed simultaneously. The core of any DR plan is the KISS principle. There's a good chance that the poor guy doing the DR is doing it at 3am so the instructions need to be simple to reduce the chance of errors. If the backup solution requires me to have a working DB just to extract data or wants me to install an OS and the app before I can get rolling then I view it with extreme suspicion. And for those people who think that off-site/off-line backups aren't needed anymore because you can just replicate data across the network, I'll give you 5 minutes to find the flaw in that plan :) Ah but they are. Cache pages may be clean or dirty. Your disk cache may be full of clean cache pages, which is just fine. Am I interpreting the output of free(1) the wrong way? Sort of :) Free is telling you the total memory in disk cache. Any given page in the cache may be 'dirty' or 'clean'. 
A dirty page has not yet been written to disk. New pages start out dirty. Within about 30 seconds (varies by filesystem and other factors) the page is written to disk. The page in the cache is now clean. Unless your system is writing heavily most pages in the cache are likely to be clean. The difference is that clean pages can be dumped instantly to reclaim the memory. Dirty pages must be flushed to disk before they can be reclaimed. Using clean pages allows fast read access from the cache without the risk of not having committed the data. I describe this as 'having your cake and eating it too'[2]. More info can be found here: http://en.wikipedia.org/wiki/Page_cache [2] Paraphrase of English language saying.

cay:~$ free -o
             total       used       free     shared    buffers     cached
Mem:       3116748    3029124      87624          0     721500    1548628
Swap:      3145720800
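The clean/dirty split described above is visible directly in /proc/meminfo, which breaks down the single "cached" figure that free(1) reports; a small awk one-liner (assuming a Linux /proc) shows both numbers:

```shell
#!/bin/sh
# Compare the total page cache with the dirty (not-yet-written) part.
# free(1) shows only the Cached total; /proc/meminfo also exposes Dirty,
# the pages that must be flushed to disk before they can be reclaimed.
awk '/^Cached:/ {cached=$2} /^Dirty:/ {dirty=$2}
     END { printf "cached: %d kB, dirty: %d kB\n", cached, dirty }' /proc/meminfo
```

On a mostly-idle box the dirty figure is typically a tiny fraction of the cached figure, which is exactly the "most pages in the cache are likely to be clean" point made above.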
Re: lenny backups and recovery
On Fri, 15 Jan 2010, Paul E Condon wrote: Contrary to tldp advice, I think it is unnecessary to make backups of /bin or /sbin. These files are readily available from your favorite I'm very much a fan of backing up the entire system (with limited exceptions, such as an area set aside for the storage of downloads, that is not backed up). The problem is that unless you restore to the _same_ binary you can't guarantee the same behaviour. This is essential in the case of server backups where there is little tolerance for behavioural changes but still applies to desktop systems. A similar argument applies to the complete setup of the system. Despite the best change management, small and non-obvious changes can occur in a system. If you reinstall from the repo and restore the config from /etc you may still be missing something (eg, a symlink) and find an app is broken when it was previously working. Backing up the entire state of the system means that when you do a DR you get back a known working copy of the system, since it was working before. I'd consider having to reinstall from original media a failure of the DR system. When I first started with Linux the system (binaries and config) took up about 80% of the disk. Now it takes up less than 2%, so backing up the system components hardly adds any pressure to the backup system. It also allows for a much faster recovery following a DR. Also, important data has a way of hiding in more places on the disk than you think it will. If you start excluding parts of your system from the backups you increase the likelihood of missing something important in the backups. This is covered in more detail in my backup talk notes (which I did mention earlier in the thread): http://www.timetraveller.org/talks/backup_talk.pdf Debian repository, and if your system has crashed in some serious way, you would be well advised to download again, once you think you have resolved the issue that caused the crash. 
Think about it --- if you have to restore one of these, something really bad has happened and you can't be sure that something -else- bad hasn't also happened - but you haven't noticed it - yet. That's where testing of the backup system comes in. You never know that DR will work unless you test it. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
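The whole-system-with-few-exclusions approach argued for above can be sketched as a tar wrapper; the exclude list is illustrative (pseudo-filesystems plus the backup area itself) and everything *not* named in it is captured, which is the point:

```shell
#!/bin/sh
# Back up an entire tree, excluding only a short explicit list.
# Because exclusion is opt-in, stray important data can't hide.
system_backup() { # $1 = root of tree to back up, $2 = output archive
  tar -czf "$2" \
      --exclude=./proc --exclude=./sys --exclude=./dev \
      --exclude=./var/backups \
      -C "$1" .
}
```

For a real run you would call something like `system_backup / /var/backups/system.tar.gz`, keeping the archive (or a copy of it) offsite and offline.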
Re: lenny backups and recovery
On Thu, 21 Jan 2010, Paul E Condon wrote: Linux-Complete-Backup-and-Recovery-HOWTO. One of the first things I noticed about it is that it assumes that you already -have- a daily backup system in place, and then it makes no attempt to integrate what it is presenting with that system, or even suggest a review of the design of that system. And yet it claims to be 'complete'. I think we can agree, it is not complete. Hi Paul. Thanks for your reply. I probably read that doc years and years ago although I don't remember anything about it :) I'll go reread it. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: Octave slow?
On Thu, 21 Jan 2010, George wrote: Is it just me or is the GNU Octave on Lenny very slow? Hi George. Do you mean... * Slow compared to MATLAB? or * Slow compared to Octave on another platform? or * Slow compared to Octave on etch? or * Something else These days I use Octave like a glorified calculator but it all seemed fine to me. Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: lenny backups and recovery
On Thu, 14 Jan 2010, Adam Hardy wrote: Just recovered from a kernel-not-loading situation, without any data loss and happily wondering what I should do now to make sure I don't get the same adrenalin shot next time it happens. Hi Adam. Below are the notes from my talk on backups. My opinions on this topic are forged by 15 years of real world experience including very large disaster recoveries (10s of TB) following SAN failures. http://www.timetraveller.org/talks/backup_talk.pdf Cheers, Rob -- Email: rob...@timetraveller.org IRC: Solver Web: http://www.practicalsysadmin.com I tried to change the world but they had a no-return policy
Re: How to fix ipaddress
On Thu, 7 Jan 2010, J.H.Kim wrote: Sometimes the IP address is set to 192.168.0.7. But sometimes the IP address is set to 169.254.171.33, which is not set by me, and I don't know why that address is set on my machine. I want my IP address to be 192.168.0.7 always. Please tell me how to fix that problem. Hi. Addresses that start with 169.254 (169.254.0.0/16) are called 'link-local' addresses. They are allocated if a DHCP client fails to get an address from a server. The idea is that even if there is no DHCP server available, hosts on a LAN can still talk to one another. The /etc/network/interfaces you posted suggests you are using statically assigned addresses; however, the presence of a link-local address suggests that a DHCP client is running on the box. So I'd recommend figuring out if you have any DHCP client packages installed (dpkg, apt-get, aptitude) and seeing if they are running (ps aux | grep dh). If you post further information, like the output of the ps command above, we can help you dig further. Cheers, Rob -- I tried to change the world but they had a no-return policy http://www.practicalsysadmin.com
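A minimal sketch of the check described above (POSIX shell; the classification function runs anywhere, while the commands in the comments assume a Debian box and the usual dhclient process name):

```shell
#!/bin/sh
# Classify an IPv4 address as link-local (169.254.0.0/16) or routable.
is_link_local() {
    case "$1" in
        169.254.*) echo "link-local" ;;
        *)         echo "routable" ;;
    esac
}

# On a live system you would also look for a stray DHCP client, e.g.:
#   ps aux | grep '[d]hclient'
#   dpkg -l | grep -i dhcp
is_link_local 169.254.171.33   # prints "link-local"
is_link_local 192.168.0.7      # prints "routable"
```

If the second check turns up a running DHCP client on a statically configured box, removing or disabling that client is the fix.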
Re: ext2/3 vs xfs for maildir
On Fri, 1 Jan 2010, Volkan YAZICI wrote: I strongly agree, even in recent ext4 and nilfs benchmarks, reiserfs is generally the winner in many different scenarios. Besides, XFS is very disappointing at power failures and ext2/3 requires huge amounts of There are reasons for the observed XFS behaviour. If a file becomes corrupt, XFS zeros the file rather than leaving a corrupt file in place. There are pros and cons to this approach. In any case it is essential to always keep good backups. Rob -- I tried to change the world but they had a no-return policy http://www.practicalsysadmin.com
Re: ext2/3 vs xfs for maildir
On Fri, 1 Jan 2010, Stan Hoeppner wrote: Which filesystem is more appropriate for maildir use on a Postfix/Dovecot system, ext2/3 or xfs? This maildir will be storing multiple mail folders and files, some folders containing over 10,000 email files. I'm a big fan of XFS and have successfully used it in high performance mail servers. If xfs, what is the most appropriate mkfs.xfs command line for creating the filesystem best tuned for the above described maildir? I have no previous Here are some general optimisations for high performance systems that I've put together: http://www.practicalsysadmin.com/wiki/index.php/XFS_optimisation Dovecot has a few words to say on the subject too. Search for XFS in the link below. http://wiki.dovecot.org/MailboxFormat/Maildir If you go with xfs then be sure to use xfsdump/xfsrestore for backups if you can. Cheers, Rob -- I tried to change the world but they had a no-return policy http://www.practicalsysadmin.com
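As a rough illustration of the kind of tuning the links above discuss, here is a dry-run sketch. The device name, mount point and option values are assumptions to benchmark against your own workload, not recommendations; the script only prints the commands so nothing is formatted by accident:

```shell
#!/bin/sh
# Dry run: print (do not execute) example commands for an XFS maildir spool.
# /dev/sdb1 and /var/mail are placeholders; the option values are assumptions.
DEV=/dev/sdb1
MKFS="mkfs.xfs -l size=64m $DEV"                  # larger log for metadata-heavy maildir
MOUNT="mount -o noatime,logbufs=8 $DEV /var/mail" # skip atime updates, more in-core log buffers
echo "$MKFS"
echo "$MOUNT"
```

Maildir workloads are dominated by metadata operations (lots of small file creates, renames and unlinks), which is why the knobs worth testing are log size, log buffers and atime behaviour rather than data-block options.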
Re: Secondary Mail Server
On Wed, 25 Nov 2009, Tony Nelson wrote: My advice is not to have a secondary MX, as it is just going to be the main target of spammers, as secondary MX servers usually don't receive the care given to primary MX servers. It might well cause a lot of backscatter spam, as spam accepted during the SMTP transaction will be rejected later, when your primary MX gets it, by sending a bounce message to some innocent party. This is the reason that it is now necessary to verify the delivery address during the initial SMTP transaction. It is backup MXs not doing this that causes backscatter spam. The OP mentioned that he needed to do this and was hoping for a way around it. To the OP: No there is no way around this requirement thanks to the spammers. You may want to verify users via LDAP on each MX. Rob -- I tried to change the world but they had a no-return policy http://www.practicalsysadmin.com
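For what it's worth, on a Postfix backup MX the recipient verification described above is usually done with a relay recipient map. A sketch of the relevant main.cf fragment, assuming Postfix; the domain and map path are placeholders:

```
# /etc/postfix/main.cf on the backup MX (sketch; example.org and the
# map path are placeholders for your own domain and recipient list)
relay_domains = example.org
# Reject unknown recipients during the SMTP transaction rather than
# accepting the mail and bouncing it later:
relay_recipient_maps = hash:/etc/postfix/relay_recipients
```

The map itself can be generated from LDAP, as suggested above, so both MXs reject the same set of invalid addresses.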
Re: Secondary Mail Server
On Wed, 25 Nov 2009, John Hasler wrote: That some organizations ignore the standard and deliberately configure their servers to give up after a few hours. I've been seeing less of that. My recent experience is that even organisations pushing a lot of mail will keep retrying for 24 or 48 hours. Having said that, I have no problem with using backup or multiple primary MXs if they are properly configured. They should all:

1. Reject undeliverable mail in the first instance.
2. Use the same anti-spam strategies.
3. Receive the same care as any other server (patching, etc).

Rob -- I tried to change the world but they had a no-return policy http://www.practicalsysadmin.com
Re: how to solve the problem Could not bind sock on port 21: Address already in use?
On Thu, 5 Mar 2009, Star Liu wrote: i changed the port my ftp server use, then it starts. it's good, i'm not so stupid. Hi Star. I'd recommend against solving the problem that way. Ports are standardised so they may be found easily by those who need them. As others have noted, inetd is the Internet superserver. If you look in /etc/inetd.conf (or /etc/xinetd.* if using xinetd) then you should see an entry where it is starting an ftp server on tcp/21. It shouldn't be starting anything else there unless you have a honeypot set up. So I'd recommend finding out what inetd is starting, disabling it, and starting your ftp server on the correct port. Having said that, I wouldn't recommend ftp for anything except anonymous access. sftp is now widely supported in file transfer clients, even on MS-Windows. sftp is a lot more secure and doesn't have the problems with firewalls that ftp does. Cheers, Rob -- I tried to change the world but they had a no-return policy
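To illustrate reading the inetd entry described above, a small sketch (the config line is a made-up sample; on a real box you would read /etc/inetd.conf itself):

```shell
#!/bin/sh
# Parse an inetd.conf-style line to see which daemon owns the ftp service.
# The sample line is illustrative; on a Debian system you would then
# disable the entry with: update-inetd --disable ftp
sample='ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd'
daemon=$(echo "$sample" | awk '$1 == "ftp" { print $6 }')
echo "$daemon"   # prints "/usr/sbin/in.ftpd"
```

Field 6 of an inetd.conf entry is the server program path, which tells you exactly what was grabbing tcp/21.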
Re: Who is logged into this box?
On Sun, 11 Jan 2009, Dotan Cohen wrote: On a machine that I have root access to, how can I see who is logged into the machine? Specifically, I suspect that a malicious entity is logging on in a compromised account over SSH, even while the account's user is sitting at the machine and logged in, so if I can catch two simultaneous login sessions (one on the physical hardware, one over ssh) then I can be sure. Thanks. w and who have been mentioned. I generally prefer finger (which runs quite happily locally without a fingerd to connect to). You probably also want to look at last[1], which will show a history of when users were logged in. But... if you really think the a/c has been compromised then don't wait for the baddie to log in again. Lock the account. Scan the box for anomalies (eg, chkrootkit) and take a particular interest in that a/c. If you don't find any evidence that the baddie broke root then you may wish to reset the a/c password and move on. If you do find evidence that the baddie broke root then best practice is to restore the box from known good backups. You can never guarantee that you have found all of the backdoors that a cracker may have left on a system. I'll stop now as there is a lot more I could say on this topic but it isn't necessary at this stage. [1] I comment out the entry concerning wtmp in /etc/logrotate.conf as this allows the login history to remain indefinitely. Even for multi-user boxes that have been running for years I haven't found a problem doing this. wtmp is tiny so disk space is hardly an issue. Cheers, Rob -- I tried to change the world but they had a no-return policy
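The "two simultaneous sessions" check the poster wants can be sketched like this (the who output below is a made-up sample with placeholder usernames; on a live box you would pipe the real who command in):

```shell
#!/bin/sh
# Flag accounts with more than one simultaneous session.
# Sample 'who' output (stand-in data); replace with: who | awk ...
who_sample='alice tty1 2009-01-11 10:00
alice pts/0 2009-01-11 10:05 (192.0.2.9)
bob pts/1 2009-01-11 09:58 (192.0.2.7)'
dups=$(echo "$who_sample" | awk '{ seen[$1]++ } END { for (u in seen) if (seen[u] > 1) print u }')
echo "$dups"   # prints "alice"
```

Here alice is flagged because she appears on both the console (tty1) and a network pseudo-terminal (pts/0), which is exactly the pattern described in the question.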
Re: Using old diskless machine as X terminal
On Mon, 22 Dec 2008, Ross Boylan wrote: I switched to trying to get a 100Mhz Pentium with 64MB of RAM working. Unfortunately, it can't boot from CD-ROM (maybe something broke--the CD ROM is still readable, though). Nor does it directly support network booting. Its disks are basically full; it's running Windows NT 4, but my other family members are finding it intolerably slow. I was hoping it would be adequate as an X terminal. There is an easy way to turn an MS-Windows system into an X terminal, although I'm not sure it will work with something as old as NT4.

- Install Cygwin.
- Install the X server.
- Don't bother installing any other Cygwin tools as you won't be using them.
- Use X -query as you normally would to connect to a remote display manager that is accepting remote queries.
- Login.

If you set the X server to be full screen then you can alt-tab between the MS-Windows environment and the *nix environment. It can take some time to explain to users that the apps they are running are not running on the local box. Otherwise, for a box that can't boot from cdrom or NIC, it would seem you'd be stuck with floppy booting. Rob -- I tried to change the world but they had a no-return policy
Re: XDMCP
On Sun, 29 Apr 2007, Daniel D Jones wrote: One more thing: There is a reported but unfixed bug in kdm in Debian/Ubuntu going back a long way which is causing some xdmcp configurations to fail when they really are ok. If you look in your logs you'll see reports of kdm_greet getting memory corruption if you are getting this problem. As a result of this problem I recently migrated a bunch of thin client servers to use gdm instead of kdm. You didn't specify which log but I didn't find any such errors in either kdm.log or Xorg.0.log, nor did I find any other related errors in the log. Maybe I'll try switching to xdm just to see. It depends on how you have syslog configured. I recommend running a debug log to catch all the info. You can grep for the following error coming from kdm_greet:

Internal error: memory corruption detected

Cheers, Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net Contributing Member of Software in the Public Interest
Re: XDMCP
On Sat, 28 Apr 2007, Daniel D Jones wrote: Running unstable. Trying to get XDMCP working via KDM. On the local host, everything works. KDM gives me a graphical login prompt, and KDE loads when I log in. From a remote machine, however, I get what appears to be a pure X session with no window manager running. I get the hollow X cursor, and am able to move it with the mouse. However, neither left nor right clicks do anything - no menu or anything. Clicking and dragging doesn't generate a dotted outline selection. No key presses appear to do anything. Not sure where to go from here. Hints and suggestions welcome. Check out /etc/kde3/kdm/Xaccess to see what sorts of xdmcp access you are allowing. At the least you want to offer:

* #any host can get a login window

You may also want to allow:

* CHOOSER BROADCAST #any indirect host can get a chooser

Also you need to enable xdmcp in kdmrc:

# Whether KDM should listen to incoming XDMCP requests.
# Default is true
Enable=true

Restart kdm at this point. Make sure you back up kdmrc before making any changes so you can roll back if you break it. One more thing: there is a reported but unfixed bug in kdm in Debian/Ubuntu going back a long way which causes some xdmcp configurations to fail when they really are ok. If you look in your logs you'll see reports of kdm_greet getting memory corruption if you are hitting this problem. As a result of this problem I recently migrated a bunch of thin client servers to use gdm instead of kdm. Cheers, Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net Contributing Member of Software in the Public Interest
Re: system is up 1 year
On Sun, 10 Sep 2006, Ron Johnson wrote: That is if the kernel is at (more than slight) risk of infection. If you sit behind a firewalling router, don't run an httpd, an ftpd, etc, how much at risk are you? As Marc Wilson said, it depends. A local root exploit (in the kernel for example) combined with a remote exploit that does not itself grant root access can equal a remote root exploit. Wham bam, r00ted system. Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net
Re: Shutdown my Laptop? Why should I?
I'm going to piggyback on a couple of responses here... On Wed, 12 Jul 2006, Digby Tarvin wrote: On Wed, Jul 12, 2006 at 10:45:20AM -0700, Greg Ryman wrote: I would say to do a reboot and possibly a file system check once a month to avoid corruption and unintended loss of data. Other than that, you don't need to reboot. Greg, what makes you think this is needed? What FS do you usually use? I would also suggest a reboot any time you use apt to do an upgrade, I'd modify this a little. I schedule a reboot to follow any significant security upgrade or when important libraries are updated. In essence the key is to avoid leaving compromised binaries and libs in memory even if the on-disk copies are fixed. If in doubt, schedule a reboot. or otherwise change or reconfigure your system. It is much easier to solve a booting problem if you can remember what has changed since it was last working... As Debian security updates are (in general) backported, Debian has an excellent record for stuff not breaking on a security update (unlike some other major OSes and distros I could mention). I _only_ use Stable in production and I keep even backports to a minimum for production boxes. Right now none of the production Debian boxes under my control have any backported packages. It is the same logic as for servers which run 24/7. Exactly. Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis. If you are emailing regarding an open ticket please consider mentioning the ticket ID as this will assist us in responding as quickly as possible.
Re: What's wrong with X forwarding / remote X logins?
On Mon, 10 Apr 2006, Christian Pernegger wrote: Is X11's network transparency a thing of the past and not supposed to No way. I use it every day of the week (and the weekend); I'm sitting at an X terminal right now. Network transparency is the basis of the LTSP project and others. You note you are using Debian Testing. Personally I only use Stable in production environments (with very judicious use of backports). Who knows what may be broken in Testing at any given time. Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: backing up a drive
On Mon, 20 Mar 2006, Monique Y. Mudama wrote: I seem to recall that it doesn't. I believe that dd will cause the partition to think it's the size of the original partition. So it works, but you can only use as much space as the original hard drive had. That's right. It does work insofar as the original filesystem is restored, and all it costs you is a little wasted space. For most people that's a small price to get their system back working, at least in the short term. There may be a way around this limitation or to fix it after performing dd. Following a reimaging from dd, the options to expand the filesystem are:

1. Resize the filesystem using suitable tools if possible.

or

2. Take the data off using a tool like dump, xfsdump (for xfs), cpio or tar and remake the filesystem before restoring.

Of course, if any of these work then use them instead of dd in the first instance. dd is really only useful for backups where other tools may not work well (eg, to back up an NTFS filesystem from Linux when you fear making a new NTFS filesystem may be difficult in the future). dd takes up more space (since it backs up the filesystem rather than the files), is very rigid and does not allow for anything except a full backup. Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
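Option 1 above, for an ext2/3 filesystem, can be sketched as a dry run (the device name is a placeholder and the commands are only printed, not executed):

```shell
#!/bin/sh
# Dry run: grow an ext2/3 filesystem after a dd restore onto a larger
# partition. /dev/sdb1 is a placeholder; remove the echoes to run for real.
DEV=/dev/sdb1
FSCK="e2fsck -f $DEV"   # a forced fsck is required before resizing
GROW="resize2fs $DEV"   # with no size argument, grows to fill the partition
echo "$FSCK"
echo "$GROW"
```

This only recovers the wasted space on the new, larger partition; it does not help when restoring to a smaller one, which is where option 2 (dump and remake) comes in.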
Re: backing up a drive
On Mon, 20 Mar 2006, Philippe De Ryck wrote: A pointer: take a dump of the entire drive (dd if=/dev/hda) and not just a partition. I've had some systems that wouldn't accept the restored version (PC wouldn't boot anymore due to no OS found) unless the whole disk was restored. I've done this plenty of times over the years. You just need to rerun a boot loader following the partition restore. Lilo does well for this. The boot loader can be run from a local Linux partition or from a live cdrom, as the admin prefers. Think about disaster recovery whichever way you go (eg, the entire disk is toast). Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: System hangs at boot
On Sat, 18 Mar 2006, Leo Britto wrote: Hi everyone, I have a Debian Sarge 2.6.8 running on my laptop and I finally got my wireless adapter to work. But when I reboot it I just can't get past the jabberd startup. Earlier it was hanging on the MTA startup so I apt-get removed exim4-base and got rid of it, just to find out that the problem is after it. When I boot in safe mode (no services I guess) I can get to the bash w/o problems, start my wireless and use it. How can I find out what is causing this problem? I set up one Sarge laptop with a D-Link wireless card[1] using ndiswrapper where booting with 2.6.8-2-686 (a kernel compiled for the right cpu type) fails to load some of the relevant drivers while the default 2.6.8-2-386 works. I didn't get time to investigate thoroughly and just dropped it back to the -386 kernel. It's been working beautifully. If relevant, see if this helps. [1] I don't have the model available to me right now. Rob -- Robert Brockway B.Sc. Phone: +1-905-821-2327 Senior Technical Consultant Urgent Support: +1-416-669-3073 OpenTrend Solutions Ltd Email: [EMAIL PROTECTED] Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: [root user] How to disable root account?
On Fri, 25 Nov 2005, Maxim Vexler wrote: On 11/25/05, Robert Brockway [EMAIL PROTECTED] wrote: Anyone wanting to lock the root account (not a good idea IMHO) should have a root enabled session (sudo, su or whatever) put to the side and not touched during the procedure. This session would be used only to reverse the procedure if it was found that establishing superuser privs was no longer possible in new sessions. In the worst case, couldn't someone just boot from a livecd, run [passwd root], then [cat /etc/shadow | grep root] on the livecd and finally simply copy that entry into the locked out system's shadow file? Sure, but this involves bringing the system down. If you don't allow the three-fingered salute on the console to reboot or halt the system then it involves bringing the system down badly. If we are talking about a production system this is a _very bad thing_, even after hours. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: root can not delete a directory ,why?
On Wed, 21 Dec 2005, Gene Heskett wrote: Humm, you weren't perchance cd'd into the directory? Actually, Unix will let you delete a directory that is in use. Harley Hahn once described this as sawing off the branch you are sitting on. Eg:

$ mkdir /tmp/foo
$ cd /tmp/foo
$ rmdir /tmp/foo
$ pwd
/tmp/foo
$ ls -a
$ cd
$ cd /tmp/foo
bash: cd: /tmp/foo: No such file or directory

The primary cause of the error the original poster was concerned with is that the directory is a mount point, and I see the original poster confirmed this was the case. Cheers, Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: debian print server
On Tue, 20 Dec 2005, Clive Menzies wrote: Well, you don't need X; use lynx as a browser and use cups and if you're Indeed, there is no need for X in that case (or on any server in general). If the original poster needs a graphical app, all he has to do is display it on a remote box's X display. There is plenty online on doing this. serving windows clients, samba. Personally I use CUPS/IPP even when MS-Windows is involved; I find it easier all round. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: [root user] How to disable root account?
On Thu, 24 Nov 2005, Björn Lindström wrote: passwd -l simply sets the password to a value matching no passwords. sudo works by running SUID root, and so does not depend on a root password in any way. Actually that depends on how sudo is configured. In some configurations sudo does depend on the root password (rather than the user a/c password) for authentication. Anyone wanting to lock the root account (not a good idea IMHO) should have a root enabled session (sudo, su or whatever) put to the side and not touched during the procedure. This session would be used only to reverse the procedure if it was found that establishing superuser privs was no longer possible in new sessions. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
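For reference, the sudo configuration alluded to above looks like this (a sudoers fragment; 'Defaults rootpw' makes sudo ask for the root password instead of the invoking user's, which is why a locked root password can break sudo on such systems):

```
# /etc/sudoers (always edit with visudo, never directly)
# Authenticate with the root password rather than the user's own:
Defaults rootpw
```

With this flag set, locking root with passwd -l locks everyone out of sudo as well, which is exactly the failure mode the standby root session is meant to guard against.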
Re: system running out of free disk space
On Tue, 15 Nov 2005, Realos wrote: Can you people point me to documentation or give some hints on how to deal with such problems in Debian? I am not new to Linux but have relatively little experience with Debian. Consider using deborphan to locate packages that nothing else depends on (it handles more than just libraries). This is a great way to get a handle on useless packages you may have installed in the past. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
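A hedged sketch of how deborphan is typically used (dry run; the package names below stand in for deborphan's real output, and you should always review the list before removing anything):

```shell
#!/bin/sh
# Dry run: build (and print, but do not run) the removal command from a
# deborphan-style package list. The names are placeholders; on a real box
# you would use: orphans=$(deborphan)
orphans="libexample1 libdemo2"
REMOVE="apt-get remove --purge $orphans"
echo "$REMOVE"
```

Running deborphan again after each removal pass catches packages that only became orphaned by the previous round.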
Version numbers and backporting [was Re: A few general questions from a Debian newbie]
[Discussion on Debian version numbers and backporting] On Sun, 13 Nov 2005, Scott wrote: Perhaps, but it's also confusing to anyone coming to Debian from another Linux distro. Let's just hope they *properly* update the user agent string.. I say, that approach is fine, but why not show the right freakin version Because the version is 1.04, not 1.07. Changing the version number to 1.07 when an app is really 1.04 with backported fixes would be very bad. The version number can define features, defaults, bugs and behaviour. Cheers, Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: A few general questions from a Debian newbie
On Sun, 13 Nov 2005, Carl Fink wrote: BTW, I think Sarge is more than just usable for desktops right now. What I fear as a long-time Debian user is that it'll have plenty of time to BECOME obsolete, because Etch won't be released until 2010 or something. If Etch goes frozen by June of next year, the stable-only policy makes perfect sense. I think you are spot on. One of the reasons we love Debian is the rock solid stability. The discussions leading up to the last freeze led me to believe most users and developers want a release cycle faster than those in the past. IMHO 12-18 months would be ideal. I really think this is achievable without increasing the workload on the developers (something we can't ask). Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: A few general questions from a Debian newbie
On Sat, 12 Nov 2005, Scott wrote: And then OpenOffice.org 3, Firefox 2.0, GIMP 3.0, GNOME 2.16, and KDE 4.0 will be released within the following month discouraging many from sticking with Debian stable I think most people want a faster stable release cycle (though not as fast as many distros). This will mitigate the desire of users to use other distros for more up-to-date software. We shall see whether or not this will happen of course :) I certainly hope the release cycle can be sped up to 12-18 months. Cheers, Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
RE: Request to remove Information
On Sat, 12 Nov 2005, Paul Johnson wrote: A US national would be someone living legally in the US. You're *only* That's not the definition that is commonly accepted. The term 'national', when used as a noun, is interchangeable with citizen. This is what dictionary.com says about the noun 'national':

National, noun:
1. A citizen of a particular nation. See Synonyms at citizen.
2. A contest or tournament involving participants from all parts of a nation. Often used in the plural.

The people who may legally live in any given nation cannot be easily defined, as a right to reside is often covered under numerous different laws. Those who can legally live in the US include: US citizens, Canadians or Mexicans on a TN visa, H1-B visa holders, Australians on an E-3 visa, refugees, and the spouses of most of the aforementioned categories. allowed to hire US nationals inside US territory. This is not true regardless of which definition of national is used. Who can be hired in the US is a more complex issue than who can live there :) For example, the spouses of TN visa holders cannot work in the US but the spouses of E-3 visa holders can. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: A few general questions from a Debian newbie
On Sat, 12 Nov 2005, Scott wrote: I was absolutely blown away by this: The latest official Debian Sarge package for Firefox is for v 1.04! http://security.debian.org/pool/updates/main/m/mozilla-firefox/ I'm rather surprised to see this. Why? Firefox is currently @ 1.07 and every point release since 1.0 has been due to security issues. It's normal for the Debian security team to backport changes into the existing code base in Debian. Thus I expect the Firefox 1.04 package to be the vanilla 1.04 source plus backported security fixes. This is a _good_ thing as it means fewer changes on an update. This is one of the strengths of the Debian approach. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x365 for technical support. Call us in a crisis.
Re: i need a yum debian package
On Thu, 13 Oct 2005, Rakotomandimby Mihamina wrote: Hi, I need to build a yum repository on a debian server. Would you know where could I find a yum debian package? You may wish to use the tool alien (available in Debian) to convert a yum .rpm to a .deb. You'll need to make sure dependencies are satisfied. Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x7x365 for technical support. Call us in a crisis.
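The conversion itself is a one-liner; a dry-run sketch with a placeholder filename (the command is only printed here, since alien may not be installed):

```shell
#!/bin/sh
# Dry run: convert an .rpm to a .deb with alien. yum-1.0.rpm is a
# placeholder filename; --to-deb is alien's default output format.
CONVERT="alien --to-deb yum-1.0.rpm"
echo "$CONVERT"
```

Converted packages bypass Debian's dependency metadata to some degree, which is why checking the resulting package's dependencies by hand matters.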
Re: debian 3.0?
On Wed, 10 Aug 2005, Vicent wrote: I have had a simple google search with debian-30: http://www.google.com/search?hl=enq=debian-30btnG=Google+Search And that's right!: http://home.nktv.no/Filer/debian30/ Some effort should be made to independently verify the iso if it is coming from a non-Debian source. One option is to check the md5sums from multiple sources (although they could all be copying the same trojaned version). Rob -- Robert Brockway B.Sc. Phone: +1-416-669-3073 Senior Technical Consultant Email: [EMAIL PROTECTED] OpenTrend Solutions Ltd. Web: www.opentrend.net We are open 24x7x365 for technical support. Call us in a crisis.
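The mechanics of checking a downloaded image against an md5sum list look like this (using a tiny stand-in file rather than a real iso, so the example is safe to run anywhere):

```shell
#!/bin/sh
# Verify a downloaded file against a checksum list with md5sum -c.
# The "iso" here is a stand-in; in reality you would fetch the MD5SUMS
# file from several independent mirrors and compare them.
printf 'not a real iso' > /tmp/debian30.iso
md5sum /tmp/debian30.iso > /tmp/MD5SUMS
md5sum -c /tmp/MD5SUMS    # prints "/tmp/debian30.iso: OK" on a match
```

The check is only as trustworthy as the checksum list itself, which is the point made above about all sources possibly copying the same trojaned version.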
Re: how to check if its system bootup
On Sat, 6 Aug 2005, Bill Marcum wrote:
> On Sat, Aug 06, 2005 at 01:44:45PM +0200, LeVA wrote:
>> Hi! I wonder if there is a way to check in an init.d script whether it is system bootup, or the user just executing the script from a console while the system is already up. Any way to do this?
> Use the runlevel command.

Unfortunately that won't work. The runlevel when the init scripts are being executed is the same runlevel that the system will be running in.

I'll present three options here. Each one has issues which I will note. The list of issues is probably not exhaustive :)

Option 1: Check to see if the shell is interactive. The PS1 environment variable is set if the shell is interactive. A problem here is that a shell executed from cron and changing the state of a service would probably look just like a shell involved in the bootup. A crafty user could also fool this by deliberately making the shell non-interactive. This may not be a big deal.

Option 2: Check to see if the shell has a tty attached to STDIN. Use the tty command for this. A problem here is that a shell executed from cron and changing the state of a service would probably look just like a shell involved in the bootup.

Option 3: Check the state of the system through signature files. Some files do get added and removed during bootup. You could research files which are present during the service start but not present later. This is problematic as these files may vary from release to release and across different distros.

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
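A sketch of option 2 (the variable name is mine; as noted, cron jobs also have no tty, so they land in the "boot" branch too):

```shell
# Option 2: during bootup, stdin has no controlling tty.
if tty -s; then
    context=interactive   # a user ran the script from a console
else
    context=boot          # system bootup -- or cron, the known false positive
fi
echo "$context"
```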
Re: Was Space Recovery - Now X-windows problem
On Thu, 28 Jul 2005, John Graves wrote:
> xrdb: Can't open display ':0'

Hi John. That error can mean a number of things:

X is not running because it lacked a valid mode line (most common)
X is not running because it died for some other reason
X is refusing connections (less common)

Try starting X on its own and see what happens. If you get a grey screen and a mouse pointer then it worked OK; Ctrl-Alt-Backspace will kill the X server you started. If X fails to start it will leave an error message (which will turn up in .xsession-errors too).

If you need to use X in a hurry and kdm is broken, startx will get you out of trouble. If this works but kdm doesn't then the problem relates to kdm.

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
Re: Space recovery
On Wed, 27 Jul 2005, John Graves wrote:
> I am still discovering what I don't know... Among that is why, after I deleted a 17GB error log, df does not report that space as usable. Is there some process I need to start after deletion to actually recover that space?

In Unix the space will not be released back to the free list while a process has the file open. Use lsof or fuser to establish what processes have a file open. Whether or not you actually want to stop them will depend on your operational responsibilities and the role of the box.

Given that it is a log file I believe it will be syslogd. In Debian you can restart syslogd like this:

/etc/init.d/sysklogd restart

Check df afterwards and things should look better.

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
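A small demonstration of why the space isn't freed (paths are invented; this uses /proc directly so it is runnable without root, but lsof or fuser would show the same thing on a real system):

```shell
# A process holds a file open on descriptor 3...
exec 3> /tmp/demo-big.log

# ...then the file is deleted. The directory entry is gone, but the
# blocks are not freed while fd 3 stays open.
rm /tmp/demo-big.log

# On Linux, /proc shows the held-open deleted file. lsof +L1 (or
# fuser -v <file> before deleting) gives the same information.
readlink /proc/$$/fd/3

# Close the descriptor; only now does df recover the space.
exec 3>&-
```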
Re: Space recovery
On Wed, 27 Jul 2005, Robert Brockway wrote:
> On Wed, 27 Jul 2005, John Graves wrote:
>> I am still discovering what I don't know... Among that is why, after I deleted a 17GB error log, df does not report that space as usable. Is there some process I need to start after deletion to actually recover that space?
> In Unix the space will not be released back to the free list while a process has the file open. Use lsof or fuser to establish what processes

Oh strike that, you've already deleted it :) You could try to figure out this information through /proc but it probably isn't worth it. In future I recommend checking what processes have a file open with lsof or fuser before deleting the file.

In this case just restarting syslogd as per my earlier post is most likely the way to go. You may wish to wait until after hours if the box is in production and it can wait to get the 17GB back.

I really recommend against rebooting in the face of a problem like this. It doesn't help any more than restarting syslogd does, it interrupts the use of the system, and it could even create more problems than it solves.

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
Re: Why not a Desktop on a GNU/Linux Server
On Mon, 25 Jul 2005, Anthony Simonelli wrote:
> I am planning on running a Squid proxy, Postfix, Apache and webmail server here at my company and I was wondering if it was alright to run a desktop or just X Windows on this server. I love using the command line and have become pretty proficient with it (I always have a terminal open), but other people in my department are not, and a desktop will help them out a great deal. They're used to a Windows NT type interface. I have always read that a desktop should not be running on a server but there is never an explanation as to why. Is there any problem with running a desktop on a server other than performance issues?

I never run X on a server. The main issue is one of stability. X will bring down a Linux box faster than just about anything else. You want production servers to be stable; running X does not improve stability and in many cases reduces it. If you use X long enough you _will_ see it kill an otherwise healthy box.

Remember also that production servers should run as little software as possible for reasons of security and stability. All extraneous software is bad on a server and it doesn't get much more extraneous than X.

If they want to access the server graphically, what is wrong with starting a remote display or (better) starting the graphical app over ssh?

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
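For example, a graphical admin tool can be displayed on the user's desktop while running on the server, with no X server installed on the server itself (hostname and program name here are placeholders):

```
$ ssh -X admin@proxybox some-graphical-admin-tool
```

The server only needs the X client libraries the tool links against; the display, and the stability risk, stay on the desktop machine.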
Re: What's wrong with debian?
On Sat, 16 Jul 2005, Glenn English wrote:
> On Sat, 2005-07-16 at 18:02 -0700, mike wrote:
>> FWIW, I would pay for a subscription if it meant faster time to market for package updates.
> I'd pay for a subscription for not much reason at all -- just to help keep Debian fat and happy.

Hey, there's nothing wrong with making donations :)

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
Re: OT: Windoze spyware?
On Fri, 8 Jul 2005, Marty wrote:
> This is for readers who are unfortunate enough to have more Windows administration knowledge than I. The sole Windoze XP box on my LAN is sending http requests to a site named movies.go.com, although there is no web client running on the XP box (at least none obvious). I am analyzing the LAN traffic and appreciate any ideas about where to go next.

If the traffic from the Winbox is passing through a Linux box then you can use transparent proxying to force all HTTP requests to the Linux box and run them through Squid. You can then monitor the traffic to see what is happening and even block it.

> I've heard all the chilling spyware stories, but this is an eye opener for the sheer volume of data being passed 24/7 to or from this box. But what data and to whom?

It is often a good idea to isolate any Winboxes in their own LAN and firewall them from the other boxes as much as possible (including the aforementioned transparent proxy and Squid cache :). Then the users of the non-Win boxes can be less worried about network sniffing, attacks, etc.

Rob

--
Robert Brockway B.Sc.           Phone: +1-416-669-3073
Senior Technical Consultant     Email: [EMAIL PROTECTED]
OpenTrend Solutions Ltd.        Web: www.opentrend.net
We are open 24x7x365 for technical support. Call us in a crisis.
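A sketch of the redirect rule, in iptables-restore format (the interface name and Squid port are assumptions for this example; Squid itself must also be configured for transparent operation):

```
*nat
# Force all HTTP from the Windows-facing interface into the local Squid,
# where it can be logged, inspected and blocked.
-A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
COMMIT
```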
Re: System exiting due to kernel....
On Fri, 1 Jul 2005, Almut Behrens wrote:
> (In your case it hit the X server -- which might be considered suboptimal, with respect to keeping damage to the end user to a minimum... ;)

The needs of the many must outweigh the needs of the few (or the one) :) Debate on the best strategy for an OOM killer will go on forever :)

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
Re: Remote administration of a server
On Fri, 17 Jun 2005, Mitja Podreka wrote:
> I have an ADSL connection without a fixed IP. Can I then set some kind of IP netmask to restrict access from other IPs?

Yes you can. SSH can do this itself (if compiled against TCP Wrappers), or better, you can get a firewall to do it.

It is generally accepted that if you block password access and use PKI authentication only then further restricting access based on IP is not necessary. OTOH people do do this; we have one client who wanted us to do this with some of their externally visible systems. Here are a couple of things to consider:

1. The principles of least privilege and security in depth both endorse restricting the IP if you can.

2. If there is a remote exploit in sshd or something it relies on (like a library) you can rest easier if you know you've restricted access via IP.

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
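The TCP Wrappers variant looks like this, assuming sshd was built with libwrap (the network range is an example; substitute the range your ISP assigns you):

```
# /etc/hosts.deny -- default deny for sshd
sshd: ALL

# /etc/hosts.allow -- permit only the trusted range
sshd: 192.0.2.0/255.255.255.0
```

A firewall rule achieves the same thing one layer lower, which is why it's the stronger option: a wrapped sshd still has to accept the TCP connection before rejecting it.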
Re: Problems in linux software dev at enterprise level
On Tue, 14 Jun 2005, shatam bhattacharya wrote:
> 1. Lack of linux awareness among the clients, generally the clients are very hard to convince on TCO Vs legacy issues

I can't speak for India but I've had experience with clients and other companies in many different countries (Australia, Canada, US, EU) over a period of 11 years of using Linux, and TCO has been widely recognised as lower for a long time. Linux has had a reputation for being cheap and stable for a very long time. This doesn't stop people making unsubstantiated claims of course.

> 2. Lack of support from within the company for linux migration, marketability etc.

This may well be a valid concern. If your company decides to start using Linux in client systems it needs in-house experience, or support from a company that has it.

> 3. Lack of pool of skilled programmers in this domain etc. what I want to ask over here is, 'Is there any systematic approach

Do they mean in the company or in the world? If they mean in the world they need to check their facts :) I _do_ know that India has a lot of experienced Linux developers, as does every other country in the world.

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
Re: Remote administration of a server
On Sat, 11 Jun 2005, s. keeling wrote:
> And if anyone can get at your console, they can CTRL-ALT-Backspace to get to a logged in shell prompt. They may not still have your ssh-add

No they can't. A session managed by a display manager does not fall back to a shell. If you C-A-Backspace from any session managed by a display manager, the display manager will respawn and you will be presented with another graphical login window. Try it. If you really find this isn't happening then something is very broken in your X config.

You are probably thinking of startx, which calls xinit. It does not use the ~/.xsession file; it uses ~/.xinitrc instead, although a lot of people do symlink them together for convenience. If you C-A-Backspace from a session started by startx then yes, you will end up back at a shell prompt, unless you exec startx (or something else in the chain of processes). I haven't started my X sessions this way for more than 10 years. For some reason I've noticed a lot of people get the two methods of starting X mixed up. Man xdm and startx for more info.

> I prefer to ssh-add after an exec /usr/bin/blackbox in ~/.xsession,

The window manager must be the last thing run in ~/.xsession by definition. If you background the window manager then the session will exit as soon as you log in. If you don't background the window manager then nothing after it will run (whether you exec it or not).

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
Re: Remote administration of a server
On Wed, 8 Jun 2005, Mitja Podreka wrote:
> Can this 2nd box be my laptop or must it be something else?

It can be a laptop or anything else. Basically you aim to have Linux (or another Unix) running on the laptop so you can ssh into this box and gain access to the serial console of your server through minicom (or a similar app).

A box with the consoles of other boxes connected is sometimes called a console server (especially if this is its fulltime job). Two boxes may act as the console server for each other; as long as one is on the network you can access the console of the other.

The key is that the console server should be no less secure than the servers whose consoles it has. This is because if someone takes control of the console server it is only a matter of time before they gain access to the other boxes. Usually this isn't a big deal as a fulltime console server would not run any services and would allow access via ssh with PKI authentication only.

Even if a laptop did not run Linux fulltime it could be booted off Knoppix (with ssh started) to act as a parttime console server.

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
Re: Remote administration of a server
On Thu, 9 Jun 2005, Marty wrote:
> Regarding PKI, are there any Debian or non-Debian packages you recommend for this use?

Hi Marty. The ssh related packages in Debian contain everything you need.

> Can you elaborate on your reasoning here, for a non-expert in security, or at least point to some links? I am particularly interested in why you think PKI is better than the plain ssh password/login procedure for this application, and how you keep your keys secure (i.e. thumb drive? Floppy? Theft issues?)

Password access is highly susceptible to a brute force attack where the attacker just cycles usernames and passwords. Breaking in using a method like this isn't as hard as it first sounds, as most people use fairly easily guessed usernames (eg, first names) and passwords. I regularly see attackers try this on my ssh daemons that don't accept password authentication :)

PKI makes things much more difficult. An attacker would need both your private key and your passphrase to gain entry. Brute forcing an ssh daemon that only accepts PKI access is an intractable problem.

All of the hosts I have private keys for are under my control or my company's control. We have some clients that move around a lot and they do need to keep their private keys on a USB drive. As with everything in security some risk is always involved. A host's administrator may be sniffing keystrokes to get your passphrase and they may be automatically nabbing any private keys they see, but in reality this is not likely. If you think a machine is not safe, don't ssh from it.

Cheers,

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
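A sketch of setting up a passphrase-protected keypair (the directory, filename and passphrase are placeholders):

```shell
# Generate an RSA keypair. The private half stays with you, encrypted
# under the passphrase; only the .pub half ever goes on a server.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N 'a long passphrase' -f "$keydir/id_rsa"
ls "$keydir"

# Then append id_rsa.pub to ~/.ssh/authorized_keys on each server you
# want to reach, and turn off PasswordAuthentication in its sshd_config.
```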
Re: Remote administration of a server
On Thu, 9 Jun 2005, Roberto C. Sanchez wrote:
> Sadly, most people (myself included) have no passphrase on their SSH keys.

Hi. Using PKI with no passphrase drops the level of security significantly (as I'm sure you know).

> I also end up bouncing around a variety of machines (some Fedora, some Windows with PuTTY and some Windows with SSH.com). So the key thing is a pain in the butt. At least on the Linux machines it is straightforward and I set those up when I can to use keys instead of passwords.

May I introduce you to ssh-agent and ssh-add. They are a standard part of ssh and will operate between implementations (as long as no one has broken their implementation). This is the last line of my ~/.xsession file:

ssh-agent bash -c 'ssh-add < /dev/null && /usr/bin/fvwm2'

After entering my passphrase as part of the login process[1] I can ssh to boxes all over the world without so much as entering my passphrase again, and I'm doing it securely. Of course you need to keep your session secure if you are doing this (and I certainly do).

[1] I can't log in successfully without the passphrase.

Cheers,

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
Re: Remote administration of a server
On Sun, 7 Aug 2005, Mitja Podreka wrote:
> I will have (I hope :-) no problems setting up the server, I've done that already. What I worry about is how to administer the server from China? Will I only lack access to the reset button, or something more? Which software should I use for this? What should I take special care at?

If you're comfortable with the command line (or prepared to become so) this is pretty easy. You can administer the box through ssh without a problem. I'm in Canada and administer boxes in various countries via ssh on a daily basis, and have done so for many years. Disable password access and root access via ssh and only allow access to user accounts through PKI authentication.

You may need the console from time to time. The best option if you can manage it is to set up a serial console. The down side is this requires a 2nd box controlled either by yourself or someone you trust implicitly. With the serial console in place you can drop the box to single user mode, take it off the network, etc, all from the other side of the world.

With a serial console the only things you lack are access to the BIOS and the reset button. Some motherboards allow access to the BIOS through the serial console but this may be more expensive and is not a big deal IMHO. Similarly 3rd party hardware is available to allow serial access to any BIOS but it is expensive. Some housing facilities allow you to power cycle the box via a web interface. This is useful if you accidentally halt the box. As always, just be very careful when you are root.

Good luck,

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Ph: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest  http://www.spi-inc.org
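The ssh lockdown described above comes down to a few lines in the server's configuration (option names as in OpenSSH's sshd_config):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

Restart sshd after editing (on Debian, /etc/init.d/ssh restart), and keep your existing session open while you test a fresh login, so a mistake doesn't lock you out from the other side of the world.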
Re: running one thin client?
On Sun, 5 Jun 2005, Cameron Matheson wrote:
> So my computer is kind of loud which is drawing some complaints from other people in my room at night... anyway, I'm thinking about moving the computer downstairs and using my laptop as a thin client (I really hate turning my computer off and losing uptime... garg).

Excellent idea. I've run laptops as thin clients.

> Anyway, the laptop is a 365MHz Pentium 2... it runs OK, but it's way slower than my PC -- I'd really rather just use it as a dumb client (when it's hooked up to the network, and as a real computer when it's not). I checked out the Linux Terminal Server Project, but that looks like too much for this one-client setup. So what's the best way of going about this... just using gdm across the network to log in?

I've found xdm and kdm to be easier to set up than gdm for this purpose. YMMV. Read up on xdmcp.

> (btw, how is the performance for across-the-network X?) Nothing I do is too intense, except watch the occasional movie w/ mplayer. Any advice would be greatly appreciated.

Excellent. You're unlikely to notice a difference. It may even seem faster.

We've got a bit of info here that may be useful: http://www.opentrend.net/thinclients.shtml This is not technical info on setting one up, more like bandwidth used, questions answered, etc.

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Phone: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest (http://www.spi-inc.org)
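As a sketch of the XDMCP side, for kdm it amounts to one setting on the big machine (the file path is Debian's kdm location of that era, so treat it as an assumption) plus a query from the laptop:

```
# /etc/kde3/kdm/kdmrc on the desktop machine:
[Xdmcp]
Enable=true

# Then on the laptop, run a bare X server pointed at it:
#   X -query desktop-hostname
```

The laptop's X server then shows kdm's login window and all programs run on the big machine, which is exactly the thin-client arrangement described above.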
Re: Need help recovering from a power outage
On Mon, 9 May 2005, Paul E Condon wrote:
> Any ideas?

Have you tried remaking the journal with tune2fs -j? I've never tried this on a filesystem with an already existing journal (corrupt or otherwise) but it may rebuild the journal from scratch (which would be the smart thing for it to do).

** Backup before trying this or anything else **

You may wish to take a copy of the filesystem with dd, mount it loopback and experiment there, if you can afford the space.

Rob

--
Robert Brockway B.Sc.  Senior Technical Consultant, OpenTrend Solutions Ltd.
Phone: +1-416-669-3073  Email: [EMAIL PROTECTED]  http://www.opentrend.net
OpenTrend Solutions: Reliable, secure solutions to real world problems.
Contributing Member of Software in the Public Interest (http://www.spi-inc.org)
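A sketch of both steps (device and paths are examples; note that on a filesystem which already has a journal, rebuilding normally means removing the old journal first with tune2fs -O ^has_journal, which requires the filesystem to be clean and unmounted):

```
# dd if=/dev/hda5 of=/backup/hda5.img        # raw copy to experiment on
# mount -o loop /backup/hda5.img /mnt/test   # loopback mount the copy
# umount /mnt/test                           # unmount before journal surgery
# tune2fs -O ^has_journal /dev/hda5          # drop the (corrupt) journal
# tune2fs -j /dev/hda5                       # recreate it from scratch
```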