Re: [CentOS] Printers, aka an old time sysadmin
on 15:05 Tue 05 Apr, m.r...@5-cent.us (m.r...@5-cent.us) wrote:

> Well, today, I feel like a real, old-time sysadmin. Now, I didn't have to
> write a driver in assembly for the printer, but we got this huge, 44-inch
> HP Designjet z3200ps printer. Only supports Win and Mac. Fine, I hang it
> off of one of our servers on a subnet (at $0.96/foot paper, we're the
> only ones who print on it). Then I'm thinking that all I really need is a
> .ppd. My co-worker, who's also got a Mac, d/l's the Mac driver and
> extracts the .ppd. The Windoze one is apparently buried in a dll, you
> see.
>
> I then figured out how to hack a .ppd. First, I found an ifdef
> construction for Mac-only information. That worked on the small paper
> (24-inch width roll). Then the real paper, the 42-inch stuff. Why HP
> sells a 44-inch printer but 42-inch paper, dunno, but there's no option
> for large-format printing. After a pointless waste of half an hour on
> HP's live chat (not sure how many chats the guy was on), he tells me
> there's no driver. I call HP support, and talk to someone who seems to
> know a little more... but is sorta fuzzy on .ppd's, and then tells me
> that there ought to be an option to set a custom size, and seems to
> confirm what I read (in vi) in the ppd: that there are no settings for
> 42-inch paper.
>
> So I hacked it, and added settings for 42x34 and 42x60 (the usual size
> for posters). A lot was cut, paste, and substitute, but the one gotcha is
> that the actual paper size that the printer sees is in points. Once I got
> that, it worked beautifully.
>
> Anyone needs any info about hacking a .ppd, feel free to email me; if you
> have a beast of a z3200ps, I'll be glad to send you a copy of mine.

A task with a very laudable history:

http://oreilly.com/openbook/freedom/ch01.html

-- 
Dr. Ed Morbius, Chief Scientist /            |
Robot Wrangler / Staff Psychologist          | When you seek unlimited power
Krell Power Systems Unlimited                |                  Go to Krell!

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
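For anyone following along: PPD dimensions are in PostScript points (72 per inch), so 42x60 inches is 3024x4320 points and 42x34 is 3024x2448. Hand-added entries might look roughly like the following sketch; the option names here are illustrative, not copied from the actual HP PPD:

```
*PageSize 42x60in/42 x 60 in: "<</PageSize[3024 4320]/ImagingBBox null>>setpagedevice"
*PaperDimension 42x60in/42 x 60 in: "3024 4320"
*ImageableArea 42x60in/42 x 60 in: "0 0 3024 4320"
```

Matching *PageRegion entries and an entry in the *PageSize UI constraints would typically go alongside these.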
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
First: thanks very much for spelling this out, Ilyas. This was along the
lines of what I'd been considering. You addressed a number of concerns I
had (e.g.: non-blocking output), which is really helpful.

on 08:39 Fri 25 Mar, Ilyas -- (umas...@gmail.com) wrote:

> Hi! I'm using the following method for remote logging to catch logs from
> many servers. Nginx writes its logs into a fifo, which is created via the
> nginx init script:
>
>     cat /etc/sysconfig/nginx
>     ...
>     # syslog-ng support for nginx
>     if [ ! -p /var/log/nginx/access.log ]; then
>         /bin/rm -f /var/log/nginx/access.log
>         /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
>     fi
>     if [ ! -p /var/log/nginx/error.log ] ; then
>         /bin/rm -f /var/log/nginx/error.log
>         /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
>     fi
>     /bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
>
> Nginx just writes to the fifo as to a file. Nginx has non-blocking output
> to its logs, so if nobody reads the fifo, nginx doesn't stall on log
> writes.

Bingo.

> On the other side, syslog-ng reads the pipe:
>
>     cat /etc/syslog-ng/syslog-ng.conf
>     ...
>     source s_nginx_20 { fifo (/var/log/nginx/access.log log_prefix(nginx-access-log: )); };
>     source s_nginx_21 { fifo (/var/log/nginx/error.log log_prefix(nginx-error-log: )); };
>     ...
>     destination d_remote { tcp(remote.example.com, port(514)); };
>     ...
>     # nginx
>     filter f_nginx_20 { match(nginx-access-log: ); };
>     filter f_nginx_21 { match(nginx-error-log: ); };
>     ...
>     # nginx
>     log { source(s_nginx_20); filter(f_nginx_20); destination(d_remote); };
>     log { source(s_nginx_21); filter(f_nginx_21); destination(d_remote); };

Nice.

> To avoid syslog-ng problems on startup (e.g. if the fifo does not exist),
> the following solution is used:
>
>     cat /etc/sysconfig/syslog-ng
>     ...
>     # syslog-ng support for nginx
>     if [ ! -p /var/log/nginx/access.log ]; then
>         /bin/rm -f /var/log/nginx/access.log
>         /usr/bin/mkfifo --mode=0640 /var/log/nginx/access.log
>     fi
>     if [ ! -p /var/log/nginx/error.log ] ; then
>         /bin/rm -f /var/log/nginx/error.log
>         /usr/bin/mkfifo --mode=0640 /var/log/nginx/error.log
>     fi
>     /bin/chown nginx:root /var/log/nginx/access.log /var/log/nginx/error.log
>
> On the remote side (remote.example.com):
>
>     cat /etc/syslog-ng/syslog-ng.conf
>     ...
>     source s_net {
>         udp(ip(0.0.0.0) port(514));
>         tcp(ip(0.0.0.0) port(514) keep-alive(yes) max-connections(128));
>     };
>     ...
>     filter f_nginx_20 { match(nginx-access-log: ); };
>     filter f_nginx_21 { match(nginx-error-log: ); };
>     ...
>     destination d_nginx_20 { file(/var/log/nginx/access.log); };
>     destination d_nginx_21 { file(/var/log/nginx/error.log); };
>     ...
>     log { source(s_sys); filter(f_nginx_20); destination(d_nginx_20); };
>     log { source(s_sys); filter(f_nginx_21); destination(d_nginx_21); };
>
> In the same way I catch logs from 20-30 servers on one server, approx.
> 300GB of gzipped logs per day.

Great. That also answers the scaling question. We're comfortably under
that scale for now.

Very, very helpful post, thanks again.

> On Thu, Mar 24, 2011 at 11:23 PM, Dr. Ed Morbius dredmorb...@gmail.com wrote:
> > I'm looking for suggestions as to a good general method of
> > remote-logging services such as nginx or anything else which doesn't
> > support syslog natively.
> >
> > I'm aware that there's an nginx patch, and we're evaluating this. It
> > may be the way we fly. However, there are other tools which may not
> > have a patch, for which remote logging would be useful. Is there a
> > general solution (even something as naive as tailing local logs and
> > firing these off on a regular basis)? I've heard rumors of a Perl
> > script used for apache logs. Also that rsyslog supports logging from
> > local files to a remote syslog server, possibly. I'm RTFMing on that.
> >
> > Thanks in advance.
>
> -- 
> Ilyas R. Khasyanov
> Unix/Linux System Administrator
> GPG Key ID: 6EC5EB27 (Changed since 2009-05-12)
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 09:08 Fri 25 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Thursday, March 24, 2011 06:52:24 pm Dr. Ed Morbius wrote:
> > Right, and the general solution also generalizes to other tools.
> > Postgresql (which we aren't using currently) also has its own log
> > handler (a small frustration of mine with the database).
>
> PostgreSQL has had syslog support since version 7.x, with programmable
> facility information in /var/lib/pgsql/data/postgresql.conf. It's
> commented out by default; looking at a C4 server that has 7.4.30:
>
>     #syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog
>     #syslog_facility = 'LOCAL0'
>     #syslog_ident = 'postgres'

Good to know.

> (I don't have syslogging enabled for that box for PostgreSQL.)
> Sometimes it's still nice to see the stdout and stderr, though.

Of course it is. Most daemon / service utilities have the ability to
run non-detached, in debug mode. And you can always hunt down file
descriptors and nasty stuff like that, but devs of such abominations
should be hauled out and shot. Or bribed with beer until they do
provide the requisite foreground + stdout/stderr functionality.

> And I don't recall when or if remote support was added; 7.4 was the
> last version I actively maintained the RPMs for, and the 8.x databases
> I have running aren't using syslog.

If there's syslog support, rsyslog or syslog-ng can handle the remote
aspect.
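To sketch that last point against the quoted defaults (the loghost name is an example): once PostgreSQL logs to LOCAL0, a single rule in the stock syslog config forwards it remotely:

```
# /var/lib/pgsql/data/postgresql.conf (the quoted 7.4 defaults, uncommented)
syslog = 2                      # 2 = syslog only
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

# /etc/syslog.conf on the database host
local0.*        @loghost.example.com
```

The single `@` is classic UDP forwarding; rsyslog additionally accepts `@@` for TCP.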
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 20:19 Fri 25 Mar, Ilyas -- (umas...@gmail.com) wrote:

> Hi! Also note that:
>
> 1. logrotate won't rotate fifos/pipes if the option `notifempty' is
>    enabled in the logrotate profiles.
>
> 2. Enable buffering in syslog-ng.conf (the whole list of options from
>    my config follows):
>
>     options { sync (128); time_reopen (10); log_fifo_size (16384);
>         chain_hostnames (yes); use_dns (no); use_fqdn (yes);
>         create_dirs (yes); keep_hostname (yes); dir_perm(0755);
>         perm(0644); dir_owner(root); dir_group(root); owner(root);
>         group(root); log_msg_size(16384); };
>
> 3. Don't worry about blocking output in some services. If syslog-ng
>    listens on the fifo locally (on the same server/VPS as the daemon
>    whose logs we want to handle), any output will be buffered in
>    syslog-ng (with a few limits in the free version of syslog-ng). The
>    main idea here is that the other side listening on the fifo is a
>    locally-run syslog-ng.

My concern with buffering / blocking output has more to do with some
critical service saying "wups, no more serving until I can flush my log
buffers" than it does losing a few lines of logging periodically
(though that should also be minimized).

> 4. I used and am still using the opensource version of syslog-ng and
>    have no problems with load. Syslog-ng handles heavy load very well.

...
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 15:28 Fri 25 Mar, Les Mikesell (lesmikes...@gmail.com) wrote:

> On 3/25/2011 2:53 PM, Dr. Ed Morbius wrote:
> > My concern with buffering / blocking output has more to do with some
> > critical service saying "wups, no more serving until I can flush my
> > log buffers" than it does losing a few lines of logging periodically
> > (though that should also be minimized).
>
> Does this have to be centralized in realtime?

It'd be nice / helpful / useful.

We're at the point now where syncing daily will exceed local storage
allocations soon with projected growth rates. We could do more frequent
log rotation / distribution, but given the role and volume, real-time
(or very close to it) updates would be preferred, and workfactor is
largely orthogonal.

If we need to queue, we could always have rsyslog (we're using it, not
syslog-ng) write locally and rotate those frequently. There's still the
risk of a hiccup between nginx and rsyslog, but we can keep an eye on
that via monit.
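For the record, rsyslog can also queue to disk itself when the remote target is unreachable, which addresses the blocking concern directly. A sketch in rsyslog's legacy directive syntax (the loghost name and sizes are examples):

```
# /etc/rsyslog.conf: disk-assisted queue for the forwarding action
$WorkDirectory /var/lib/rsyslog        # where on-disk queue files live
$ActionQueueType LinkedList            # in-memory queue that spills to disk
$ActionQueueFileName fwdq              # prefix for the queue files
$ActionQueueMaxDiskSpace 1g            # cap the spill space
$ActionQueueSaveOnShutdown on          # persist queued messages across restarts
$ActionResumeRetryCount -1             # retry forever if the loghost is down
*.* @@loghost.example.com:514          # @@ = TCP
```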
Re: [CentOS] Kernel Panic on HP/Compaq ProLiant G7
on 13:18 Fri 25 Mar, Mogens Kjaer (m...@lemo.dk) wrote:

> On 03/24/2011 10:05 PM, Dr. Ed Morbius wrote:
> > You can create a timestamp cron job. Just a
> >
> >     */10 * * * * root logger --- TIMESTAMP ---
>
> syslogd already has this built in. It's normally disabled by the -m 0
> in /etc/sysconfig/syslog. Change the zero to 10, restart syslogd, and
> you get the same result.

Thanks, yeah, I wasn't sure if it was or wasn't; too many system
variants over the years.
Re: [CentOS] Default permissions for creating a new user
on 16:45 Fri 25 Mar, Todd Cary (t...@aristesoftware.com) wrote:

> I know this is a Linux 101 question; however, I am unable to locate
> the answer in my O'Reilly Linux book: how to set the default
> permissions when creating a new user. The default for the GUI in my
> newly installed CentOS 5.5 is 700. I usually use 774.
>
> And when root creates a new directory, is there a way to have a
> default there too?
>
> Lastly, if root or someone with root privileges creates a
> sub-directory, is there a setting so that the sub-directory will have
> the same owner/group and permissions as the parent directory?

man adduser -> FILES -> /etc/login.defs

At login, umask is set by the shell initialization. Check ~/.bashrc,
~/.bash_profile, /etc/bashrc, and /etc/profile, for the usual suspects.
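A sketch of the umask side, plus the closest stock answer to the inheritance question: a setgid bit on the parent directory makes new entries inherit its group (full permission inheritance would need ACLs). Paths here are examples:

```shell
# Show the effect of umask: 002 yields 775 directories and 664 files.
umask 002
d=$(mktemp -d)            # stand-in for the shared parent directory
chmod 2775 "$d"           # leading 2 = setgid: entries created inside inherit the group
mkdir "$d/sub"
stat -c '%a %G' "$d" "$d/sub"   # sub carries the parent's group and mode 2775
```

The setgid bit itself also propagates to subdirectories, which is why `sub` ends up 2775 rather than 775.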
[CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
I'm looking for suggestions as to a good general method of
remote-logging services such as nginx or anything else which doesn't
support syslog natively.

I'm aware that there's an nginx patch, and we're evaluating this. It
may be the way we fly. However, there are other tools which may not
have a patch, for which remote logging would be useful. Is there a
general solution (even something as naive as tailing local logs and
firing these off on a regular basis)? I've heard rumors of a Perl
script used for apache logs. Also that rsyslog supports logging from
local files to a remote syslog server, possibly. I'm RTFMing on that.

Thanks in advance.
Re: [CentOS] Kernel Panic on HP/Compaq ProLiant G7
Dave:

on 16:03 Thu 24 Mar, Windsor Dave L. (AdP/TEF7.1) (dave.wind...@us.bosch.com) wrote:

> Hello Everyone,
>
> I recently installed CentOS 5.5 x86_64 on a brand new ProLiant DL380
> G7. I have identical OS software running rock-solid on two other DL380
> ProLiant servers, but they are G6 models, not G7.
>
> On the G7, the installation went perfectly and the machine ran great
> for about 2 weeks, when it just seemed to stop. The system stopped
> responding on the network, and there was no video on the console (or
> remote console via iLO). It would not reboot or cold boot through iLO;
> I actually had to hold the power to turn it off and then hit it again
> to power up. This happened several times within a few days of each
> other. Each time, there was no evidence in any logs of a problem - the
> system just seemed to stop or lock up. We did have a CPU problem light
> appear on the front, so HP came in and replaced the one 4-core CPU.
> Since then, it has run as long as two weeks, but still crashes
> randomly.
>
> After the last reboot, I left the console in text mode on vt1, and
> when it crashed again this morning this was displayed on the screen:
>
>     CS: 0010 DS: ES: CR0: 80050033
>     CR2: 8100dc435cf0 CR3: 8a6ca000 CR4: 06e0
>     Process smbd (pid: 18970, threadinfo 81001529e000, task 81011f5347a0)
>     Stack: 81011e4e71c0 8100cf12a015 80009c41 81011e4e71c0
>            0001 00030027ea9d 8100cf12a011 81011e4e71c0
>            81010d9cf300 81011e4e71c0 8101044099c0
>     Call Trace:
>     [80009c41] __link_path_walk+0x3a6/0xf5b
>     [8000ea4b] link_path_walk+0x42/0xb2
>     [8000cd72] do_path_lookup+0x275/0x2f1
>     [80012851] getname+0x15b/0x1c2
>     [800239d1] __user_walk_fd+0x37/0x4c
>     [80028905] vfs_stat_fd+0x1b/0x4a
>     [80039fa2] fcntl_setlk+0x243/0x273
>     [80023703] sys_newstat+0x19/0x31
>     [8005d229] tracesys+0x71/0xe0
>     [8005d28d] tracesys+0xd5/0xe0
>     Code: 00 00 00 00 00 00 00 00 70 4d 4f 9d 00 81 ff ff 98 e4 4b dc
>     RIP [8100dc435cf0] RSP 81001529fd18
>     CR2: 8100dc435cf0
>     Kernel panic - not syncing: Fatal exception
>
> This suggests that something happened in a Samba process.

Correct.

If this is regularly happening in Samba, that would point to a problem
with your Samba config (either on that host, something remotely
stuffing bad packets at you, or likely in that case both, as bad data
shouldn't crash the host). If this is happening in different programs
over time, then the problem is likely /not/ software, but
hardware/firmware.

The LKML may be able to help you on your panic; please read their bug
posting guidelines /BEFORE/ posting.

> I have the Samba3x packages installed since we are beginning to
> introduce Win7 clients into our environment.

What happens if you take the Win7 clients away?

> Googling "Kernel panic - not syncing: Fatal exception" and CentOS

That is the generic kernel panic message. It's going to be
spectacularly unspecific.

> produced many hits, but nothing that seemed to exactly match my
> problem. Since this is the only G7 server I have here right now, I
> can't reproduce the problem on another machine. The G6s I have running
> the identical version of CentOS have no problems. I am trying to
> determine if this is pointing to a hardware or software issue. Some of
> the Google results suggested using a Centosplus kernel - is this a
> good idea?

Dell have had numerous issues with recent server editions; it's
possible HP have as well:

- If you haven't, configure the netconsole kernel module for
  kernel-enabled network logging of panics.
- Call HP and find out what the latest recommended BIOS and firmware
  upgrades for your system are. C-STATE has been a particular issue
  with Dell, and it's been disabled entirely in recent BIOS versions.
  I see below you've updated BIOS.
- Scan logs for other messages, particularly panics and/or ECC issues.
- If you can stand the downtime, run memtest86+ at least overnight on
  your RAM. A reboot indicates a failed test.
- Otherwise: try running with half your RAM swapped.
- Check/reseat all DIMMs, sockets, and cables. Some folks caution
  against this on the basis of connector wear, but if you've got a
  problem, this may help resolve it, and I've seen boxes shipped with
  components poorly or even un-cabled.
- Does a similarly equipped system exhibit the same problems?

> The server is a HP DL380 G7 Server with 4 GB RAM (1 DIMM 1333 MHz),
> one 4-core CPU (2133 MHz), 4 built-in Broadcom NetXtreme II BCM5709
> Gigabit Ethernet NICs, and a P410 Smart Array Controller. The P410 and
> the system BIOS have both been updated to the latest levels to see if
> that fixes the crashes, with no change.

Ugh. Broadcom's gotten better, but I prefer Intel NICs. Can't speak to
the others. And OK, you've updated BIOS.

-- 
Dr. Ed Morbius, Chief Scientist
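For reference, a netconsole setup might look like the following. The parameter format is documented in the kernel's Documentation/networking/netconsole.txt; all addresses and the interface name here are examples, not values from Dave's network:

```
# netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]
# Loaded at runtime; the target is the box running your syslog collector:
modprobe netconsole netconsole=6665@192.168.0.7/eth0,514@192.168.0.10/00:19:b9:aa:bb:cc
```

The receiving side just needs a syslog daemon (or even `netcat -u -l 514`) listening on the target port; panics then arrive over UDP even when local disks are wedged.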
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 16:35 Thu 24 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
> > I'm looking for suggestions as to a good general method of
> > remote-logging services such as nginx or anything else which doesn't
> > support syslog natively.
>
> logger

I'm familiar with it.

> It's part of util-linux, and should be on every CentOS box, unless
> something is badly wrong. It can take its stdin and syslog to any
> loglevel and facility, and can do so over any socket.

So: as part of a robust production system solution, how would I, say,
avoid retransmitting old log data? Named FIFO pipes come to mind.
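For what it's worth, a minimal logger pipeline for the nginx case might look like this. The log path and tag are examples; `-t` (tag) and `-p` (facility.priority) are standard util-linux logger flags:

```shell
# One-shot test: send a line to syslog, tagged, at facility local5.
# '|| true' because there may be no syslog socket (e.g. in a container).
echo "test message" | logger -t nginx-access -p local5.info || true

# Continuous variant (runs forever, so shown as a comment):
#   tail -F /var/log/nginx/access.log | logger -t nginx-access -p local5.info
```

The `tail -F` form survives log rotation but, as discussed in this thread, re-reads nothing on restart and can double-send if the tailer is restarted mid-file; the FIFO approach avoids that.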
Re: [CentOS] Kernel Panic on HP/Compaq ProLiant G7
on 16:56 Thu 24 Mar, Windsor Dave L. (AdP/TEF7) (dave.wind...@us.bosch.com) wrote:

> On 3/24/2011 4:38 PM, Dr. Ed Morbius wrote:
> > Dave:
> > on 16:03 Thu 24 Mar, Windsor Dave L. (AdP/TEF7.1) (dave.wind...@us.bosch.com) wrote:
> > > Hello Everyone,
> > > Code: 00 00 00 00 00 00 00 00 70 4d 4f 9d 00 81 ff ff 98 e4 4b dc
> > > RIP [8100dc435cf0] RSP 81001529fd18 CR2: 8100dc435cf0
> > > Kernel panic - not syncing: Fatal exception
> > > This suggests that something happened in a Samba process.
> > ...
> > - If you haven't, configure the netconsole kernel module for
> >   kernel-enabled network logging of panics.
>
> This is a great idea. I will work on that soonest.

It really is about four times as cool as it sounds. Getting the actual
panic is hugely useful.

> > - Call HP and find out what the latest recommended BIOS and firmware
> >   upgrades for your system are. C-STATE has been a particular issue
> >   with Dell, and it's been disabled entirely in recent BIOS
> >   versions. I see below you've updated BIOS.
> > - Scan logs for other messages, particularly panics and/or ECC
> >   issues.
>
> I haven't seen anything ominous, although I have noticed a long time
> gap between the last entry in /var/log/messages and the actual crash.

Such a gap in entries is very unusual.

You can create a timestamp cron job. Just a

    */10 * * * * root logger --- TIMESTAMP ---

... entry. At least you'll see any long dry periods.

sar is also a useful utility to look at. It should be recording and
reporting system state and resource utilization levels prior to the
crash.

> > - If you can stand the downtime, run memtest86+ at least overnight
> >   on your RAM. A reboot indicates a failed test.
> > - Otherwise: try running with half your RAM swapped.
> > - Check/reseat all DIMMs, sockets, and cables.
>
> We have one DIMM of 4 GB RAM, so I can't swap it out or run with half.
> I have reseated it and inspected the contacts, and it looks OK. I will
> look at anything else with connectors.

Actually, you can. Setting 'mem=2G' at your boot prompt will cue the
kernel to use only half the RAM. Now, you can't specify an offset to
use the high half, unfortunately.

You could also swap the DIMM with another system if you've got one and
see if you still have the problems in this one (or start seeing them in
the other).
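On CentOS 5 that boot parameter can also be made persistent in grub.conf rather than typed at the boot prompt each time; the kernel version and root device below are examples:

```
# /boot/grub/grub.conf: append mem=2G to the kernel line
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 mem=2G
```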
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 17:14 Thu 24 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Thursday, March 24, 2011 04:44:00 pm Dr. Ed Morbius wrote:
> > on 16:35 Thu 24 Mar, Lamar Owen (lo...@pari.edu) wrote:
> > > On Thursday, March 24, 2011 04:23:38 pm Dr. Ed Morbius wrote:
> > > > I'm looking for suggestions as to a good general method of
> > > > remote-logging services such as nginx or anything else which
> > > > doesn't support syslog natively.
> > > logger
> > I'm familiar with it.
>
> Have you tried it? Prior to PostgreSQL supporting syslog I used it to
> pipe PostgreSQL output to syslog. Worked fine.

I haven't; looking at it.

> > So: as part of a robust production system solution, how would I,
> > say, avoid retransmitting old log data?
>
> Timestamps, good NTP setup, and log deduplication. Better to have
> retransmitted than to never have transmitted at all.

OK. Any pointers on configuration are greatly appreciated. Docs, etc.

> Or, in the specific case of nginx, use the syslog patch from Marlon de
> Boer.

Yeah, we're aware of that (I mentioned this in my first post to the
thread). But nginx is not in the CentOS repos that I can see; logger
is, however, and the general usage of logger in the CentOS context
would be on-topic.

We've got a locally-compiled version of nginx, so patching isn't out of
the question. Just looking at all our options.

Thanks.
Re: [CentOS] Remote-logging nginx? (or other non-syslog-enabled stuff)
on 17:50 Thu 24 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Thursday, March 24, 2011 05:37:41 pm Dr. Ed Morbius wrote:
> > on 17:14 Thu 24 Mar, Lamar Owen (lo...@pari.edu) wrote:
> > > Prior to PostgreSQL supporting syslog I used [logger] to pipe
> > > PostgreSQL output to syslog. Worked fine.
> > I haven't; looking at it.
>
> It is one option that is definitely in vanilla CentOS.

Quite.

> > OK. Any pointers on configuration are greatly appreciated. Docs,
> > etc.
>
> Whew. Large-scale remote syslog operation is a large subject; I've
> never had anything large-enough scale to need more than logwatch or
> site-grown scripts to do processing. The biggest thing to do is set up
> NTP and have three reference time sources (three so that if one is
> wrong you know which one). Otherwise, log correlation is impossible.

It is. There've been a few advances in sysadmin practice since the
Nemeth books were first produced, and while there are some titles
dealing with portions of this, codifying practices in docs would be a
wonderful thing. I've considered (and been approached regarding)
tackling at least parts of this myself. Useful logging is definitely
part of this.

> > Yeah, we're aware of that (I mentioned this in my first post to the
> > thread).
>
> Yep, that you did.
>
> > We've got a locally-compiled version of nginx, so patching isn't out
> > of the question. Just looking at all our options.
>
> While CentOS doesn't provide nginx itself, it does provide tools for
> dealing with logs; I saw several things doing a 'yum list | grep log'
> (I know there's easier ways of doing that; that's just the way I
> prefer to go about it). Also try grepping a yum list for 'watch' as I
> remember some logwatching stuff.

Right, and the general solution also generalizes to other tools.
Postgresql (which we aren't using currently) also has its own log
handler (a small frustration of mine with the database).

And I turned up the rsyslogd feature:

http://www.rsyslog.com/doc/imfile.html

    Text File Input Module

    Module Name: imfile
    Author: Rainer Gerhards rgerha...@adiscon.com

    Description: Provides the ability to convert any standard text file
    into a syslog message. A standard text file is a file consisting of
    printable characters with lines being delimited by LF.

    The file is read line-by-line and any line read is passed to
    rsyslog's rule engine. The rule engine applies filter conditions
    and selects which actions need to be carried out. As new lines are
    written, they are taken from the file and processed. Please note
    that this happens based on a polling interval and not immediately.

    The file monitor supports file rotation. To fully work, rsyslogd
    must run while the file is rotated. Then, any remaining lines from
    the old file are read and processed and, when done with that, the
    new file is processed from the beginning. If rsyslogd is stopped
    during rotation, the new file is read, but any not-yet-reported
    lines from the previous file can no longer be obtained.

    When rsyslogd is stopped while monitoring a text file, it records
    the last processed location and continues to work from there upon
    restart. So no data is lost during a restart (except, as noted
    above, if the file is rotated just in this very moment).

    Currently, the file must have a fixed name and location
    (directory). It is planned to add support for dynamically generated
    file names in the future. Multiple files may be monitored by
    specifying $InputRunFileMonitor multiple times.
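A sketch of imfile for the nginx case, in rsyslog's legacy directive syntax (the paths, tag, facility, and loghost are examples):

```
# /etc/rsyslog.conf
$ModLoad imfile
$InputFileName /var/log/nginx/access.log
$InputFileTag nginx-access:
$InputFileStateFile state-nginx-access   # remembers the last-read position
$InputFileFacility local5
$InputFileSeverity info
$InputRunFileMonitor                     # one per monitored file
local5.* @@loghost.example.com:514       # @@ = TCP forwarding
```

The state file is what answers the "avoid retransmitting old log data" question upthread: rsyslog persists its read offset there across restarts.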
Re: [CentOS] Install on Dell PowerEdge T310
on 18:40 Wed 23 Mar, admin lewis (adminle...@gmail.com) wrote:

> Hi, this is the first time I install linux on a dell server. Simply I
> booted from a centos 5.5 x64 dvd but I can't see the disks.. is there
> something I miss? thanks very much for any help

1. Confirm your boot menu settings.
2. Confirm your BIOS settings (CD/DVD is active).
3. Verify your download (md5sum / sha1sum).
4. Try the DVD in another system.
5. Burn and try another disk.
6. Try another boot option / medium: flash, PXE/kickstart, etc.

If you're provisioning a datacenter or multiple systems, I'd jump
straight to #6 myself.
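Step 3 above is just a checksum comparison; the ISO filename here is an example, and the value to compare against comes from the md5sum.txt published alongside the image on the mirror:

```shell
# Verify a downloaded ISO against the published checksum.
iso=CentOS-5.5-x86_64-bin-DVD-1of2.iso
if [ -f "$iso" ]; then
    md5sum "$iso"      # compare this hash with the one in md5sum.txt
else
    echo "download $iso first"
fi
```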
[CentOS] Safe/sane tempfile creation?
I'm used to Debian-based distros, which have a tempfile(1) utility for
safely and sanely creating temporary files. There isn't a comparable
utility for RHEL/CentOS systems. I've been exercising Google-fu looking
for a good, robust tempfile-generation idiom, but haven't turned one up
yet. Hence this appeal to the lazyweb.
Re: [CentOS] Safe/sane tempfile creation?
on 20:35 Fri 18 Mar, John R. Dennison (j...@gerdesas.com) wrote:

> On Fri, Mar 18, 2011 at 06:33:14PM -0700, Dr. Ed Morbius wrote:
> > I'm used to Debian-based distros, which have a tempfile(1) utility
> > for safely and sanely creating temporary files. There isn't a
> > comparable utility for RHEL/CentOS systems.
>
> Sure there is. mktemp; contained within the package of the same name.

My error. Thank you.
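For the archives, the mktemp idiom looks like this (the template name is an example):

```shell
# mktemp replaces the X's, creates the file atomically with mode 0600,
# and prints the resulting name; failure leaves $tmpfile empty.
tmpfile=$(mktemp /tmp/myscript.XXXXXX) || exit 1
trap 'rm -f "$tmpfile"' EXIT   # clean up on exit, even on error
echo "scratch data" > "$tmpfile"
ls -l "$tmpfile"
```

The atomic create-with-0600 is what makes this safe against symlink races, which is the whole point over `$$`-based names.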
Re: [CentOS] Watching a directory
on 14:06 Wed 16 Mar, Jason Slack-Moehrle (slackmoehrle.li...@gmail.com) wrote:

> Hi All,
>
> I am thinking about an idea, but it requires that I be able to watch
> several directories for files that are added, deleted, or maybe
> changed. Let's start with adding files.
>
> What tools are available for me to watch a directory? For example, if
> a file is added to a directory, I want to run a shell script that will
> do some conversion on the file to produce a second copy. I have the
> shell scripting down; I am not sure how to watch, notice the change,
> and kick off the script with the parameters of the directory and what
> was added.
>
> Would anyone have thoughts?

make
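To unpack that one-word answer: make's timestamp comparison is exactly a "watch for new or updated files and convert them" mechanism when run periodically (e.g. from cron). A sketch, with `cp` standing in for the real conversion and all names as examples:

```shell
# Build a throwaway watched directory with a Makefile that converts
# each .txt into a .out whenever the .txt is newer than its .out.
dir=$(mktemp -d)
printf 'SRC := $(wildcard *.txt)\nOUT := $(SRC:.txt=.out)\nall: $(OUT)\n%%.out: %%.txt\n\tcp $< $@\n' > "$dir/Makefile"
printf 'example\n' > "$dir/a.txt"
make -C "$dir"    # creates a.out; rerunning is a no-op until a.txt changes
ls "$dir"
```

This handles "added or changed" cleanly; detecting deletions, or reacting instantly rather than on a polling schedule, is where inotify-based tools come in instead.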
Re: [CentOS] which filesystem to store 250E6 small files in same or hashed dire
on 13:10 Sat 12 Mar, Alain Spineux (aspin...@gmail.com) wrote:

> Hi. I need to store about 250.000.000 files. Files are less than 4k. On ext4 (Fedora 14) the system crawls at 10.000.000 in the same directory. I tried to create hash directories, two levels of 4096 dirs = 16.000.000, but I had to stop the script creating these dirs after hours, and rm -rf would have taken days! mkfs was my friend. I tried two levels, first of 4096 dirs, second of 64 dirs. The creation of the hash dirs took only a few minutes, but copying one file makes my HD scream for 120s! It takes only 10s when working in the same directory. The filenames are all 27 chars and the first chars can be used to hash the files. My question is: which filesystem, and how should I store these files?

I'd also question the architecture and suggest an alternate approach: hierarchical directory tree, database, NoSQL hashing lookup, or some other approach. See squid for an example of using directory trees to handle very large numbers of objects. In fact, if you wired things up right, you could probably use squid as a proxy back-end. In general, I'd say a filesystem is the wrong approach to this problem.

What's the creation/deletion/update lifecycle of these objects? Are they all created at once? A few at a time? Are they ever updated? Are they expired and/or deleted?

Otherwise, reiserfs and its hashed directory indexes scale well, though I've only pushed it to about 125,000 entries in a single node. There is the usual comment about the viability of a filesystem whose principal architect is in jail on a murder charge. It's possible XFS/JFS might also work. I'd suggest you test building and deleting large directories.

Incidentally, for testing, 'make -j' can be useful for parallelizing processing, which would also test whether or not locking/contention on the directory entry itself is going to be a bottleneck (I suspect it may be). You might also find that GNU find's -depth argument is useful for deleting deep/large trees.

--
Dr. Ed Morbius, Chief Scientist /     |
Robot Wrangler / Staff Psychologist   | When you seek unlimited power
Krell Power Systems Unlimited         | Go to Krell!

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
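Since the filenames are fixed-length and their leading characters hash well, the two-level scheme the poster describes can be sketched in shell. The mount point, level widths, and character positions below are illustrative assumptions, not a tested layout:

```shell
#!/bin/sh
# Map a 27-char filename into a two-level hash tree using its leading
# characters, e.g. /data/ab/cd/abcdef...  Levels here use 2 chars each;
# wider slices give more fan-out per level.
store_path() {
    f="$1"
    l1=$(printf '%s' "$f" | cut -c1-2)   # first level:  chars 1-2
    l2=$(printf '%s' "$f" | cut -c3-4)   # second level: chars 3-4
    printf '/data/%s/%s/%s\n' "$l1" "$l2" "$f"
}

store_path "abcdef0123456789abcdef01234"
# prints /data/ab/cd/abcdef0123456789abcdef01234
```

The point of deriving the path from the name is that lookups never need a directory scan: the path is computable, so each directory stays small enough for the filesystem's dirent handling.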
Re: [CentOS] Swap space for kvm virtual host
on 14:24 Mon 14 Mar, Nataraj (incoming-cen...@rjl.com) wrote:

> I have a KVM virtual host running on what will become CentOS 6, with 12GB of memory and a Quad Xeon X5560 2.8GHz. The store for virtual machines will be a software RAID 6 array of 6 disks with LVM layered on top. I'm not initially planning any major overcommitment of resources, though there could be a need for some overcommitment with a light workload on the guests. In recent years people seem to configure a wide range of different swap allocations. I was thinking initially to spread swap across separate non-RAID partitions on 4 of these disks, but the downside of that is if I put 2GB on each disk, then I can only swap processes that will fit in 2GB of swap space.

Incorrect. Linux processes aren't swapped to disk (the entire process memory space); they are paged (given memory blocks are swapped out individually). Swap allocated over multiple spindles is effectively striped (treated as one large RAID 0 partition). If you've got SSD, you'll get even better swap performance. For an excellent explanation of how Linux pages / handles memory: http://sourcefrog.net/weblog/software/linux-kernel/swap.html

> Also, if one of the disks fails, I have to reboot if anything was swapped to that drive. My questions are as follows: 1. What experience are others having with putting swap space on RAID partitions? I was thinking about maybe swapping on a RAID 10 device, otherwise an LVM spanning multiple drives.

I'd allocate swap to the raw devices rather than the RAID devices, particularly if using SW RAID. In the case of HW RAID, it's a bit of a toss-up. Whatever's easier to manage.

> 2. In practice, what kinds of swap allocation are people finding useful for a KVM virtual host of this size?

1-3x RAM is still my rule of thumb.

> I definitely don't want a system that is so overcommitted that performance is impacted, but if some overcommitment is reasonable for VMs that have a light workload, then I'll consider that. I can increase system resources when that becomes necessary.

For this, you'll want to set the overcommit and swappiness kernel parameters. Amount of swap space is a secondary consideration. How much swap you /have/ and how much swap you're /doing/ are two different things.
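To get the striping effect described above, give each raw-device swap area the same priority; the overcommit and swappiness knobs are sysctls. Device names and values here are illustrative placeholders, not recommendations:

```shell
# /etc/fstab: equal pri= values make the kernel interleave pages
# across the spindles, RAID 0 style
/dev/sda2  none  swap  sw,pri=10  0 0
/dev/sdb2  none  swap  sw,pri=10  0 0

# Runtime tuning (persist the same keys in /etc/sysctl.conf):
sysctl vm.swappiness=60          # how eagerly to page out (0-100)
sysctl vm.overcommit_memory=0    # 0 = heuristic overcommit (the default)
```

If one spindle dies, only the pages on that device are lost, which is why swapping on the raw devices rather than RAID is a trade-off between performance and surviving a disk failure without a reboot.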
Re: [CentOS] Can anyone help me understand Apache Errors?
on 07:45 Thu 10 Mar, Todd (slackmoehrle.li...@gmail.com) wrote:

> LogWatch reports items like:
>
> Connection attempts using mod_proxy: 83.167.123.83 - 205.188.251.1:443: 1 Time(s) 83.167.123.83 - 64.12.202.36:443: 2 Time(s)
> Requests with error response codes: 403 Forbidden 205.188.251.1:443: 1 Time(s) 64.12.202.36:443: 2 Time(s); 404 Not Found //jmx-console/HtmlAdaptor: 1 Time(s) /VINT_1984_THINK_DIFFERENT: 1 Time(s) /mobo.png: 1 Time(s) /player.swf: 1 Time(s) /robots.txt: 4 Time(s)
>
> Now, I know mod_proxy is turned off by default, but is there a way to play games with those that attempt a proxy connection? Like a Rewrite rule of some sort? For the 404's, obviously most of these don't exist, but robots.txt does, so I'm not sure why that has a 404. What are 406 errors? Some Googling says they are due to mod_security issues and that an .htaccess fix can turn it off. But I don't understand the issue or the solution, to be honest.

HTTP status codes generally: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

Slightly more user-friendly descriptive guide: http://www.addedbytes.com/for-beginners/http-status-codes/

406 is Not Acceptable: the server can't produce a response matching the request's Accept headers (mod_security can also be configured to return it). Bumping up your Apache debug levels and watching the error log may help, as could snooping the traffic generating the request (packet capture of the full GET request).
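Proxy-connection attempts like the ones LogWatch flags show up in the access log as CONNECT requests or GETs for absolute http:// URLs. A quick tally of the offending client IPs can be pulled with awk; the field positions assume a stock common/combined log format, and the log path is whatever your vhost uses:

```shell
#!/bin/sh
# Count, per client IP, requests that look like open-proxy probes:
# CONNECT methods, or request-URIs that are absolute http:// URLs.
# $1 = path to an access log in common/combined format.
count_proxy_attempts() {
    awk '$6 ~ /CONNECT/ || $7 ~ /^http:/ { n[$1]++ }
         END { for (ip in n) print n[ip], ip }' "$1"
}
```

Feeding such IPs into a deny list (or just letting them 403/405) is usually more productive than playing rewrite games with them.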
[CentOS] Any reliable way to determine LVM snapshot creation time?
We utilize LVM snapshots for some periodic maintenance. They're manually created and, usually, manually destroyed. But not always. So there's now a nightly script monitoring for open snapshots. Which raises the question of when a given snapshot was created.

Absent good practices of, say, using sudo to create snapshots (leaving a /var/log/secure message), is there any reasonably reliable way to determine when an LVM snapshot was created? There's a timestamp on the /dev/mapper/snapshot file. I'm presuming that's somewhat useful for this purpose?

Ah: just found that /var/log/messages also indicates:

lvm[2314]: Monitoring snapshot name
lvm[2314]: No longer monitoring snapshot name

... which I think answers my own question. Posting here for Google's sake and/or discussion.
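A sketch of pulling the creation time out of those monitoring lines; the syslog format is assumed from the lines quoted above, and the snapshot name ("mysnap" below) is a hypothetical. Reading the log on stdin lets you point it at rotated copies too:

```shell
#!/bin/sh
# Print the syslog timestamp of the line where lvm began monitoring
# the named snapshot, i.e. its approximate creation time.
# Usage: snapshot_ctime <snapshot-name> < /var/log/messages
snapshot_ctime() {
    awk -v snap="$1" \
        '$0 ~ "lvm\\[" && / Monitoring snapshot / && $0 ~ snap {
             print $1, $2, $3; exit }'
}
```

The capital-M " Monitoring snapshot " match deliberately skips the "No longer monitoring snapshot" teardown lines.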
Re: [CentOS] Server hangs on CentOS 5.5
on 10:05 Wed 09 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Tuesday, March 08, 2011 04:44:54 pm Dr. Ed Morbius wrote:
> > I'd very strongly recommend you configure netconsole.
>
> Ok, now this is useful indeed. Thanks for the information, even though I'm not the OP. While I suspected the facility might be there, I hadn't really dug for it, but if this will catch things after filesystems go r/o (ext3 journal things, ya know) it could be worth its weight in gold for catching kernel errors from VMware guests (serial console not really an option with the hosts I have,

Yep, it is. Netconsole made me fall in love with Linux all over again.

> although I'm sure some enterprising soul has figured out how to redirect the VM guest serial port to something else).
Re: [CentOS] Server hangs on CentOS 5.5
on 07:06 Wed 09 Mar, Michael Eager (ea...@eagerm.com) wrote:

> Dr. Ed Morbius wrote:
> > on 09:24 Tue 08 Mar, Michael Eager (ea...@eagerm.com) wrote:
> > > Hi -- I'm running a server which is usually stable, but every once in a while it hangs. The server is used as a file store using NFS and to run VMware machines. I don't see anything in /var/log/messages or elsewhere to indicate any problem or offer any clue why the system was hung. Any suggestions where I might look for a clue?
> >
> > I'd very strongly recommend you configure netconsole. Though not entirely clear from the name, it's actually an in-kernel network logging module, which is very useful for kicking out kernel panics which otherwise aren't logged to disk and can't be seen on a (nonresponsive) monitor.
>
> I'll take a look at netconsole.
>
> > Alternately, a serial console which actually retains all output sent to it (some remote access systems support this, some don't) may help. Barring that, I'd start looking at individual HW components, starting with RAM.
>
> The problem with randomly replacing various components, other than the downtime and nuisance, is that there's no way to know that the change actually fixed any problem. When the base rate is one unknown system hang every few weeks, how many weeks should I wait without a failure to conclude that the replaced component was the cause? A failure which happens infrequently isn't really amenable to a random diagnostic approach.

This is where vendor management/relations starts coming into the picture. Your architecture should also tolerate single-point failures.

If the issue is repeated but rare system failures on one of a set of similarly configured hosts, I'd RMA the box and get a replacement. End of story. If that's not the case, well, then, I suppose YOUR problem is to figure out when you've resolved the issue. I've outlined the steps I'd take. If this means weeks of uncertainty, then I'd communicate this fact, in no uncertain terms, to my manager, along with the financial implications of downtime. If downtime is more expensive than system replacement costs, the decision is pretty obvious, even if painful.

Note that most system problems /are/ single-source.

If you'd post details of the host, more logging information, netconsole panic logs, etc., it might be possible to narrow down possible causes. With what you've posted to date, it's not.
Re: [CentOS] Server hangs on CentOS 5.5
on 10:37 Wed 09 Mar, Lamar Owen (lo...@pari.edu) wrote:

> On Wednesday, March 09, 2011 10:16:34 am Brunner, Brian T. wrote:
> > This would be far cheaper than the time spent troubleshooting the running (sometimes hanging) system.
>
> Let me interject here that, from a budgeting standpoint, 'cheaper' has to be interpreted in the context of which budget the costs are coming out of. New hardware is capex, and thus would come out of the capital budget; admin time is opex, and thus would come out of the operating budget. There may be sufficient funds in the operating budget to pay an admin $x,000, but the funds in the capital budget may be insufficient to buy a server costing $y,000, where y=x.

That represents an accounting failure, as opex is now subsidizing capex. Troubleshooting of known-bad equipment should be an opex chargeback against capex or some capital reserve.

This requires clueful beancounters. Recent economic/business/finance history suggests a significant shortage of same. Cue supply/demand and incentives off-topic digression.

The answer is still to communicate the issue upstream. Estimating replacement costs and likelihood will help in the relevant business / organizational decision.
Re: [CentOS] Server hangs on CentOS 5.5
on 11:52 Wed 09 Mar, Les Mikesell (lesmikes...@gmail.com) wrote:

> On 3/9/2011 11:32 AM, Michael Eager wrote:
> > Memory diagnostics may take days to catch a problem.
>
> Did you check for a newer BIOS for your MB? I mentioned before that it seemed strange, but I've seen that fix mysterious problems even after the machines had previously been reliable for a long time (and even more oddly, not all the machines in the lot were affected).

BIOS issues would tend to present similar issues on numerous systems, especially if they're similarly configured.

Mind: we've encountered a DSTATE bug with recent Dell PowerEdge systems (r610, r410, r310), which has resulted in several BIOS revisions, the latest of which simply disables the option entirely. It's one of the first things Dell techs mention when you call them these days (much to our amusement).

If it's a single system (and assuming there are others similarly configured), I'm leaning toward hardware or build-quality issues: bad RAM, other componentry, poor cable seating, etc.
Re: [CentOS] Server hangs on CentOS 5.5
on 10:29 Wed 09 Mar, Michael Eager (ea...@eagerm.com) wrote:

> Les Mikesell wrote:
> > Note that overheating can be localized, or a bad heat-sink mounting or fan on a CPU.
>
> I'll re-seat the CPU, heatsink, and fan on the next downtime.

Very strongly advised. It's a simple and very cheap approach. I'd check /all/ cables (power, disk) as well. Visually scan for bad caps while you're doing this. The pandemic of the mid-2000s seems to have abated, but they can still ruin your whole day.

> > Heat-related problems usually present as a system which fails and will not reboot immediately, but will after it sits for a while to cool down.
>
> This system doesn't do that.

Maybe, maybe not.

> I'll install sensord to log CPU temps in case this is a problem.

Good call.

> > There's not really a good way to approach intermittent failures. It may only break when you aren't looking. Major component swaps, or taking it offline for extended diagnostics hoping to catch a glimpse of the cause when it fails, is about all you can do.

I disagree with this statement: you start with the bleeding obvious and easy to do (the cheap diagnostics), same as any garage mechanic or doctor. You instrument and increase log scrutiny. You make damned sure you're logging remotely, as one of the first things a hosed system does is stop writing to disk.

> Yes, most memory diagnostics are not very effective. I'll have to stop the server to find out what the installed BIOS version is and see whether there is an update. Most BIOS updates appear to only change supported CPUs. Something else for the next downtime.

You haven't stated who built this system, but many LOM / OMC systems will provide basic information such as this. dmidecode and lshw are also very helpful here.
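On the BIOS-version point: there's no need to stop the server, since dmidecode reads the firmware tables from the running system. A sketch (the -s keywords are standard dmidecode selectors; output naturally varies per machine, and both commands need root):

```shell
# Query firmware details from the running system, no downtime needed
dmidecode -s bios-version
dmidecode -s bios-release-date
dmidecode -s baseboard-product-name

# lshw gives a broader hardware inventory; -short keeps it readable
lshw -class memory -short
```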
Re: [CentOS] [Newbie] Reclaiming /boot space
on 09:24 Wed 09 Mar, Simon Matter (simon.mat...@invoca.ch) wrote:

> > On Wed, Mar 9, 2011 at 10:12 AM, Simon Matter simon.mat...@invoca.ch wrote:
> > > user deletes /boot, hilarity ensues
> >
> > Wouldn't it have been easier to reinstall the kernel and grub, i.e.: yum reinstall kernel grub? Surely if yum reinstalls it, it would re-create the permissions and symlinks as well?
>
> Yes, only that reinstall doesn't exist in EL4 :)

It doesn't?

rpm -Uvh <package list>

The -U (upgrade) should occur regardless of the current install state, though mucking with '--force' may be necessary. '--replacepkgs' should cover it.
Re: [CentOS] [Newbie] Reclaiming /boot space
on 15:39 Tue 08 Mar, Todd Cary (t...@aristesoftware.com) wrote:

> Simon - Did I screw up? I deleted what was in /boot!

Yes, as others have noted. Lessons:

1: Don't go randomly/arbitrarily deleting system files (unless you're curious to see what happens when randomly deleting system files).
2: Understand how Linux functions. E.g.: the boot process, and the significance of the /boot directory/filesystem.
3: Use your package management system. If you /are/ going to delete arbitrary system files, doing it through your package manager is going to a) give you some idea when you're about to do something really stupid (generally other Really Important Stuff depends on them) and b) at the very least do the damage in an orderly manner.
4: Have a boot disk. Know how to use it.
5: Know how to restore GRUB and an emergency boot kernel.

Circling back to #3: your package management system can also dig you out of this hole. You should be able to identify and replace all files in an arbitrary tree, for example /boot, using an RPM bash one-liner:

$ rpm -qa                          # lists all packages installed
$ rpm -ql package                  # lists all files in a package
$ command | grep -q expression     # success/fail on match / no match
$ command1 && command2             # runs command2 if command1 exits true
# rpm -Uvh --replacepkgs <pkgs>    # (re)installs packages
$ $( command )                     # executes output of 'command'

Putting that together:

rpm -Uvh --replacepkgs $(
    for pkg in $( rpm -qa ); do
        rpm -ql $pkg | grep -q ^/boot && echo $pkg
    done
)

Incidentally, the list of packages works out to: filesystem-2.4.0-3.el5, grub-0.97-13.5, kernel-2.6.18-194.17.1.el5, redhat-logos-4.9.99-11.el5.centos
Re: [CentOS] Server hangs on CentOS 5.5
on 14:31 Wed 09 Mar, Michael Eager (ea...@eagerm.com) wrote:

> Dr. Ed Morbius wrote:
> > If the issue is repeated but rare system failures on one of a set of similarly configured hosts, I'd RMA the box and get a replacement. End of story.
>
> I'll repeat: this is a house-made system. There's no vendor to RMA to. It seems obvious to me: RMA is not a diagnostic tool.

You fab your own silicon? I saw your reference to a homebrew machine after I'd posted. You'd neglected to provide this information initially. Knowing some basic stuff (CPU architecture, memory allocation, disk subsystem, kernel modules, etc.) would help.

> > If you'd post details of the host, more logging information, netconsole panic logs, etc., it might be possible to narrow down possible causes.
>
> The problem is that there are NO DIAGNOSTICS generated when the system hangs. There's no panic and nothing in the logs which indicates any problem. This is what I indicated from the get-go.

uname -a
/proc/cpuinfo
/proc/meminfo
lspci
lsmod
/proc/mounts
/proc/scsi/scsi
/proc/partitions
dmidecode

... would be useful for starters. If you've built your own kernel, your config options (if you're running stock, we can get that from the package itself). As would wiring up netconsole, as I initially suggested.

If I can clarify: YOU are the person with the problem. WE are the people you're turning to for assistance. YOU are getting pissy. YOU should be focusing on providing relevant information, or noting that it's not available. You're NOT obliged to repeat information you've already posted (e.g.: home-brew system), but it's helpful to front-load data rather than have us tease it out of you.

> > With what you've posted to date, it's not.
>
> I could waste my time posting logs for you to tell me that they don't point to any problem. I'd rather skip that step.

Krell forfend you should post relevant and useful information which might actually help in diagnosing your problem (or point to likely candidates and/or further tests).
Re: [CentOS] [Newbie] Reclaiming /boot space
on 15:49 Wed 09 Mar, Keith Keller (kkel...@wombat.san-francisco.ca.us) wrote:

> On Wed, Mar 09, 2011 at 01:44:18PM -0800, Dr. Ed Morbius wrote:
> > on 09:24 Wed 09 Mar, Simon Matter (simon.mat...@invoca.ch) wrote:
> > > Yes, only that reinstall doesn't exist in EL4 :)
> >
> > It doesn't? rpm -Uvh <package list>
>
> Creating the package list is what yum does automatically; using rpm directly means creating a list of URLs or downloading rpms individually.

See my other recent post to this thread for how that's done. Essentially: use RPM to generate a list of all packages. List all files in those packages. Filter for files on /boot. Identify the packages with files on /boot. Reinstall those packages. It's a shell one-liner.
Re: [CentOS] Server hangs on CentOS 5.5
on 09:24 Tue 08 Mar, Michael Eager (ea...@eagerm.com) wrote:

> Hi -- I'm running a server which is usually stable, but every once in a while it hangs. The server is used as a file store using NFS and to run VMware machines. I don't see anything in /var/log/messages or elsewhere to indicate any problem or offer any clue why the system was hung. Any suggestions where I might look for a clue?

I'd very strongly recommend you configure netconsole. Though not entirely clear from the name, it's actually an in-kernel network logging module, which is very useful for kicking out kernel panics which otherwise aren't logged to disk and can't be seen on a (nonresponsive) monitor.

Alternately, a serial console which actually retains all output sent to it (some remote access systems support this, some don't) may help. Barring that, I'd start looking at individual HW components, starting with RAM.

The trick is in passing the appropriate parameters to the module at load time. I found it helpful to have an @reboot cron job to do this. You'll need to pass the local port, local system IP, local network device, remote syslog UDP port, remote syslog IP, and the /gateway/ MAC address, where "gateway" is the syslogd host (if on a contiguous ethernet segment), or your network gateway host if not. Some parsing magic can determine these values for you.

Good article describing configuration: http://www.cyberciti.biz/tips/linux-netconsole-log-management-tutorial.html

If you're not already remote-logging all other activity, I'd do that as well. You might catch the start of the hang, if not all of it.
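The module-load step described above looks like the following sketch; every address, port, interface name, and MAC here is a placeholder to substitute for your own environment:

```shell
#!/bin/sh
# Load netconsole, pointing kernel messages at a remote syslogd.
# Parameter format:
#   netconsole=<local-port>@<local-ip>/<dev>,<remote-port>@<remote-ip>/<MAC>
# where <MAC> is the syslog host's MAC if on the same segment,
# otherwise your gateway's MAC.
modprobe netconsole \
    netconsole=6665@192.168.1.10/eth0,514@192.168.1.20/00:11:22:33:44:55

# Raise the console log level so oopses/panics actually get sent
dmesg -n 8
```

Run from an @reboot cron job (or an init script), this survives reboots without touching the initrd.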
Re: [CentOS] Server hangs on CentOS 5.5
on 10:31 Tue 08 Mar, Michael Eager (ea...@eagerm.com) wrote:

> Les Mikesell wrote:
> > On 3/8/2011 11:24 AM, Michael Eager wrote:
> > > Hi -- I'm running a server which is usually stable, but every once in a while it hangs. The server is used as a file store using NFS and to run VMware machines. I don't see anything in /var/log/messages or elsewhere to indicate any problem or offer any clue why the system was hung. Any suggestions where I might look for a clue?
> >
> > Probably something hardware related. Bad memory, overheating, power supply, etc. I've even seen some rare cases where a BIOS update would fix it, although it didn't make much sense for a machine to run for years, then need a firmware change.
>
> The system is on a UPS and temps seem reasonable. Locating a transient memory problem is time consuming.

Disable or remove half your RAM. If the problem persists, replace that RAM and remove the other half. If the problem resolves, the issue is likely in the half of the RAM you've removed. You can binary-search through it, or RMA the lot if under warranty.

> Identifying a power supply which sometimes spikes is even more difficult.

Same drill. Replace the power supply, or on a dual-PS system, disable one, then the other. Follow the same procedure as for RAM.

> I'd like to have a clue about the likely problem before shutting down the server for an extended period.

If the server is critical, get a vendor loaner and bench-test the equipment until the fault can be identified.

> I'll set up sar and sensord to periodically log system status and see if this gives me a clue for the next time this happens.

At best, sar will tell you whether or not you're experiencing resource exhaustion. It's a valuable tool, but fairly coarse-grained. Cacti will give you better resolution and visualization (particularly on CentOS) than sar (some distros now include sar graphing utilities; CentOS, to the best of my recollection, does not).
[CentOS] Dell PERC H800 commandline RAID monitoring tools
We're looking for tools to be used in monitoring the PERC H800 arrays on a set of database servers running CentOS 5.5. We've installed most of the OMSA (Dell monitoring) suite. Our current alerting is happening through SNMP, though it's a bit hit-or-miss (we apparently missed a couple of earlier predictive-failure alerts on one drive).

OMSA conflicts with mega-cli, though we may find that the latter is the more useful package. Both are pretty byzantine, and the Dell stuff simply doesn't have docs (in particular: docs on how to interpret the omconfig log output).

Ideally we'd like something which could be run as a Nagios plugin or cron job, providing information on RAID status and/or possible disk errors. Probably both, actually.

Thanks in advance.
Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools
on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:

> 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
> > We're looking for tools to be used in monitoring the PERC H800 arrays on a set of database servers running CentOS 5.5. [...]
>
> if your system supports omreport (comes with omsa) then this is good solution: http://folk.uio.no/trondham/software/check_openmanage.html

So ... this slots on top of OMSA to provide reporting?
Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools
on 16:04 Mon 07 Mar, Blake Hudson (bl...@ispn.net) wrote:

> -------- Original Message --------
> Subject: [CentOS] Dell PERC H800 commandline RAID monitoring tools
> From: Dr. Ed Morbius dredmorb...@gmail.com
> To: CentOS User list centos@centos.org
> Date: Monday, March 07, 2011 2:43:03 PM
> > We're looking for tools to be used in monitoring the PERC H800 arrays on a set of database servers running CentOS 5.5.
>
> If you purchased the server with an add-in DRAC, the DRAC can provide email alerts if an array becomes degraded (or just about any other hardware fault). This isn't necessarily a replacement for your current monitoring, but it can be used to supplement or complement it.

The iDRAC /doesn't/ report on RAID / storage configuration or status.

iDRAC 6, Dell r610, onboard PERC H700, offboard PERC H800 (MD1200 array). BIOS version 2.1.15, Firmware 1.54 (Build 15).

We get batteries, fans, intrusion, power, removable flash media, temps, and volts, but not storage.

The iDRAC is pretty good compared with some past Dell offerings. The ability to boot virtual media in particular is very slick (I can specify local removable storage or a drive image and mount it for booting / diagnostics remotely). But no RAID / storage management or monitoring.
Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools
on 12:43 Mon 07 Mar, Dr. Ed Morbius (dredmorb...@gmail.com) wrote:

> We're looking for tools to be used in monitoring the PERC H800 arrays on a set of database servers running CentOS 5.5.

Pardoning the self-reply, but one issue we've had is reconciling the omcontrol log report with the Dell Server Manager syslog messages. omcontrol reported a predictive drive failure, but we (and three Dell storage/support techs) had trouble identifying which actual device was being reported as bad.

From 'omconfig storage controller action=exportlog controller=0' output:

03/04/11 21:42:42: EVT#02959-03/04/11 21:42:42:  96=Predictive failure: PD 00(e0x08/s2)
03/05/11 14:28:41: EVT#02961-03/05/11 14:28:41: 112=Removed: PD 00(e0x08/s2)

In /var/log/messages (timestamp/hostname trimmed):

Server Administrator: Storage Service EventID: 2243 The Patrol Read has stopped.: Controller 0 (PERC H800 Adapter)
Server Administrator: Storage Service EventID: 2049 Physical disk removed: Physical Disk 0:0:2 Controller 0, Connector 0

The Server Administrator reports of a slot 2 failure correspond to the drive which was physically replaced. The OMSA omconfig report is throwing us a bunch of crud about some device, but Dell variously identified it as slot 0 and slot 9. We're now getting from them that /s2 identifies slot 2.

Dell said point blank "you're not going to have any luck with that" as far as documentation of the OMSA log report format and parsing goes. Does anyone have a clue as to WTF it's actually trying to say, or what this tool is based off of (I'm suspecting mega-cli on a general hunch, but nothing much stronger)?

Enterprise support indeed.
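Until Dell documents the format, the enclosure/slot fields in those event lines can at least be teased out mechanically. A sketch based solely on the two log lines quoted above; the e.../s... = enclosure/slot reading is the interpretation Dell eventually confirmed, not documented fact:

```shell
#!/bin/sh
# From an OMSA exportlog event line like
#   ... 96=Predictive failure: PD 00(e0x08/s2)
# pull out the enclosure and slot, e.g. "enclosure 0x08 slot 2".
# Reads event lines on stdin.
pd_location() {
    sed -n 's/.*PD [0-9]*(e\(0x[0-9a-f]*\)\/s\([0-9]*\)).*/enclosure \1 slot \2/p'
}
```

Piping the exportlog output through this at least lets a cron job or Nagios check report a human-readable location alongside the raw event.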
Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools
on 23:15 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:

> 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
> > on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
> > > if your system supports omreport (comes with omsa) then this is good solution: http://folk.uio.no/trondham/software/check_openmanage.html
> >
> > So ... this slots on top of OMSA to provide reporting?
>
> this plugin parses omreport output and uses it for nagios output.

Is it running/invoking omreport, or relying on periodic runs? I'll dig through the docs, but if you know this off-hand it'd be helpful.

> omsa webserver is not required, but working omreport cli is. .. works great on my servers.

Good to know, much appreciated.
Re: [CentOS] email to web posting software?
on 14:41 Mon 07 Mar, Dave Stevens (g...@uniserve.com) wrote:

> Dear CentOS, I have a user group that would like to be able to routinely post (easily) emails to a web site. Must be usable without special training. I have no experience with this. Anyone have a suggestion? LAMP stack installed.

https://posterous.com/
Re: [CentOS] Updating hardware clock from cron
on 15:04 Fri 04 Mar, Denniston, Todd A CIV NAVSURFWARCENDIV Crane (todd.dennis...@navy.mil) wrote: -Original Message- From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of Kenneth Porter Sent: Friday, March 04, 2011 14:15 To: CentOS mailing list Subject: [CentOS] Updating hardware clock from cron Is there a package to do this? Normally the hardware clock is set during shutdown if one is running ntpd. No, hwclock --systohc is only called at start time (in /etc/rc.d/init.d/ntpd), and only if ntpdate got a good time, which is a good thing. But if a long-running server shuts down unexpectedly, this isn't done, and the hardware clock might be off by a lot when it comes back up. Not if you are running ntp and it was able to sync, because ntpd activates a mode in the kernel that sets the hwclock every 11 minutes when ntp declares it got synced. If your hwclock is off by a lot when it comes up I believe it is from one of the following: A) bad cmos battery. B) poor cmos clock C) confusing info in /etc/adjtime due to using both hwclock --adjust [at boot] and ntp (long story, but it is due to both tweaking the clock without coordination between them). D) booting a different OS with different ideas of timezones. E) manual tweaking of time via bios. F) Hardware clock set to local time and booting after a standard-to-daylight-savings or daylight-savings-to-standard time shift. I saw this in a large production environment. The main effect was that system logs would show a very large slew during boot, after ntpdate was run. Annoying, possibly confusing, but not a show-stopper. Generally, the solution is: 1: use ntp, chrony, or another time service to sync system clocks while running. 2: set hwclock to UTC, the way Krell intended it to be. 3: periodically sync the hwclock from system time. I don't really care if you do this hourly, daily, weekly, or monthly (I'd probably select daily myself). But it will tend to avoid big time jumps. 
If you're particularly anal, you could periodically compare system and hwclock time, and raise a flag to replace the CMOS battery when this starts to drift. Since other CMOS and BIOS settings can be lost, and about the only perceptible sign is a drifting hwclock, this is actually probably a pretty good practice. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
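[Editor's note] Steps 1-3 and the drift-flag idea above can be sketched as a small script. The threshold and the comparison helper are illustrative assumptions; `hwclock --systohc` is the standard util-linux invocation the post refers to.

```shell
#!/bin/sh
# Step 3 above as a daily cron job (e.g. dropped into /etc/cron.daily/):
#
#   hwclock --systohc --utc
#
# The drift check from the last paragraph: compare system and hardware
# clock readings (as epoch seconds) and flag a likely CMOS battery
# problem when they diverge past a threshold. The threshold and helper
# name are this sketch's assumptions, not from the post.
drift_check() {
    sys_epoch=$1    # system clock reading, seconds since epoch
    hw_epoch=$2     # hardware clock reading, seconds since epoch
    limit=$3        # tolerated drift, seconds
    diff=$((sys_epoch - hw_epoch))
    if [ "$diff" -lt 0 ]; then diff=$((0 - diff)); fi
    if [ "$diff" -gt "$limit" ]; then
        echo "WARN: hwclock drift ${diff}s exceeds ${limit}s"
    else
        echo "OK: drift ${diff}s"
    fi
}

# Live usage would feed it real readings, e.g. (illustrative only,
# hwclock's output format varies between versions):
#   drift_check "$(date +%s)" "$(date -d "$(hwclock --show)" +%s)" 5
```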
Re: [CentOS] Gnu Screen - terminal issues
on 08:15 Fri 04 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/4/11 12:15 AM, Dr. Ed Morbius wrote: But why do you need screen, then? Terminal multiplexing, session persistence, scrollback/logging, split screen (top running in the top panel, shell underneath, etc.), workflow organization (similar processes are grouped in a screen session). But all of that just happens by itself in a GUI screen and isn't limited to text mode. I think you're fundamentally failing to understand my operating mode. Local system == Linux == my administrative center. Remote hosts. May be a dozen. May be 20,000. Or some number between or beyond. Desktop persistence is local. If I have to interactively operate on an individual remote host, I'm doing my job wrong. Preferably that's limited to initial provisioning and possibly hardware troubleshooting. Ideally, not even then (I haven't met my ideal). I'm really not particularly interested in having some complex GUI state on multiple remote systems. Again: my objective isn't to change your mind but possibly open it a tad. That appears to be increasingly unlikely. I'm writing this mail in mutt, in a screen session with multiple mailboxes open, each to its own screen window. It's like a multi-tabbed GNOME or KDE terminal, except that the session persists even if the controlling terminal is killed, or X dies altogether. Yes, but you are limited to text mode apps. Feature. Running remote GUI management apps is an utter fail. If you've *GOT* to run some remote GUI application, then yes, you're going to want a tool that supports it, of which there are several, and of which NX is merely one of many options. It's not a best, standard, open, free, or actively developed (in free software) solution. I'm done here. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! 
Re: [CentOS] Gnu Screen - terminal issues
on 09:10 Fri 04 Mar, Sean Carolan (scaro...@gmail.com) wrote: In this case, you might want to conditionally assign some reasonable value on failure. Say: tput -T $TERM init >/dev/null 2>&1 || export TERM=xterm 'tset -q' is another test which can be used. The remote host's $TERM variable is in fact xterm. When I connect to the screen session the $TERM variable is 'screen'. Are you running screen locally or remotely? My experience is it's best to launch SSH sessions in their own terminal(s), then start screen on the remote side. This also generally provides more utility (you want a single session to a host, and a logically grouped set of shells / processes on that host). Nesting screen sessions is possible, but generally not terribly friendly due to having to hit multiple C-a escapes for screen commands. I think it's because I'm opening a new ssh session in each screen window. Not a huge deal; I mainly use this for short commands, and if I need to run something longer I just write it all out in a text editor and paste it into the terminal. Or you could write a script, scp it to the hosts you want to run it on (testing first, natch), and exec it: for host in $(cat hostlist); do scp myscript $host:.; done [fiddle around with tests or verification as necessary] for host in $(cat hostlist); do echo "** $host **"; ssh $host ./myscript; done ... which is a shell idiom that shows up a LOT in my history. As I mentioned earlier, dsh (distributed ssh) is a very powerful tool for running multiple remote commands. Puppet, cfengine, and other tools may also be useful. Scales from low multiples through thousands and more of hosts. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
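[Editor's note] The copy-then-run idiom above, as a runnable sketch with a minimal failure check. The host names and script name here are placeholders, not from the post.

```shell
#!/bin/sh
# The scp-then-ssh idiom from the post, wrapped in a function.
# Hosts and script name are placeholders for illustration.
hosts="web1 web2 db1"
script="myscript"

run_on_all() {
    for host in $hosts; do
        echo "** $host **"
        # bail out on this host if the copy fails, rather than
        # executing a stale copy of the script
        if ! scp "$script" "$host:."; then
            echo "copy to $host failed, skipping" >&2
            continue
        fi
        ssh "$host" "./$script"
    done
}
```

For larger fleets, as the post notes, dsh or a configuration-management tool (Puppet, cfengine) replaces the hand-rolled loop.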
Re: [CentOS] virtualization on the desktop a myth, or a reality?
on 11:38 Thu 03 Mar, Always Learning (cen...@g7.u22.net) wrote: On Wed, 2011-03-02 at 19:18 -0800, Dr. Ed Morbius wrote: It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g.: vfat). My dual-booting, actually tri-booting, with Vista (ugh!), Centos (brilliant) and Fedora 14 (not keen and a bit seriously buggy) allows me in Linux to access and change the file space content used by the other two operating systems. Surely that constitutes simultaneous access to storage? I should have hedged: there are means of accessing NTFS from Linux (ntfs-3g drivers) and Linux ext2/3 filesystems from Windows (Explore2fs and some ported drivers, IIRC). As I recall, writing via ntfs-3g still triggers a filesystem scan on the next Windows boot. The ext2/3 access last I used it (years ago) worked, but wasn't particularly fluid. Neither gives you proper multi-user semantics (/etc/passwd and wherever NT stores its user perms/IDs stuff aren't used). If you've coordinated UIDs, yes, it's very possible to share Linux partitions between multi-booted systems, though I'd still argue that this is less than optimal. A chroot works pretty well (and keeps things like LD search paths sane). KVM is /very/ lightweight and allows for separate process space. Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+. The win is still virtualization. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] virtualization on the desktop a myth, or a reality?
on 15:37 Thu 03 Mar, Lamar Owen (lo...@pari.edu) wrote: On Thursday, March 03, 2011 01:20:06 pm Dr. Ed Morbius wrote: Compare against CIFS/Samba shares or NFS exports between booted host/guests. You get native filesystem support (under the host/guest as relevant), and mappings via CIFS/Samba and/or NFS/NIS+. The win is still virtualization. There are situations where dual-booting is a necessary thing to do; one of those is low-latency professional audio where accurate I think I addressed that reality. For some needs, you need to be on bare metal, though whether this is accomplished via multi-booting or multiple systems (if you're doing professional music editing, presumably you can justify a dedicated system to that task). timekeeping is required; basically anything that needs the -rt preemptive kernel patches. I actually have need of this, from multiple OS's, and while I've tried the 'run it in VMware' thing with Windows and professional audio applications the results were not satisfactory. What surprises me is that there aren't more systems available which provide separate bare-metal computing environments within a single enclosure, perhaps with some form of shared storage, perhaps just integrated networking, to provide this sort of need. We see this in server space (blade and multi-system enclosures) but rarely if ever in consumer space. Otherwise, the solution would be to run the system with the low-latency requirements as the host. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Gnu Screen - terminal issues
on 13:36 Thu 03 Mar, Sean Carolan (scaro...@gmail.com) wrote: I really like gnu screen and use it every day but there's one thing that is a bit inconvenient, and that's the odd line wrapping and terminal size issues that seem to pop up. The problem crops up when I type or paste a really long command, and then go back and try to edit it; the text starts to wrap over itself and you have no idea what you are editing. Any fixes for this? Is your local terminal type known to all remote systems (in termcap)? I find I have to re-map $TERM to some low-common standard (e.g.: xterm) on many systems. Legacy Unix vendors being the most egregious for this. You can test $TERM compatibility in your shell login: tput -T $TERM init >/dev/null 2>&1 && echo yes || echo no In this case, you might want to conditionally assign some reasonable value on failure. Say: tput -T $TERM init >/dev/null 2>&1 || export TERM=xterm 'tset -q' is another test which can be used. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
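[Editor's note] The conditional fallback described above, packaged for a shell startup file. The function name and the trailing echo (for demonstration) are this sketch's additions.

```shell
# The $TERM fallback above, wrapped for a ~/.profile or ~/.bashrc on
# hosts with sparse termcap/terminfo databases: if the current $TERM
# can't be initialized, drop down to plain xterm.
term_fallback() {
    if ! tput -T "${TERM:-dumb}" init >/dev/null 2>&1; then
        TERM=xterm
        export TERM
    fi
    echo "$TERM"   # demonstration only; omit in a real startup file
}
```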
Re: [CentOS] Gnu Screen - terminal issues
on 16:30 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/2011 4:19 PM, Dr. Ed Morbius wrote: on 16:07 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/2011 3:34 PM, Dr. Ed Morbius wrote: on 13:36 Thu 03 Mar, Sean Carolan (scaro...@gmail.com) wrote: I really like gnu screen and use it everyday but there's one thing that is a bit inconvenient, and that's the odd line wrapping and terminal size issues that seem to pop up. The problem crops up when I Is your local terminal type known to all remote systems (in termcap)? Instead of running screen, can you run a desktop session under freenx on a server No xlibs on our servers. You need _a_ machine somewhere that can host a freenx session. It doesn't need to be the target of the ssh connections, just something that will mostly stay powered up if you want the session to stay active all the time. A development box or even a VM session that would work - or a desktop machine if it stays on all the time. It doesn't even have to run X on its own console. Frankly, given the alternative ease of automatically redefining $TERM, that strikes me as a slightly overengineered solution. Not that I'm intrinsically opposed to overengineering. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Gnu Screen - terminal issues
on 16:50 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/2011 4:36 PM, Dr. Ed Morbius wrote: Instead of running screen, can you run a desktop session under freenx on a server No xlibs on our servers. You need _a_ machine somewhere that can host a freenx session. It doesn't need to be the target of the ssh connections, just something that will mostly stay powered up if you want the session to stay active all the time. A development box or even a VM session that would work - or a desktop machine if it stays on all the time. It doesn't even have to run X on its own console. Frankly, given the alternative ease of automatically redefining $TERM, that strikes me as a slightly overengineered solution. Not that I'm intrinsically opposed to overengineering. Don't knock it until you've tried it. A full GUI desktop, even mostly hosting a bunch of terminal windows is a lot more comfortable place to work than a screen session I use screen *very* extensively, including locally. It works quite well. I run Linux locally, not just remotely. and mousable cut/paste that works even with the local OS running the NX client is extremely handy I've got that natively via X11. (how many times do you do the same command on several machines?). for host in list; do ssh $host 'commands'; done dsh hostgroup 'commands' I almost never log in directly at a linux console anymore and if I need to do something from home or remotely, I just pick the session that was my last desktop at work. What's your desktop system? I'm living in X11 on a laptop with good suspend/restore. A new terminal is hotkeyed, so no mousing around to get that. If I'm on a host frequently I'll generally have a screen session or several running there. Few if any remote clients have any X support. Works pretty well for me. There's no win (in this case) for freenx. I meant to note earlier: the upstream NX developers have gone non-free, no? Is there a free software development branch? -- Dr. 
Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] virtualization on the desktop a myth, or a reality?
on 16:44 Thu 03 Mar, Lamar Owen (lo...@pari.edu) wrote: On Thursday, March 03, 2011 04:24:14 pm Dr. Ed Morbius wrote: I think I addressed that reality. Part of it, yes. For some needs, you need to be on bare metal, though whether this is accomplished via multi-booting or multiple systems (if you're doing professional music editing, presumably you can justify a dedicated system to that task). It's not the computer portion of a separate dedicated system that would be expensive; it's the audio interfaces, patching, and control surfaces. Much much much easier to dual-boot in a workflow-friendly fashion. It would be decidedly nice to have virtualization running well enough to handle all the needs; but it requires twice the capacity machine to do it. I thought a bit about that when posting earlier. I still disagree WRT dual-booting. And no, virtualization doesn't need twice the hardware by a long shot (aggregated load averaging, shared componentry, and a host of other savings). Audio's pretty easy, as you could select between sources and output (or input) accordingly. Ditto inputs (keyboard, mouse, etc.). Storage might be virtualized/aggregated somehow. For video, you want high-performance. I'm thinking an integrated KVM might work, or something like it. If done in hardware with digital inputs it should be pretty good. How you'd split / select displays would be a design question. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Gnu Screen - terminal issues
on 18:10 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/2011 5:49 PM, Dr. Ed Morbius wrote: Not that I'm intrinsically opposed to overengineering. Don't knock it until you've tried it. A full GUI desktop, even mostly hosting a bunch of terminal windows is a lot more comfortable place to work than a screen session I use screen *very* extensively, including locally. It works quite well. I've forgotten - does a vi session in screen window resize correctly like it does in an xterm if you want to change it? Yes. Assuming your termcap library/ies are consistent. I run Linux locally, not just remotely. NX works on Linux too. But 'locally' to me means a bunch of different machines. I'm aware of that. However the power of tunneled X sessions (when necessary) and ssh (when not) makes having a full-desktop tool (NX, VNC, etc.) markedly less attractive. I almost never log in directly at a linux console anymore and if I need to do something from home or remotely, I just pick the session that was my last desktop at work. What's your desktop system? Mostly windows at work, mac at home, but sometimes I'll be at a linux console and pick up the session there too. That would explain a lot of your perspective. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Gnu Screen - terminal issues
on 19:21 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/2011 6:46 PM, Dr. Ed Morbius wrote: NX works on Linux too. But 'locally' to me means a bunch of different machines. I'm aware of that. However the power of tunneled X sessions (when necessary) and ssh (when not) makes having a full-desktop tool (NX, VNC, etc.) markedly less attractive. Oh - NX runs entirely over ssh in the default setup. You don't need any other ports and the whole session is encrypted. It's not the ports. It's the graphical presentation. Oh, and I've run NX. I'm really just fine with terminal windows and SSH-forwarded apps if those are necessary. I don't keep enough state running remotely to make it worth my time to have another level of nested desktop cruft to deal with. Ties up too much real-estate for no win. As I've noted several times: I've already got SSH local/remote. For NX I'd have to install clients and servers, and X libraries. For something I really don't want or need in the first place. While I could see the value for someone /not/ running a native Linux desktop environment. I'm /not/ trying to change your mind about anything, maybe just open it up a tad to see my PoV. I believe I've talked myself in a circle at least once, and we're pretty far from anything particularly CentOS related. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Gnu Screen - terminal issues
on 21:24 Thu 03 Mar, Les Mikesell (lesmikes...@gmail.com) wrote: On 3/3/11 7:48 PM, Dr. Ed Morbius wrote: I do like the way gnome collapses the icons in the task bar when you have enough of them - and pops up the list so you can see it. It makes it easy to find the terminal session connected to some particular remote host. WindowMaker has a windowlist. Even better. I usually last 1-4 hours when I periodically try GNOME. KDE and XFCE I might last a few days. Then it's back to the One True Window Manager. I'm really just fine with terminal windows and SSH-forwarded apps if those are necessary. But why do you need screen, then? Terminal multiplexing, session persistence, scrollback/logging, split screen (top running in the top panel, shell underneath, etc.), workflow organization (similar processes are grouped in a screen session). I'm writing this mail in mutt, in a screen session with multiple mailboxes open, each to its own screen window. It's like a multi-tabbed GNOME or KDE terminal, except that the session persists even if the controlling terminal is killed, or X dies altogether. Screen is one of those amazingly powerful Linux tools, once you stumble across it. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] virtualization on the desktop a myth, or a reality?
on 21:35 Wed 02 Mar, Rudi Ahlers (r...@softdux.com) wrote: On Wed, Mar 2, 2011 at 7:56 PM, compdoc comp...@hotrodpc.com wrote: Yes, I know that I could have used KVM, VMWare or VirtualBox, but I wanted to use what's included already. ... What I'm getting at: Can, or will virtualization replace dual boot systems It far and away already has. Dual-booting is a bastard compromise which forces you to select between alternative OSs, doesn't allow for simultaneous access to features (and storage) of both, and generally necessitates use of some low-standard transfer storage partition (e.g.: vfat). Virtualization allows you to have your pick of base host OS (Linux, Windows, Mac, or bare-iron virtualization with some technologies), while offering a reasonable facsimile of bare-iron performance, often allowing multiple guests to run simultaneously. For realtime-performant needs (mostly gaming, though some engineering tasks come to mind), you'll still want to avoid a virtualized host, but for many, many other tasks this is more than adequate. The primary limitation I've encountered is RAM utilization. As much of the stuff vendors provide and however cheaply, it's never enough. And it's the truly mundane stuff (browser sessions usually) that seems to suck the most RAM. or even give one the ability to use your Desktop PC to its full advantage? For example, while I'm busy rendering a 3-hour 3D scene in Maya (running in Windows 7) I want to watch a movie in Linux - but have both run in real-time. My PC is capable of it with 2x Corei7 CPU's 16GB RAM. - this is just an example. If you could reduce priority on the render, you'll likely be happier. Some resources (disk IO particularly) aren't fungible and may have impacts on virtualized environments though. This means swap as well. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! 
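[Editor's note] "Reduce priority on the render" above can be done with the standard `nice`/`renice` utilities; the wrapper name and render command here are placeholders.

```shell
# Run a long batch job (such as the render) at minimum CPU priority
# so interactive sessions and VM guests stay responsive. The function
# name is a placeholder; nice/renice are the standard utilities.
run_lowprio() {
    nice -n 19 "$@"
}

# For a job that's already running:
#   renice 19 -p "$render_pid"
# Note the post's caveat: disk I/O contention isn't helped by CPU
# niceness alone (ionice can demote I/O priority on newer kernels).
```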
Re: [CentOS] Sorting by date
on 13:23 Mon 28 Feb, Mark (mhullr...@gmail.com) wrote: On Mon, Feb 28, 2011 at 12:35 PM, erikmccaskey64 erikmccaske...@zoho.com wrote: Original: Jan 23 2011 10:42 SOMETHING 2007.12.20.avi Jun 26 2009 SOMETHING 2009.06.25.avi Feb 12 2010 SOMETHING 2010.02.11.avi Jan 29 2011 09:17 SOMETHING 2011.01.27.avi Feb 11 2011 20:06 SOMETHING 2011.02.10.avi Feb 27 2011 23:05 SOMETHING 2011.02.24.avi Output: Feb 27 2011 23:05 SOMETHING 2011.02.24.avi Feb 11 2011 20:06 SOMETHING 2011.02.10.avi Jan 29 2011 09:17 SOMETHING 2011.01.27.avi Jan 23 2011 10:42 SOMETHING 2007.12.20.avi Feb 12 2010 SOMETHING 2010.02.11.avi Jun 26 2009 SOMETHING 2009.06.25.avi How could I get the output where the newest file is at the top? You keep asking your questions in (at least) both CentOS and Ubuntu lists. And Debian. At least he didn't cross-post, but yes, this is tiresome. Which OS are you using? More importantly, have you considered looking things like this up in the man pages, then on the web where really basic questions like this are easily found, with answers? Bravo. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
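[Editor's note] Since the thread (deliberately) never spells out the answer: newest-first sorting is a one-liner with `ls -t`. The wrapper function is added only for illustration.

```shell
# Newest file at the top: ls -t sorts by modification time, most
# recent first; add -l for the long listing shown in the post.
newest_first() {
    ls -t "$@"
}

# e.g.  newest_first /path/to/files
```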
Re: [CentOS] can't disconnec iSCSI targets, please help
on 12:02 Mon 28 Feb, Rudi Ahlers (r...@softdux.com) wrote: Hi, I'm trying to disconnect some iSCSI targets, but can't seem to. [root@localhost ~]# iscsiadm -m session tcp: [1] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:500gb tcp: [3] 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.thin3.vg0.1tba tcp: [4] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:extent0 tcp: [5] 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.iscsi0.vg0.500gb I need to disconnect all 4 of these [root@localhost ~]# iscsiadm -m node -T 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.iscsi0.vg0.500gb -p 196.34.136.200 --logout That's a logout to a specific node and dataport. If you're wrong about the IQN and/or dataport, it's not going to work. I think you want: iscsiadm -m node -u And for good measure: iscsiadm -m session -u If you do want to delete specific discovery records and need to identify the dataport associated with them: iscsiadm -m discoverydb ... will print them. The target is still there, even though I tell it to disconnect. [root@localhost ~]# iscsiadm -m session tcp: [1] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:500gb tcp: [3] 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.thin3.vg0.1tba tcp: [4] 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:extent0 tcp: [5] 192.168.2.200:3260,1 iqn.2011-2.za.co.securehosting:RAID.iscsi0.vg0.500gb I can't delete it either: [root@localhost ~]# iscsiadm -m node --op delete --targetname 192.168.2.202:3260,1 iqn.2011.01.22.freenas.nvr:500gb iscsiadm: no records found! Restarting iscsi gives some odd errors: If you're going to that extent, you can umount all remote targets, disable iscsid and iscsi services, reboot, and clear out the various entries under /var/lib/iscsi/. Leave the directories (/var/lib/iscsi/*/), but remove the files beneath them. Note that this is a bit like doing brain surgery with a sledgehammer. Lacking in finesse, but for some jobs, effective. 
You might also try posting to the open-iscsi mailing list. I've found that iscsi is very temperamental and poorly understood by most. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
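[Editor's note] The sledgehammer sequence above, collected into one sketch. Commands are as named in the post (CentOS 5 service names); the function wrapper is an addition, the reboot between stopping services and clearing records is left to the operator.

```shell
#!/bin/sh
# Blunt-instrument iSCSI reset, per the post: log everything out,
# stop the services, then clear the record files under /var/lib/iscsi
# while keeping the directory tree itself. Run only when targeted
# logouts (iscsiadm -m node -u) have failed.
iscsi_reset() {
    iscsiadm -m node -u          # log out of all node records
    iscsiadm -m session -u       # and any lingering sessions
    service iscsi stop
    service iscsid stop
    # remove record files, keep the /var/lib/iscsi/*/ directories:
    find /var/lib/iscsi -type f -delete
}
```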
Re: [CentOS] strace issue in 5.5?
on 13:29 Sat 19 Feb, Eric Gerzon (ericger...@gmail.com) wrote: Hello, I am trying to get some more information about the following issue: # strace -p 2256 attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted As root? (Prompt suggests yes). Is another strace attached to the process already (or gdb session)? Some kernel hardening / selinux features may interfere with this as well, though I don't know specifics. I am trying to trace an mdadm re-sync pid and I keep getting the above error. I have done some digging on google and forums, but have not had much luck. Redhat-release says CentOS release 5.5 (Final) uname: 2.6.18-194.32.1.el5 Any help would be great. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
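[Editor's note] One of the causes suggested above (another strace or gdb already attached) can be checked directly on Linux: the `TracerPid` field of `/proc/<pid>/status` is non-zero while a ptrace tracer is attached. The helper name is an addition.

```shell
# Print the PID of whatever is tracing a given process; prints 0
# when nothing is attached (standard /proc/<pid>/status field on
# Linux 2.6+, so this applies to the CentOS 5 kernel above).
tracer_of() {
    awk '/^TracerPid:/ { print $2 }' "/proc/$1/status"
}

# e.g.  tracer_of 2256
```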
Re: [CentOS] Recommendation for a Good Vulnerability Scanning Service?
on 14:20 Fri 18 Feb, Michael B Allen (iop...@gmail.com) wrote: Hi, Can someone recommend a good vulnerability scanning service? I just need the minimum for PCI compliance (it's a sort of credit card processing certification). First: if you're headed down the compliance / certification route, you're going to want to go with a certified vendor / service provider for this. I got a free scan from https://www.hackerguardian.com/ and their scan reported a number of Fail results. I haven't checked them all yet but most seem to be things for which fixes were backported looong ago by The Upstream Vendor. You can also run your own scans as a preemptive measure -- nessus is probably the baseline tool, though I'd also be interested in what others people would recommend. I haven't spoken with the hackerguardian people yet but it would be nice if I could just say I'm using CentOS 5.5 and have them factor that into their report so that I can focus on any real issues. Are there vulnerability scanning services that are more or less sophisticated about this? I'd suggest you educate yourself on the PCI compliance issue, and query your prospective vendor(s) on what specific scans they run and/or how these are tuned to specific operating environments. I'd tend to suspect that vuln/pen testing is going to be based more on known vulnerabilities than your environment. -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] working with multiple password protected iSCSI targets on one host
on 17:11 Tue 15 Feb, Santi Saez (santis...@woop.es) wrote: On 15/02/2011 14:54, Rudi Ahlers wrote: Hi Rudi, How do I setup multiple password protected iSCSI targets on Linux? (..) But, now I need to mount another iSCSI target, from a different SAN that has a different username/password than what I have configured here for the one already mounted. How do I tell iscsiadm which CHAP settings to use with which iSCSI target? I think there is no elegant way to do this... I follow these steps: Only if all CHAP auth is the same for all targets, in which case you can specify the user/pass pairs in /etc/iscsi/iscsid.conf Otherwise, you have to configure individual targets via iscsiadm. I like your use of environment variables to specify targets and portals (though I believe the default port is used if unspecified). I'd arrived at the same convention myself. And though I'd turned up the authmethod assignment method somewhere, I can't for the life of me remember where -- neither the iscsi-initiator-utils document (README) nor the iscsiadm manpage seem to address this -- they're otherwise pretty good. Nice docs. 1) Set discovery.sendtargets.{auth,password} in iscsid.conf for target-1 + restart iscsid service. 2) Set CHAP settings for target-1 + connect: # iscsiadm -m node --targetname ${TARGETNAME1} -p ${PORTAL1} -o update -n node.session.auth.username -v ${USERNAME} # iscsiadm -m node --targetname ${TARGETNAME1} -p ${PORTAL1} -o update -n node.session.auth.password -v ${PASSWORD} # iscsiadm -m node --targetname ${TARGETNAME1} -p ${PORTAL1} -l 3) Disconnect from target-1: # iscsiadm -m node --logoutall all 4) Set discovery.sendtargets.{auth,password} in iscsid.conf for target-2 + restart iscsid service. 
5) Set CHAP settings for target-2 + connect: # iscsiadm -m node --targetname ${TARGETNAME2} -p ${PORTAL2} -o update -n node.session.auth.username -v ${USERNAME} # iscsiadm -m node --targetname ${TARGETNAME2} -p ${PORTAL2} -o update -n node.session.auth.password -v ${PASSWORD} # iscsiadm -m node --targetname ${TARGETNAME2} -p ${PORTAL2} -l It works! Now you can login/logout in both iSCSI targets: # iscsiadm -m node --logoutall all # iscsiadm -m node --loginall all Cheers, -- Santi Saez http://woop.es ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos -- Dr. Ed Morbius, Chief Scientist /| Robot Wrangler / Staff Psychologist| When you seek unlimited power Krell Power Systems Unlimited| Go to Krell! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
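[Editor's note] Santi's per-target steps above, wrapped in one helper. This sketch also sets `node.session.auth.authmethod` (the assignment the reply mentions hunting for); target, portal, and credentials are placeholder arguments.

```shell
#!/bin/sh
# Per-target CHAP setup as one function: set the auth method and
# credentials on the node record, then log in. Repeat per target,
# each with its own credentials, instead of relying on the global
# settings in /etc/iscsi/iscsid.conf.
set_chap_and_login() {
    target=$1 portal=$2 user=$3 pass=$4
    iscsiadm -m node -T "$target" -p "$portal" -o update \
        -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T "$target" -p "$portal" -o update \
        -n node.session.auth.username -v "$user"
    iscsiadm -m node -T "$target" -p "$portal" -o update \
        -n node.session.auth.password -v "$pass"
    iscsiadm -m node -T "$target" -p "$portal" -l
}
```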
Re: [CentOS] iSCSI disk preparation
on 09:34 Tue 08 Feb, John Hodrien (j.h.hodr...@leeds.ac.uk) wrote:
> On Tue, 8 Feb 2011, Dr. Ed Morbius wrote:
>> *OR* as a special case, if access is *only* read-only (or read-only to
>> all but one initiator).
> I get the all-read-only case, but wouldn't the read-only clients end up
> caching filesystem data that has since been changed by the read-write
> client? I'd have thought the read-only initiators would get pretty
> quickly confused.

Good point. If the data were highly volatile, this would seem to be
likely. I'm not sure what the consequences of that confusion might be.
This could be an interesting little side-research project.

Infrequent writes, a journaled filesystem, and minimized caching, while
not entirely kosher, might work well enough in many cases. Probably not
what you'd want in a production world, though, and NFS read-only shares
would seem a more appropriate solution.

Cache coherence is very, very sticky stuff, and it's what burns a whole
lot of computing operations.

-- 
Dr. Ed Morbius, Chief Scientist   | When you seek unlimited power
Krell Power Systems Unlimited     | Go to Krell!
Re: [CentOS] iSCSI disk preparation
on 16:28 Tue 08 Feb, Jason Brown (jason.br...@millbrookprinting.com) wrote:
> On 02/07/2011 05:09 PM, Dr. Ed Morbius wrote:
>> on 15:19 Mon 07 Feb, Ross Walker (rswwal...@gmail.com) wrote:
>>> On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorb...@gmail.com wrote:
>>>> on 13:56 Mon 07 Feb, Jason Brown (jason.br...@millbrookprinting.com) wrote:
>>>>> I am currently going through the process of installing/configuring an
>>>>> iSCSI target and cannot find a good write-up on how to prepare the
>>>>> disks ...
>>>> What are you using for your iSCSI target (storage array)?
>>>> ...
>>>> Truth is, there's a lot of flexibility with iSCSI, but not a lot of
>>>> guidance as to best practices that I could find. Vendor docs have
>>>> tended to be very poor. Above is my recommendation, and should
>>>> generally work. Alternate configurations are almost certainly
>>>> possible, and may be preferable.
>>> If a best-practices doc could be handed to you right now, what would
>>> you like it to contain? <grin>
>> I've got about 35 pages of that document sitting on my local HD now.
>> Negotiating with management about releasing some of it in some form or
>> another.
> ...
> I would be happy to draft something up and put it on a wiki somewhere,
> but I would need a list of talking points to start with.

How's this do you?

> In our configuration, we are going to have our iSCSI targets and
> initiators all connected to the same layer-3 switch and isolate the
> iSCSI traffic on separate networks. Would it be beneficial to also set
> up multipath for this as well?

That's pushing the limits of my knowledge/understanding.

Multipath aggregates multiple pathways to a data store. In the case of
the Dell equipment mentioned in my post, there are two controllers, with
4 TOE/NIC cards each, offering 8 pathways to each target storage LUN.
Multipath aggregates all 8 pathways to a single target, and provides both
performance and availability enhancements by utilizing these pathways in
turn (defaulting to round-robin sequencing), and presumably disabling use
of any pathway(s) which become unavailable (whether or not any
monitoring/alerting of this failover/fail-out is possible would be very
useful to know).

It's also possible to configure multiple initiator pathways, though in
our case we've already aggregated multiple NICs into a bonded ethernet
device.

From the description you've provided, I don't think you've got a
multipath configuration. I don't know what would happen if you attempted
to set up multipath, but presuming not too much magic smoke escapes, I'd
be interested in finding out. Presumably you'd have to configure
/etc/multipath.conf appropriately to pick up the target(s).

-- 
Dr. Ed Morbius, Chief Scientist   | When you seek unlimited power
Krell Power Systems Unlimited     | Go to Krell!
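For reference, picking up targets in /etc/multipath.conf looks something
like this -- a sketch only, since the details are installation-specific:
the WWID and alias below are placeholders (scsi_id against the device
gives the real WWID), and `multipath -ll` then shows the aggregated
paths:

```
# /etc/multipath.conf (fragment; WWID and alias are illustrative)
defaults {
        path_grouping_policy    multibus
        path_selector           "round-robin 0"
}
multipaths {
        multipath {
                wwid    36090a028e0d1c8526bd3f4830a5ad012
                alias   san0
        }
}
```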
Re: [CentOS] Ken Olsen of DEC, 1927-2011
on 17:02 Tue 08 Feb, Les Mikesell (lesmikes...@gmail.com) wrote:
> On 2/8/2011 4:40 PM, Johnny H wrote:
>> Thanks Mark, for this and your previous email. It is always sad when
>> anyone dies, God rest his soul. To be honest, I have never heard of him
>> before, but it appears he made a massive contribution in his life and
>> he is probably a good inspiration to many people.
> Unfortunately, the thing he will probably be most remembered for is the
> 1977 quote: "There is no reason for any individual to have a computer in
> his home." That and snake oil.

In fairness, Olsen wasn't the only one to make a comically understated
estimate of future widespread computer use. Ed Yourdon proclaimed in 1975
(the year before Apple Computer was founded): "unless you're very rich or
very eccentric, you'll never have your own computer." In fairness, he
fessed up to it in a later book: http://bit.ly/hlIO1v

The snake oil comment is also interesting in context. The main thrust
appeared to be that pouring the label UNIX over a pile of silicon and
bits didn't magically make all compatibility issues disappear:
http://sinix.org/blog/?p=16

  Olsen, without mentioning particular companies, likened some vendors of
  Unix products to "snake oil" salesmen and said the claim that Unix will
  resolve incompatibility problems within multi-vendor networks is "a
  naive idea."

  "It still won't resolve the problem of interchangeability," he said,
  adding that the operating system is just one of the several components
  needed to achieve compatibility. He cited windowing ability and
  communications protocols as two other major components.

Somewhat ironically, by the time Olsen made those comments (1987),
interchangeability was being addressed by the GNU project (the GNU
Manifesto was written in 1985), windowing by the X11 project (1984), and
communications by TCP/IP (in BSD UNIX as of 1983).

Still, yes, among my first UNIX experiences were the campus PDP-11
systems, and I suffered for a while on VAX/VMS and Alpha/OpenVMS. I've
resisted the impulse to declare the death of a snake-oil salesman,
however.

-- 
Dr. Ed Morbius, Chief Scientist / Robot Wrangler | When you seek unlimited power
Krell Power Systems Unlimited                    | Go to Krell!
Re: [CentOS] Ken Olsen of DEC, 1927-2011
on 16:34 Tue 08 Feb, Raymond Lillard (r...@sonic.net) wrote:
> On 02/08/2011 03:28 PM, Dr. Ed Morbius wrote:
>> on 17:02 Tue 08 Feb, Les Mikesell (lesmikes...@gmail.com) wrote:
>>> On 2/8/2011 4:40 PM, Johnny H wrote:
>>>> Thanks Mark, for this and your previous email.
>>> Unfortunately, the thing he will probably be most remembered for is
>>> the 1977 quote: "There is no reason for any individual to have a
>>> computer in his home."
> I respectfully disagree. One mis-statement in a public speech does not
> come close to defining the man.
>
> The statement is generally quoted without context, as it is here. He was
> actually speaking of computer-controlled homes (temperature, lighting,
> etc.) rather than the PC (entertainment and communication) found in most
> homes today. We are, of course, sneaking up on the home-control thing
> (energy management), so in time he will have been mistaken in the
> context he intended.
>
> Ironically, in those years my well-appointed apartment was furnished
> with a surplus PDP-8, an air mattress, a stereo system and a Mr. Coffee.
> This was of course prior to wife and family. :-) I did get to keep the
> stereo system.
>> In fairness, Olsen wasn't the only one to make a comically understated
>> estimate of future widespread computer use. Ed Yourdon proclaimed in
>> 1975 (the year before Apple Computer was founded): "unless you're very
>> rich or very eccentric, you'll never have your own computer." In
>> fairness, he fessed up to it in a later book: http://bit.ly/hlIO1v
> Others have made similarly short-sighted remarks in public places. Most
> of us make them on less exalted stages and so are not called to account.

Most of us aren't CEOs of leading tech firms, with a core professional
competency being to sort out which way the wind blows (or self-proclaimed
tech visionaries, in Yourdon's case).

That said: in tech, the wind changes direction often, and there are many
examples of the one-time pack leader making what are later seen to be
highly inaccurate dismissals of an upstart technology or product. Usually
in the midst of trying to turn back the tide.

I'm just wracking my brains right now to think if there might possibly be
some examples involving Linux, but I can't for the life of me think of
one.

-- 
Dr. Ed Morbius, Chief Scientist / Robot Wrangler | When you seek unlimited power
Krell Power Systems Unlimited                    | Go to Krell!
Re: [CentOS] iSCSI disk preparation
on 13:56 Mon 07 Feb, Jason Brown (jason.br...@millbrookprinting.com) wrote:
> I am currently going through the process of installing/configuring an
> iSCSI target and cannot find a good write-up on how to prepare the disks
> on the server. I would like to mirror the two disks and present them to
> the client. Mirroring isn't the question, it's how I go about it that is
> the problem. When I partitioned the two drives and mirrored them
> together, then presented them to the client, it showed up at the client
> as a disk without a partition on it. Should I partition the drive again
> and then lay the file system down on top of that? Or should I delete the
> partitions on the target server and just have sda and sdb mirrored,
> then, when the client attaches the disk, partition it (/dev/sdc1) and
> write the file system?

What are you using for your iSCSI target (storage array)?

Generally, RAID management of the storage is managed on the target side.
You'd use the target's native abilities to create/manage RAID arrays to
configure one or more physical disks as desired. If you're using a
dedicated vendor product, it should offer these capabilities through some
interface or another.

Presentation of the iSCSI device is as a block storage device to the
initiator (the host mounting the array). That's going to be an
unpartitioned device. You can either further partition this device or use
it as raw storage. If you're partitioning it, and using multipath, you'll
have to muck with kpartx to make multipath aware of the partitions. We've
elected to skip this locally and create a filesystem on the iSCSI device
directly.

Creating and mounting filesystems are both generally managed on the
initiator.

Truth is, there's a lot of flexibility with iSCSI, but not a lot of
guidance as to best practices that I could find. Vendor docs have tended
to be very poor. Above is my recommendation, and should generally work.
Alternate configurations are almost certainly possible, and may be
preferable.

-- 
Dr. Ed Morbius, Chief Scientist   | When you need power
Krell Power Systems Unlimited     | Go to Krell!
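The "filesystem straight onto the device, no partition table" approach is
a one-liner on the initiator. A sketch, runnable anywhere without root by
letting a file-backed image stand in for the real device (/dev/sdc and
the paths below are illustrative):

```shell
#!/bin/sh
# Create a filesystem directly on an unpartitioned block device.
# A scratch file stands in for the iSCSI device here.
IMG=/tmp/iscsi-demo.img
dd if=/dev/zero of="$IMG" bs=1M count=8 2>/dev/null
mke2fs -F -q "$IMG"      # on a real initiator: mke2fs -j /dev/sdc
# ...then mount it:        mount /dev/sdc /mnt/store
```

This sidesteps kpartx entirely, at the cost of one filesystem per
exported LUN.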
Re: [CentOS] iSCSI disk preparation
on 15:19 Mon 07 Feb, Ross Walker (rswwal...@gmail.com) wrote:
> On Mon, Feb 7, 2011 at 2:36 PM, Dr. Ed Morbius dredmorb...@gmail.com wrote:
>> on 13:56 Mon 07 Feb, Jason Brown (jason.br...@millbrookprinting.com) wrote:
>>> I am currently going through the process of installing/configuring an
>>> iSCSI target and cannot find a good write-up on how to prepare the
>>> disks on the server. [... full question quoted upthread ...]
>> [... target-side RAID, initiator-side filesystem, and kpartx discussion
>> quoted upthread ...]
>>
>> Truth is, there's a lot of flexibility with iSCSI, but not a lot of
>> guidance as to best practices that I could find. Vendor docs have
>> tended to be very poor. Above is my recommendation, and should
>> generally work. Alternate configurations are almost certainly possible,
>> and may be preferable.
> If a best-practices doc could be handed to you right now, what would you
> like it to contain? <grin>

I've got about 35 pages of that document sitting on my local HD now.
Negotiating with management about releasing some of it in some form or
another. It's based on some specific vendor experience (Dell MD3200i),
and CentOS. Among the problems: I suspect there's a range of experiences
and configuration issues.

> I would suspect that it would be different whether you're setting up an
> initiator or a target, so maybe start by splitting it into two sections.

Absolutely -- if you're setting up your own target, you've got a set of
problems (and opportunities) not facing those with vendor-supplied kit
with its various capabilities and limitations.

Basics:

- Terminology. I found the Wikipedia article and the Open-iSCSI README to
  be particularly good here.

- Presentation of the devices. It took us a while to realize that we were
  going to get both a set of raw SCSI devices AND a set of multipath
  (/dev/dm-<number>) devices, and that it was the latter we wanted to
  play with. We spent a ridiculous amount of time trying to debug and
  understand what turned out to be expected and correct behavior, though
  this was either undocumented or very unclearly documented. How much of
  this is specific to the MD3xxxi kit with its multiple ports and
  controllers isn't clear to me.

- Basic components. For us these were vendor tools (and identifying which
  of these were relevant was itself non-trivial), iscsiadm, multipath,
  and the CentOS netfs config scripts / environment. The
  device-mapper-multipath docs are shite.

- Overview of target and initiator setup: physical, cabling, network
  topology (a dedicated switched LAN or VLAN being recommended, away from
  core network traffic).

- As appropriate: multipath, its role, where it is/isn't needed, what it
  provides.

- Target-side set-up, including virtual disk creation, RAID
  configuration, network configuration, host-to-LUN mapping, IQN
  generation / identification, CHAP configuration (if opted), etc. I'd
  slice this into Linux/CentOS target configuration (specific tasks),
  preceded by a more generic "things you'll want to take care of"
  section. If people/vendors want to fill in their specific configuration
  procedures in additional sub-sections, that would be an option.

- Initiator-side set-up: installation of base packages, verifying iscsi
  functionality, verifying multipath functionality, verifying netfs
  functionality. Preparing storage (partitioning, filesystem creation).

- Target discovery from initiator. CHAP configuration (if opted), session
  log-in, state querying, log-out. DiscoveryDB querying
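The last item of the outline above reduces to a handful of iscsiadm
invocations. A cheat-sheet sketch (syntax per the iscsiadm manpage; the
portal and IQN are placeholders, and the commands are echoed rather than
executed since they need a live target):

```shell
#!/bin/sh
# Discovery / login / state / logout, one line each.
PORTAL=192.168.10.20:3260
TARGET=iqn.2011-02.example.com:store0

echo "discovery: iscsiadm -m discovery -t sendtargets -p $PORTAL"
echo "log in:    iscsiadm -m node -T $TARGET -p $PORTAL --login"
echo "sessions:  iscsiadm -m session"
echo "log out:   iscsiadm -m node -T $TARGET -p $PORTAL --logout"
```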
Re: [CentOS] iSCSI disk preparation
on 15:26 Mon 07 Feb, Ross Walker (rswwal...@gmail.com) wrote:
> On Mon, Feb 7, 2011 at 1:56 PM, Jason Brown
> jason.br...@millbrookprinting.com wrote:
>> I am currently going through the process of installing/configuring an
>> iSCSI target and cannot find a good write-up on how to prepare the
>> disks on the server. [... full question quoted upthread ...]
> Whatever you export -- the whole disk, a partition or a logical volume
> -- the initiator will see as a whole disk. So if you mirror sdaX and
> sdbX and export md0, the initiator will see a disk the size and contents
> of sdaX/sdbX. Just create the filesystem on the disk on the initiator
> and use it there.
>
> REMEMBER: iSCSI isn't a way for multiple initiators to share the same
> disk (though they can, using specialized clustering file systems), it is
> a way for multiple initiators to share the same disk subsystem.

*OR* as a special case, if access is *only* read-only (or read-only to
all but one initiator).

> You can't access the file system from both the target side and the
> initiator side at once, or it will corrupt the file system. If that's
> what you want, then you want NFS or Samba and not iSCSI.

Right, or another network-aware filesystem (Andrew, Coda, Gluster), none
of which are particularly widely used.

http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_file_systems

-- 
Dr. Ed Morbius, Chief Scientist   | When you need power
Krell Power Systems Unlimited     | Go to Krell!
Re: [CentOS] filter unwanted email
on 21:20 Thu 03 Feb, Alexander Dalloz (ad+li...@uni-x.org) wrote:
> Am 03.02.2011 17:33, schrieb Robert Heller:
>> Create a file named .procmailrc, that looks something like this:
>> [ ... ]
>> :0fw
>> | /usr/bin/spamassassin -p /home/rtspam/.spamassassin/user_prefs
> It is the worst solution to spawn a new spamassassin process with each
> mail going through the filter. The consequences depend on the amount of
> mail transported and the power of the host, but doing this with a
> resource-hungry and slow process like spamassassin is not recommended in
> any way.

It'd be helpful to mention the alternative:

1: Run spamassassin in daemon mode.

2: Invoke the spamc client (should be installed with spamassassin, though
   I haven't confirmed for CentOS) in your procmail (or other mail
   filtering) system.

If you're running your own MTA locally or remotely, most have hooks which
allow for automated spam rejection based on rules or filters as well.
Discussion of that is beyond the scope of this list, but
texts/documentation for postfix, exim4, smail, and sendmail should all
address this adequately.

MTA spam filtering, done at SMTP transaction time, can be _very_
efficient and effective. It also has the added benefit, if done
correctly, of indicating to the sender what's happened (e.g.: mail was
rejected as spam), which in the case of false-positive rejections may
(assuming a clueful user, unfortunately not valid in many instances) be
helpful in identifying and correcting such instances.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
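Concretely, the daemonized alternative turns the recipe above into
something like this (a sketch; the size guard that skips very large
mails is a common convention, not part of the original recipe, and it
assumes spamd has been started, e.g. via the spamassassin init script):

```
# ~/.procmailrc -- hand mail to the running spamd via the spamc client
:0fw
* < 256000
| /usr/bin/spamc
```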
Re: [CentOS] CentOS 4.8
on 14:46 Thu 03 Feb, Y. K. Liu (ykl...@gmail.com) wrote:
> I installed CentOS 4.8 (not CentOS 5) on VMware Fusion using an ISO file
> I downloaded. During the installation, it asked me to enter a user name
> and its password. I tried to enter root for the user name, but it would
> not let me do that. So I had to enter a non-root user name, and I did
> not have the root user name and password when the installation
> completed. I only had a non-privileged user name, and could not do any
> sudo work. How can I solve this problem? I noticed that CentOS 5 (not
> the 4.8 I needed to install) asks for the root password during the
> installation. But I need to install CentOS 4.8, not 5.
>
> Thank you very much, David, for your help. I found out that when I
> installed CentOS 4.8 on VMware Fusion, I should not have used the
> VMware default easy installation. When I unchecked the easy
> installation, it went through the questions, including setting the root
> password. So now I can set the root password during the installation.

Yep. VMware's easy install does that. Pick the expert mode and you'll get
full prompts.

If you're installing multiple systems, it's very helpful to configure a
kickstart server.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
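For the archives, the kickstart lines relevant to the root-password issue
look something like this (a fragment with illustrative values only; an
installed system leaves a full template in /root/anaconda-ks.cfg to crib
from):

```
# ks.cfg fragment (values illustrative)
install
url --url http://mirror.example.com/centos/4.8/os/i386
rootpw --iscrypted <crypted-hash-here>
authconfig --enableshadow --enablemd5
```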
Re: [CentOS] centos 5.5 check memory usage too high???
on 05:58 Thu 03 Feb, mcclnx mcc (mcc...@yahoo.com.tw) wrote:
> We have a DELL R900 server with 128GB RAM (CentOS 5.5) in it. This
> server has only one application running and few people use it. Every
> week I get at least one or two messages from the monitoring tool mailed
> to me saying:
>
>   Message=Memory Utilization is 92.02%, crossed warning (80) or
>   critical (90) threshold.
>
> Since the server has 128 GB RAM and only 1 application, I really don't
> believe that. Is there some way to check memory utilization?

Make sure your tool is reporting utilization less cache. As others have
noted, the "-/+ buffers/cache" line in free output is what you're looking
for.

http://www.linuxatemyram.com/

If you've got sar (sysstat) installed and activated (read the manpage and
/usr/share/doc/sysstat* materials if not), numerous system resource
usages are logged every ten minutes. For memory usage, 'sar -r':

12:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached kbswpfree kbswpused  %swpused  kbswpcad
12:10:01 AM  45877732   3573788      7.23    208792   1948316   2097144         0      0.00         0
12:20:01 AM  45875572   3575948      7.23    208792   1948336   2097144         0      0.00         0
12:30:01 AM  45877120   3574400      7.23    208796   1948352   2097144         0      0.00         0
12:40:01 AM  45878352   3573168      7.23    208796   1948368   2097144         0      0.00         0
12:50:01 AM  45876080   3575440      7.23    208800   1948384   2097144         0      0.00         0
01:00:01 AM  45877244   3574276      7.23    208800   1948408   2097144         0      0.00         0

There are tools (NOT currently in CentOS) to graph/visualize these
outputs as well.

If you want per-process accounting, you can get that as well, but it'll
cost you some performance overhead and a lot of set-up.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
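If the monitoring tool can run an arbitrary check, the net-of-cache
figure is easy to compute from /proc/meminfo. A sketch that mirrors what
free's "-/+ buffers/cache" line reports:

```shell
#!/bin/sh
# Print memory utilization (%) net of buffers and page cache --
# the number worth alarming on, unlike raw "used" memory.
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buffers=$2}
     /^Cached:/   {cached=$2}
     END {printf "%.2f\n", 100 * (total-free-buffers-cached) / total}' \
    /proc/meminfo
```

Feed the result to your threshold check in place of raw utilization and
the weekly false alarms should stop.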
Re: [CentOS] how to unmount an NFS share when the NFS server is unavailable?
on 07:54 Thu 27 Jan, John Hodrien (j.h.hodr...@leeds.ac.uk) wrote:
> On Wed, 26 Jan 2011, Dr. Ed Morbius wrote:
>> I'd suggest the automount route as well (you're only open to NFS issues
>> while the filesystem is mounted), but you then have to maintain
>> automount maps and run the risk of issues with the automounter (I've
>> seen large production environments in which the OOM killer would
>> arbitrarily select processes to kill).
> Once you're into an OOM state, you're screwed anyway. Is turning off
> overcommit a sane option these days or not?

Our suggested fix was to dramatically reduce overcommit, or disable it. I
don't recall what was ultimately decided. Frankly, bouncing the box would
generally be better than letting it get into some weird wedged state (and
was what we usually ended up doing in this instance anyway). The
environment was a distributed batch-process server farm. Engineers were
disciplined to either improve memory management or request host resources
appropriately.

Now, if you were to run monit out of init, and restart critical services
as they failed, you might get around some of the borkage, but yeah,
generally, what OOM is trying to tell you is that you're Doing It
Wrong[tm].

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
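The overcommit knobs in question are sysctls. A quick sketch (the ratio
value is illustrative, not a recommendation):

```shell
#!/bin/sh
# Current overcommit policy: 0 = heuristic (the default), 1 = always
# overcommit, 2 = strict accounting -- what "disable overcommit" means.
cat /proc/sys/vm/overcommit_memory
# To tighten it (as root):
#   sysctl -w vm.overcommit_memory=2
#   sysctl -w vm.overcommit_ratio=80   # commit limit = swap + 80% of RAM
```

With strict accounting, allocations fail with ENOMEM up front instead of
the OOM killer picking victims later.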
Re: [CentOS] SSH Automatic Log-on Failure - Centos 5.5
on 10:15 Thu 27 Jan, Robert Nichols (rnicholsnos...@comcast.net) wrote:
> On 01/27/2011 01:39 AM, Nico Kadel-Garcia wrote:
>> Also, there's a stack of reasons that DSA is preferred to RSA for SSH
>> keys these days. When you generate your private keys, use "ssh-keygen
>> -t dsa", not rsa.
> Care to elaborate on that? Searching, I find mostly a stack of reasons
> for preferring RSA now that its patent has expired, e.g.:
>
> * DSA is critically dependent on the quality of your random number
>   generator. Each DSA signature requires a secret random number. If you
>   use the same number twice, or if your weak random number generator
>   allows someone to figure it out, the entire secret key is exposed.
>
> * DSA keys are exactly 1024 bits, which is quite possibly inadequate
>   today. RSA keys default to 2048 bits, and can be up to 4096 bits.
>
> Reasons for preferring DSA for signatures are less compelling:
>
> * RSA can also be used for encryption, making it possible for misguided
>   users to employ the same key for both signing and encryption.
>
> * While RSA and DSA with the same key length are believed to be just
>   about identical in difficulty to crack, a mathematical solution for
>   the DSA discrete logarithm problem would imply a solution for the RSA
>   factoring problem, whereas the reverse is not true. (A solution for
>   either problem would be HUGE news in the crypto world.)

The main argument against RSA keys was the RSA patent. It's expired. Go
RSA.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
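Following that advice, generating a long RSA key is one command. A sketch
(the file path and comment are illustrative; omit -N '' in real use so
ssh-keygen prompts for a passphrase, which you normally want):

```shell
#!/bin/sh
# Generate a 4096-bit RSA keypair non-interactively for demonstration.
rm -f /tmp/demo_rsa /tmp/demo_rsa.pub
ssh-keygen -q -t rsa -b 4096 -C "demo key" -f /tmp/demo_rsa -N ''
ls -l /tmp/demo_rsa /tmp/demo_rsa.pub
```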
Re: [CentOS] SSH Automatic Log-on Failure - Centos 5.5
on 14:50 Thu 27 Jan, Always Learning (cen...@g7.u22.net) wrote:
> On Thu, 2011-01-27 at 12:33 +0530, Indunil Jayasooriya wrote:
>> ... you expect Passwordless SSH. If so, ...
> I wanted a quick, effortless, automated log-on.

That's what ssh-agent gives you. If you invoke a command under ssh-agent,
that command (and all its children) inherit ssh-agent's environment,
which includes the SSH_AUTH_SOCK variable, pointing to the authentication
socket. Only that user (or root, and you trust root, right?) can access
this socket.

For convenience (and some risk), you can also enable agent forwarding (I
prefer doing this to a limited set of hosts or domains). This would
enable you to:

  ssh from localhost to adminbox.datacenter.example.com
  ssh from adminbox.datacenter.example.com to other hosts within the DC

Very handy if you need to run quick commands, git pulls/pushes, scp,
rsync, etc., within the DC, without having to constantly re-type your
password. Of course, the more often you type your password, the more
memorable it becomes.

>> # ssh-keygen -t rsa   (passphrase should be empty)
> Yes, I did exactly that, but following advice from this mailing list
> have changed to DSA and imposed a passphrase.

Either works. RSA has merits. A passphrase SHOULD be provided.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
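In practice the agent dance looks like this (a sketch; the key path and
hostname are illustrative):

```shell
#!/bin/sh
# Put an agent in this shell's environment; children inherit
# SSH_AUTH_SOCK and authenticate via the agent's socket.
eval "$(ssh-agent -s)"
echo "agent socket: $SSH_AUTH_SOCK"
# Load a key once (prompts for its passphrase once per agent lifetime):
#   ssh-add ~/.ssh/id_rsa
# Forward the agent on to the next hop for DC-internal hops:
#   ssh -A adminbox.datacenter.example.com
```

The per-host equivalent of -A is "ForwardAgent yes" under a Host stanza
in ~/.ssh/config, which is how you limit forwarding to trusted domains.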
Re: [CentOS] Static assignment of SCSI device names?
on 12:41 Thu 27 Jan, Chuck Munro (chu...@seafoam.net) wrote:
> Hello list members,
>
> In CentOS-5.5 I'm trying to achieve static assignment of SCSI device
> names for a bunch of RAID-60 drives on a Supermicro motherboard. The
> scsi_id command identifies all drives ok. The board has one SATA
> controller and three SAS/SATA controllers ... standard on-board ICH-10
> ATA channels, an on-board LSI SAS/SATA controller, and two add-on
> SAS/SATA controller cards. There are 13 drives in all, spread across the
> four controllers, all configured for Linux software RAID. The problem is
> in management of the drive names and figuring out which drive to pull in
> case of failure.
>
> Unfortunately the BIOS scan detects only the three drives connected to
> the ICH-10 SATA controller. That's ok because that's where the RAID-1
> boot drives are. However, when the kernel starts it assigns those drives
> last, not first. For this reason I want to use a set of udev rules to
> assign specific names to the drives plugged into specific ports (to
> maintain my sanity :-) ). Identifying drives by their ID string (which
> includes the drive's serial number) and assigning names in the rules
> works ok. BUT, what happens when I have to swap out a failed drive? The
> serial number (and possibly model number) changes, and the udev
> assignment should fail, probably assigning an unexpected /dev/sd? name.
> RAID rebuild would choke until I change the MD device assignment.
>
> Is it possible to assign SCSI drive names by hardware path instead? I
> especially want the three RAID1+spare boot drives to always be assigned
> sda/sdb/sdc, because that sorts out other issues I'm having in CentOS-5.
> In the udev rules file I tried piping the output of "scsi_id -g -i -u -s
> /block/..." through cut to extract the path, but I get no match string
> when I run udevtest against that block device. Does the PROGRAM== clause
> not recognize the pipe symbol? I tried a little shell script to provide
> the RESULT match string, but udevtest didn't like that.
>
> Is there a supported way to predictably assign a drive name according to
> the hardware port it's plugged into ... it would make swapping drives a
> lot easier, since it becomes 'drive-id-string' agnostic. Better yet, is
> there any way to tell the kernel the order in which to scan the
> controllers?
>
> I'm also hoping the problem doesn't radically change when I install
> CentOS-6 on this box. I'm using CentOS-5 just to get practice in using
> KVM and RAID-60.

Though I don't swear to understand it well, it's possible that multipath
(device-mapper-multipath) may work in your situation. I've been using it
for iSCSI storage, where it provides multipathing capabilities, including
performance improvements, HA, and persistent device naming. Whether this
applies to hotplugged SCSI devices I'm not so sure, and udev would be my
first choice. The multipath documentation is unfortunately atrocious.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
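On the hardware-path question: matching on the device's parent path
rather than its ID string should survive a drive swap. A sketch only --
the rule file name, SCSI address, and symlink name below are illustrative,
and `udevinfo -a -p /sys/block/sda` is where to look up the KERNELS
values actually available on a CentOS-5 box:

```
# /etc/udev/rules.d/29-local-disknames.rules (illustrative)
# Name whatever drive sits at host 0, channel 0, id 0, lun 0 "boot0",
# regardless of its serial number:
KERNEL=="sd*", KERNELS=="0:0:0:0", SYMLINK+="boot0%n"
```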
Re: [CentOS] How to disable screen locking system-wide?
on 13:11 Fri 21 Jan, Michael Gliwinski (michael.gliwin...@henderson-group.com) wrote:
> On Thursday 20 Jan 2011 22:26:08 Bob Eastbrook wrote:
>> On Wed, Jan 19, 2011 at 12:18 PM, m.r...@5-cent.us wrote:
>>> But the locked screensaver wants the *same* password that you log in
>>> with. I'm having trouble understanding the problem... or is it that
>>> many of the users *never* log out?
>> Yes, users will sign onto a workstation, and then disappear somewhere
>> in the building. They usually forget that they're logged on, which
>> means the workstation is unusable by anyone else for several days.
>> Restarting the X server is one solution, but it will kill any running
>> jobs.
> I'm not sure about GNOME, or if that's available in the version
> currently shipped in CentOS, but in KDE the screensaver allows you to
> switch user, i.e. leave the currently logged-on user's session running
> and start a new one for another user. That seems like a better solution
> if possible, no?

Or, so long as your graphics card doesn't kill console access, go old
school:

- Switch to console.
- Log into console.
- Launch X.

The problem here is the hanging console session, which you should kill.

Better: Institute a policy that abandoned desktop sessions are fair game
to be killed. As with hot stoves and children, the lesson would be
learned after a few experiences. Systems work should be handled remotely
via ssh (or VNC), within a screen session, or via cronjobs.

Another useful feature would be to have an auto-logoff set after a
certain amount of inactivity. This doesn't seem to be available within
GNOME, so you'd probably have to homebrew it.

-- 
Dr. Ed Morbius, Chief Scientist
Krell Power Systems Unlimited
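A hypothetical starting point for that homebrew auto-logoff, run from
cron: flag sessions `who -u` reports as idle "old" (more than 24 hours)
as candidates under the abandoned-sessions policy. A sketch only --
column positions vary a little between coreutils versions, and the PID
printed is the login process you'd kill:

```shell
#!/bin/sh
# List login sessions idle >24h ("old" in who -u's IDLE column).
who -u | awk '$5 == "old" { print "abandoned:", $1, "on", $2, "pid", $6 }'
```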
Re: [CentOS] how to unmount an NFS share when the NFS server is unavailable?
on 10:23 Wed 26 Jan, Rudi Ahlers (r...@softdux.com) wrote: Hi All, How do I unmount an NFS share when the NFS server is unavailable? I tried umount /bck but it hangs indefinitely. umount -f /bck tells me the mount is busy and I can't unmount it:

root@saturn:[~]$ umount -f /bck
umount2: Device or resource busy
umount: /bck: device is busy
umount2: Device or resource busy
umount: /bck: device is busy

This non-working NFS share is causing problems on the server and I need to unmount it until such time as the NFS server (faulty NAS) is repaired.

The specific solution is 'umount -fl dir|device'. The general solution's a little stickier. I'd suggest the automount route as well (you're only open to NFS issues while the filesystem is mounted), but you then have to maintain automount maps and run the risk of issues with the automounter (I've seen large production environments in which the OOM killer would arbitrarily select processes to kill). Monitoring of client and server NFS processes helps. If it's the filer heads which are failing, and need warrants it, look into HA failover options.

Soft mounts, as mentioned, won't hang processes, but may result in data loss. This is most critical in database operations (where atomicity is assumed and generally assured by the DBMS). If the issue is one of re-running a backup job, and you can get a clear failure, the risk is generally mitigated.

--
Dr. Ed Morbius
Chief Scientist
Krell Power Systems Unlimited
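The soft-mount tradeoff above maps onto concrete mount options. A sketch of an fstab entry -- the server name, export path, and timer values are invented examples, not recommendations:

```
# /etc/fstab -- hypothetical NFS client entry
# soft:    return an I/O error after retries instead of hanging forever
#          (this is the data-loss risk noted above)
# intr:    let signals interrupt a hung NFS wait
# timeo:   tenths of a second before a retransmit (30 = 3s)
# retrans: retransmits before a soft mount gives up
nas01:/export/backup  /bck  nfs  soft,intr,timeo=30,retrans=3  0 0
```

With a hard mount already wedged, 'umount -l /bck' detaches it lazily: the mount point is freed immediately, and the kernel cleans up once the last process holding files open goes away.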
Re: [CentOS] CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath -- slightly OT
[...] that fast-talking street-smart agents who proliferated. It is sad that the IT industry treats its early community members so callously. I don't know, but Dell seems to be headed the Sun way -- open for takeover by HP/IBM. Above imho.

Regards, Rajagopal
Re: [CentOS] CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath
on 20:07 Fri 21 Jan, Ross Walker (rswwal...@gmail.com) wrote: On Jan 21, 2011, at 7:20 PM, Edward Morbius dredmorb...@gmail.com wrote: On Fri, Jan 21, 2011 at 3:58 PM, Ross Walker rswwal...@gmail.com wrote: On Jan 21, 2011, at 6:41 PM, Edward Morbius dredmorb...@gmail.com wrote: We've been wrestling with this for ... rather longer than I'd care to admit. Host / initiator systems are a number of real and virtualized CentOS 5.5 boxes. Storage arrays / targets are Dell MD3220i storage arrays. ...

Once this is installed you need to set up dm-multipath: look for multipath.conf in /etc, get the product id and vendor id from dmesg after making an initial connection via open-iscsi, and use that in the multipath config. You're going to need to use path utility 'rdac' in the config instead of tur. Google is your friend here. -Ross

/etc/multipath.conf appears to be appropriately configured (we'd installed the MDSM host components):

device {
    vendor DELL
    product MD32xxi
    ...

AFAIK the RDAC you have installed looks correct and the config also looks good.

Thanks.

Did you start the multipath service, make a connection to each IP, and do a 'multipath -ll' to see what shows up?

Yes, and yes. We've actually run some fairly intensive disk tests (bonnie++ and a few tens of thousands of 100MB file copies of random data) with no errors across various hosts. The on-connect errors are the biggest issue we've got, though the general consensus seems to be that we can ignore these. What's moderately maddening is the lack of any clear documentation or guidance, from Dell, RH, or the upstream open-iscsi / multipath projects, on what we should be experiencing and what, if any, errors are considered normal. Think we've got a handle on it, but we're checking our sanity as well.

--
Dr. Ed Morbius
Chief Scientist
Krell Power Systems Unlimited
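For anyone landing here from a search, a stanza along these lines is what the rdac advice above amounts to. A sketch only: the handler/checker lines are what RDAC-based arrays generally want, but the specific values are assumptions -- diff against what the MDSM installer actually wrote, and note that CentOS 5's syntax (prio_callout) differs from CentOS 6's (prio):

```
# /etc/multipath.conf -- device stanza sketch for the MD32xxi family
device {
        vendor                  "DELL"
        product                 "MD32xxi"
        hardware_handler        "1 rdac"        # engage the rdac handler
        path_checker            rdac            # rdac, not the default tur
        prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
        path_grouping_policy    group_by_prio
        failback                immediate
        no_path_retry           30
}
```

After editing, restart multipathd and check 'multipath -ll': the paths should group into two priority groups (one per controller), with I/O going to the owning controller's group.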
Re: [CentOS] CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath -- slightly OT
on 23:06 Sat 22 Jan, Rajagopal Swaminathan (raju.rajs...@gmail.com) wrote: Greetings, On 1/22/11, Edward Morbius dredmorb...@gmail.com wrote: CentOS is not a Dell-supported configuration, and we've had little helpful advice from Dell. There's been some amount of FUD in that Dell don't seem to know what Dell's own software installation (the md3 Dell doesn't seem to have much OS experience generally.

+1. It is to be expected from Dell, as they outsource support to non-equal-opportunity employers who do not hire support agents beyond 40 years of age (per HR).

We actually had a very good experience earlier with server hardware (PowerEdge R610 and R410 systems). Helpful, proactive, dug up some recent tech, even provided a significant amount of replacement RAM should that have proved to be the problem (it wasn't). The storage side of the house is an entirely different animal. The issue BTW was ECC memory errors (disabling C-State in the BIOS was the fix).

--
Dr. Ed Morbius
Chief Scientist
Krell Power Systems Unlimited