Re: [CentOS] Strange Kernel for Centos 5.5
On Fri, 11 Feb 2011 at 6:38pm, Drew wrote:

>> RHEL and CentOS have much, much tighter basic privilege handling. The
>> complexity of the NTFS ACL structure, for example, is so frequently
>> mishandled that it's often ignored and everything is simply dealt with
>> as Administrator. The result is privilege escalation chaos.
>
> And how is the user-group-world permissions system any better? I work
> daily with both *nix and NTFS ACLs, and given the choice I prefer NTFS's
> for the finer-grained control.

Erm, *nix has fully functional ACLs as well. 'man setfacl'

--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] offsite encrypted backups?
On Fri, 28 Jan 2011 at 8:51pm, Eero Volotinen wrote:

> Any idea for a remote encrypted backup system? Files must be encrypted
> on the local side. Duplicity? Any better ideas?

Amanda (http://www.amanda.org/) can encrypt on the server or client side, and can use ssh authentication as well.
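Whatever tool ends up doing the transport, the "encrypted on the local side" requirement just means the ciphertext is produced before anything crosses the wire. A minimal sketch of that idea with tar and openssl (duplicity wraps the same pattern using GnuPG; the passphrase, file names, and the commented-out offsite host are placeholders):

```shell
# Encrypt locally; only ciphertext ever leaves the machine.
echo "my precious data" > precious.txt
tar -czf - precious.txt | \
  openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:CHANGEME -out backup.tgz.enc
# scp backup.tgz.enc backup@offsite.example.com:/srv/backups/   # hypothetical host

# Restore path, for completeness:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:CHANGEME -in backup.tgz.enc | tar -xzf -
```

Note that `-pbkdf2` needs OpenSSL 1.1.1 or later; on older releases drop that flag (at the cost of weaker key derivation).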
Re: [CentOS] kernel update
On Tue, 25 Jan 2011 at 12:26am, mahmoud mansy wrote:

> Well, I meant to upgrade to the RHEL kernel and its modules, libraries,
> and headers, if available. What I meant to ask is whether there is any
> issue with that, i.e. some piece of software not working with that
> module or library, etc. The main problem is that I want to take the
> RHCE, and the best suggested OS is CentOS, not Fedora, and I want to
> run it on my laptop. I tried to do so with CentOS 5.5, but there were
> so many missing pieces, like the wireless card driver and the display
> card drivers, so I am back to Fedora 14, which I am using now. I also
> want to run Oracle Database on Linux, which has some issues with
> Fedora. Such a dilemma! I think I've made a clear picture now. Any
> suggestions?

1) Run Fedora 14 on the hardware, and CentOS in a virtualization environment. Or:

2) Wait for CentOS 6.

Upgrading the kernel *can* work, but it defeats the purpose of running an enterprise OS.
Re: [CentOS] How to disable screen locking system-wide?
On Thu, 20 Jan 2011 at 11:00am, Rudi Ahlers wrote:

> It probably depends on his environment. If it's an office where people
> actually work for money and need to address client issues, then I'm
> sure your colleagues won't be pleased if you make them lose all their
> work just to be an arrogant IT manager who wants to prove a point.

*snip*

> So, in such a case I do think the OP has a valid question, and it could
> be addressed more professionally than restarting X, or even the PC,
> just to prove a point.

I was going to leave this alone, but I feel this lowers to the level of personal attacks, and I'd like to address that. Yes, my response was a bit glib (and tongue-in-cheek, which obviously didn't come across correctly). But that doesn't mean that the reasoning behind it isn't valid in some situations, and it certainly doesn't make me arrogant or unprofessional. As others have pointed out, there are industries and workplaces where any unlocked, unattended workstation is a major security risk. Please don't assume that your use case is everybody else's.

And please keep it civil. Thanks. We now return you to your regularly scheduled CentOS list programming (no pun intended).
Re: [CentOS] How to disable screen locking system-wide?
On Wed, 19 Jan 2011 at 11:44am, Bob Eastbrook wrote:

> By default, CentOS v5 requires a user's password when the system wakes
> up from the screensaver. This can be disabled by each user, but how can
> I disable this system-wide? Many of my users forget to do this, which
> results in workstations being locked up.

Ctrl-Alt-Bksp will fix that right up. I'm not a big fan of users leaving workstations unsecured when they walk away.
Re: [CentOS] How to disable screen locking system-wide?
On Wed, 19 Jan 2011 at 9:49pm, Rudi Ahlers wrote:

> On Wed, Jan 19, 2011 at 9:46 PM, Joshua Baker-LePain jl...@duke.edu wrote:
>> On Wed, 19 Jan 2011 at 11:44am, Bob Eastbrook wrote:
>>> By default, CentOS v5 requires a user's password when the system
>>> wakes up from the screensaver. This can be disabled by each user, but
>>> how can I disable this system-wide? Many of my users forget to do
>>> this, which results in workstations being locked up.
>> Ctrl-Alt-Bksp will fix that right up. I'm not a big fan of users
>> leaving workstations unsecured when they walk away.
> Don't you mean CTRL+ALT+DEL?

That'd work too, but the reboot is unnecessary. Ctrl-Alt-Bksp will just kill the X server (and thus the user's session). X will then respawn itself and restart GDM.

> I don't think the OP wanted a plaster, he wants a solution :)

One person's solution is another's giant gaping security hole.
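For anyone who does want the system-wide setting the OP asked about (security arguments aside): on the CentOS 5 GNOME desktop the lock is gnome-screensaver's lock_enabled key, and gconfd supports a mandatory configuration source that individual users can't override. A sketch from memory -- verify the key name with `gconftool-2 -R /apps/gnome-screensaver` on your own box, and note this does nothing for KDE or plain xscreensaver sessions:

```shell
# Force the screensaver lock off for all users; the mandatory source
# takes precedence over per-user preferences.
gconftool-2 --direct \
    --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
    --type bool --set /apps/gnome-screensaver/lock_enabled false
```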
Re: [CentOS] ext4 or XFS
On Tue, 11 Jan 2011 at 1:49pm, Digimer wrote:

> On 01/11/2011 01:47 PM, aurfal...@gmail.com wrote:
>> Hi all, I've a 30TB hardware based RAID array. Wondering what you all
>> thought of using ext4 over XFS. I've been a big XFS fan for years as
>> I'm an Irix transplant, but would like your opinions. This 30TB drive
>> will be an NFS exported asset for my users, housing home dirs and
>> other frequently accessed files.
> You will need XFS for a single partition that large. You won't be able
> to make such a large ext4 partition, I don't think.

This is correct. While ext4 theoretically supports volumes (much) larger than 16TB, the developers don't think it's production ready yet, and the userspace tools don't support it yet. So, short answer: XFS is the only way to go.
Re: [CentOS] ext4 or XFS
On Tue, 11 Jan 2011 at 11:12am, aurfal...@gmail.com wrote:

> My RAID has a stripe size of 32KB and a block size of 512 bytes. I've
> usually just done blind XFS formats, but would like to tune it for
> smaller files. Of course big/small is relative, but in my env, small
> means sub-300MB or so. What would your XFS tuning params be for such an
> env?

It's been a long while since I've done tuned XFS formats. But you also need to consider how many disks are in the array and what RAID level you're using.
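To make the disk-count point concrete: mkfs.xfs expresses stripe geometry as su (the controller's per-disk chunk size) and sw (the number of data-bearing disks). A sketch assuming the 32KB chunk size above on, say, a 12-disk RAID6 -- the disk count and device name are illustrative, not from the thread:

```shell
# RAID6 of 12 disks -> 10 data-bearing disks; su matches the controller's
# chunk (stripe unit) size so XFS can align allocations to the stripe.
mkfs.xfs -d su=32k,sw=10 /dev/sdb1
```

Getting su*sw to match the array's full stripe width matters more for large streaming writes than for the sub-300MB files mentioned; for lots of smaller files, alignment of the allocation groups to the stripe is the main win.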
Re: [CentOS] centos6 filesystem size limit
On Sun, 2 Jan 2011 at 1:45pm, Robert Arkiletian wrote:

> I just read the rhel6 filesystem size limit:
> http://www.redhat.com/rhel/compare/
> It says 16TB limit for ext4 (same as ext3)?!?! I thought ext4 was
> supposed to support a 1EB (~1 million TB) limit. That was one of the
> main advantages of rhel6. After a little more digging, all I found was
> that the userspace formatting tools (mkfs.ext4) only support 32-bit
> filesystems (not 48-bit). I'm surprised about this; I thought people
> would be waiting for larger-than-16TB support in rhel6. Does anyone
> know if this is going to change in point releases of rhel/centos6?

I did some googling on this recently, and I found that while ext4 theoretically supports such large filesystems (and you can find/compile userspace tools to create them), the developers don't really recommend using it for such yet. It's just not deemed ready for production. I saw multiple recommendations (including from RH devs) to use XFS if you need filesystems that big.
Re: [CentOS] networking printer
On Fri, 3 Dec 2010 at 1:46pm, Robert Heller wrote:

> At Fri, 3 Dec 2010 10:35:14 -0800 (PST), CentOS mailing list
> centos@centos.org wrote:
>> Hi all, I would like to ask: can Linux share a network printer to an
>> office environment, the way Windows does?
> Yes. You need to install and set up Samba.

Most (and all recent) versions of Windows will print to IPP printers just fine, so there's actually no need for Samba. Standard CUPS is all that's needed.
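Concretely, the Windows clients just add a network printer by URL, and cupsd has to be told to listen beyond localhost and accept connections from the LAN. The server name, queue name, and subnet below are placeholders:

```
On the Windows side (Add Printer -> network printer, enter the URL):

    http://printserver.example.com:631/printers/officejet

/etc/cups/cupsd.conf fragment on the CentOS side:

    Listen 631
    <Location />
      Order allow,deny
      Allow from 192.168.1.0/24
    </Location>
```

Restart cups after editing, and check with a browser that http://printserver.example.com:631/ is reachable from a client machine.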
Re: [CentOS] Novell sale news?
On Wed, 24 Nov 2010 at 10:00am, m.r...@5-cent.us wrote:

> Now I can't resist the old quote: "Someone, somewhere on usenet, posted
> something that was ... wrong."

http://xkcd.com/386/
Re: [CentOS] xServes are dead ;-( / SAN Question
On Mon, 8 Nov 2010 at 9:36pm, Nicolas Ross wrote:

> Thanks for the suggestions (others also), but I don't believe it'll do.
> We need to be able to access the file system directly via FC so we can
> lock files across systems. Pretty much like Xsan, but not on Apple.
> Xsan is really StorNext from Quantum, but at half the price per node.
> So, we are searching for an alternative to Xsan on Linux. For those who
> don't know Xsan: you can access a fibre-channel volume directly and
> simultaneously among many client computers or servers. Access, locking,
> and other tasks are handled by a metadata controller which is
> responsible for keeping things together. No controller, no volume;
> hence a failover controller is needed.

Have you looked at Red Hat's GFS? That seems to fit at least a portion of your needs (I don't use it, so I don't know all that it does).
Re: [CentOS] who uses Lustre in production with virtual machines?
On Tue, 3 Aug 2010 at 3:45pm, Lars Hecking wrote:

> Emmanuel Noobadmin writes:
>> I haven't used Lustre but was also researching using it for the same
>> purpose, as shared storage for VMs. I dropped it from consideration in
>> the end after some discussion on the Lustre mailing list pointed out
>> that it's more intended for high performance than high availability.
>> So it might not be that suitable as an HA solution. Have you
>> considered trying Gluster instead?
> What do Gluster or Lustre offer that the builtin Red Hat Cluster Suite
> does not?

One does not need shared storage for Gluster. Each storage brick has its own storage, and Gluster handles replication/distribution across the nodes. Also, according to RH's site, RHCS is limited to 16 nodes. Gluster has no such limit.
Re: [CentOS] who uses Lustre in production with virtual machines?
On Tue, 3 Aug 2010 at 6:11pm, Rudi Ahlers wrote:

> On Tue, Aug 3, 2010 at 5:13 PM, Emmanuel Noobadmin
> centos.ad...@gmail.com wrote:
>> From what I understand, I cannot do the equivalent of network RAID 1
>> with a normal DRBD/HB style cluster. Gluster with replicate appears to
>> do exactly that. I can have 2 or more storage servers with real-time
>> duplicates of the same data, so that if any one fails the cluster does
>> not run into problems. By using gluster distribute over pairs of
>> servers, it seems that I can also easily add more storage by adding
>> more pairs of replicate servers.
> I'm thinking more in the lines of network RAID10, if it's possible?

Yes, you can do that with Gluster. That's the standard config produced by gluster-volgen if you feed it more than 2 volumes.
Re: [CentOS] who uses Lustre in production with virtual machines?
On Tue, 3 Aug 2010 at 10:04pm, Rudi Ahlers wrote:

> On Tue, Aug 3, 2010 at 9:16 PM, Emmanuel Noobadmin
> centos.ad...@gmail.com wrote:
>> One of the problems with Lustre's style of distributed storage, which
>> Gluster points out, is that the bottleneck is the metadata server
>> which tells clients where to find the actual data. Gluster supposedly
>> scales with every client machine added because it doesn't use a
>> metadata server; file locations are determined using some kind of
>> computed hash.
> But who uses Gluster in a production environment then? I have seen
> fewer posts (both on forums and mailing lists) about Gluster than
> Lustre.

I just finished testing a Gluster setup using some of my compute nodes. Based on those results, I'll be ordering 8 storage bricks (25 drives each) to start my storage cluster. I'll be using Gluster to a) replicate frequently used data (e.g. biologic databases) across the whole storage cluster and b) provide a global scratch space. The clients will be the 570 (and growing) nodes of my HPC cluster, and Gluster will be helping to take some of the load off my overloaded NetApp.

They also have a page on their website listing self-reported users: http://www.gluster.org/gluster-users/.
Re: [CentOS] who uses Lustre in production with virtual machines?
On Tue, 3 Aug 2010 at 10:26pm, Rudi Ahlers wrote:

> Thanx for the feedback. This is what I hoped to get from someone
> running Lustre :) But I guess I'll look at Gluster instead.

You may want to head over to the Beowulf mailing list -- you've probably got a higher probability of finding Lustre users there.
Re: [CentOS] boot process glitch due to missing 2nd disk
On Tue, 20 Jul 2010 at 11:34am, Dave wrote:

> Thanks for all the discussion, but the keyboard is not the issue. I
> guess I should edit the BIOS settings and look for a way to tell it
> "hey, you've only got one disk now, be happy."

All Dell desktops I've dealt with (including the Precision T3400 I use now) require you to go into the BIOS and explicitly tell it which buses (IDE before, SATA now) have disks attached to them. If you don't tell it about a disk you do have, the disk won't appear to the OS. And if you do tell it about a disk you don't have, then the boot will hang complaining about a missing disk. It's asinine, and I've never seen any other BIOS like it. But it's consistent. I've never dealt with Dell's server hardware, so I have no idea if they do the same thing there (dear God I hope not). In any case, yes, you must go into the BIOS and explicitly enumerate your disks there.
Re: [CentOS] OT: ?? Centos Still Broken, Red Hat won't fix ??
On Fri, 9 Jul 2010 at 8:32am, Seth Bardash wrote:

> My intent was to inform and hear from people that had similar issues,
> and to learn what they might have done to work around them. Not to
> cause a debate on business practices, criticize Red Hat, or inflame the
> CentOS community.

I appreciate your clarification. I think what got folks up in arms was an impression from your original email (from, e.g., the all-caps bits and implications of them having gotten too big) that you were indeed criticizing RH. And I'd still be interested (as in, genuinely curious, not skeptical) to hear what sorts of applications benefit from optimized kernels (HPC? I/O intense?) and what kind of performance increases one can get.
Re: [CentOS] OT: ?? Centos Still Broken, Red Hat won't fix ??
On Thu, 8 Jul 2010 at 3:39pm, Seth Bardash wrote:

> I am beginning to wonder if Red Hat is getting too big? Or that it just
> does not care. Other ideas less pleasant come to mind. Today, the old
> bug was still marked as new (6+ months and counting). I entered a new
> bug report for RH 5.5 for the same issue. Is there no way, unless you
> are a huge customer, to get your bug listed as anything except LOW
> PRIORITY??

It has been stated many times and on many fora that Red Hat's bugzilla is not a mechanism for support. They are under no obligation to address issues raised there. Is it nice when they do? Absolutely. Should you expect (nay, demand) it? Nope. The proper way to get Red Hat to address an issue is to open a ticket via your support contract with them.

> Now we are looking at the AMD G34 CPUs and are building some demo
> units. I think it's time to benchmark these systems with the working,
> non-optimized Red Hat / CentOS Linux versus the optimized OpenSUSE /
> SLES Linux for standard server functions, and publish the results.

While that may be interesting as a comparison of distributions, I think it would do little to evaluate the benefit of the kernel CPU optimizations. There are just too many other variables. I would be very interested to see numbers comparing the exact same Red Hat distribution benchmarked with and without the kernel optimizations (you said 5.3 worked just fine). Do you have previous numbers on that showing a marked benefit?
Re: [CentOS] OT: ?? Centos Still Broken, Red Hat won't fix ??
On Thu, 8 Jul 2010 at 8:16pm, Whit Blauvelt wrote:

> On Thu, Jul 08, 2010 at 06:35:47PM -0400, Joshua Baker-LePain wrote:
>> It has been stated many times and on many fora that Red Hat's bugzilla
>> is not a mechanism for support. They are under no obligation to
>> address issues raised there. Is it nice when they do? Absolutely.
> There are two issues you're conflating here. The first, paramount one
> is: is Red Hat taking responsibility for bugs people have taken the
> effort to accurately report to them? This is a measure of any software
> project, totally separate from the issue of whether and for what the
> project leads provide paid support. In particular, if they are
> marketing this software to anyone -- even if the person kind enough to
> report the bug is not a paying customer -- they have a responsibility
> _to their paying customers_ to resolve all serious bugs in a timely
> manner, or at least to indicate in their bugzilla why they are
> rejecting fixing them.

To be clear here, the bug in question is not present in any binaries that Red Hat ships. None of their paying customers will ever experience this bug while running in a supported configuration. It's a case of "you broke it, you get to keep the pieces."
Re: [CentOS] Ganglia
On Thu, 17 Jun 2010 at 6:51pm, John R. Dennison wrote:

> On Thu, Jun 17, 2010 at 06:20:03PM -0400, Whit Blauvelt wrote:
>> - best compiled from source; there are big dependency problems with
>> the available rpms
> Very few packages are ever best compiled from source on an enterprise
> distro. What, specifically, is wrong with the 3.0.7 in EPEL?

Well, if you have more than 4TB of RAM in your grid, the memory graph wraps. :) Other than that, though, it works wonderfully. That being said, it's trivial to recompile the F13 RPM of 3.1.2 for centos-5.
Re: [CentOS] Motherboards for HPC applications
On Tue, 9 Mar 2010 at 9:49pm, Chan Chung Hang Christopher wrote:

> If cpu processing power is the sole criterion, then why limit to
> dual-socket boards and not go for quad-socket boards?

In general, the price goes up non-linearly as you go above 2 sockets, making 2 sockets the sweet spot when it comes to price/performance.
Re: [CentOS] kexec for CentOS 4?
On Tue, 2 Mar 2010 at 10:10am, Tony Mountifield wrote:

> I have a remote CentOS 4 machine on a network where I can't put a DHCP
> or PXE server, and I want to do a complete reinstall. So what I want to
> do is, from the currently-running system, invoke an installation kernel
> and initrd in just the same way that GRUB would, giving it a boot
> command line that specifies a remote kickstart file, installation tree,
> and other required info.

This is simple. Grab the vmlinuz and initrd.img files from the pxeboot directory of the repo you want to install from. Put those in /boot on the server in question.

From there, there are a couple of ways you can go. The easiest is to actually put the ks.cfg on the server itself. Then you can add a stanza like the following (you'll need to tailor all the hard drive references to your own setup, of course) to your grub.conf:

title reinstall
        root (hd0,0)
        kernel /boot/vmlinuz ks=hd:sda1:/ks.cfg ksdevice=eth0
        initrd /boot/initrd.img

Make that entry the default, reboot, and your kickstart will start. Obviously all of your network info needs to be specified in the ks.cfg file.

If you want to grab the ks.cfg from a remote server, that can be done too, but you'll need to specify the network config options on the kernel line above. I don't have the exact syntax handy, but it's all documented. Install the anaconda package and look in /usr/share/doc/anaconda-$VERSION/command-line.txt, and you'll see all the options you can pass to the install kernel. On CentOS-5 installs I always use noipv6, since it seems to make things go much faster.

For a one-off like this, installing cobbler is a bit (read: a lot) of overkill.
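For the remote-ks.cfg variant, the relevant boot options are documented in that same command-line.txt. From memory -- so treat this as a sketch and double-check against the file; the URL and addresses are placeholders -- the stanza looks like:

```
title reinstall
        root (hd0,0)
        kernel /boot/vmlinuz ks=http://kickstart.example.com/ks.cfg ksdevice=eth0 ip=10.0.0.50 netmask=255.255.255.0 gateway=10.0.0.1 dns=10.0.0.2 noipv6
        initrd /boot/initrd.img
```

Note the kernel line must stay a single line in grub.conf (GRUB legacy has no line continuation), and with a static ip= the installer won't need the DHCP server that can't be put on that network.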
Re: [CentOS] creating partitions on a 2.7TB drive
On Tue, 23 Feb 2010 at 3:38pm, Khusro Jaleel wrote:

> Now, after a few months, I forgot all about the Ubuntu LiveCD and tried
> to set up server B using the CentOS 5.3 x86_64 CD. However, the
> installer immediately complained that this disk is using a GPT
> partition table and this computer cannot boot using GPT, and it keeps
> saying this no matter what I do. I've tried creating a separate /boot
> partition, using LVMs for everything, etc., but nothing works. Even dd
> did not give me much luck, although perhaps I should try deleting the
> end of the disk rather than the beginning?

There are 2 facts at play here:

1) Any device larger than 2TB must use a GPT disklabel.

2) You cannot boot (with this BIOS-based hardware) from a device with a GPT disklabel.

None of the tricks you mention above will work. What you need to do is use the RAID card BIOS to divide the array into multiple devices. Most decent RAID cards will either auto-carve arrays into 2TB chunks or let you create a small boot drive. The latter is preferable, IMO. If your RAID card doesn't offer such an option, then you'll need to either remove some disks from the array to use as boot drives or add more drives to the system.

> The additional mystery is that if I check server A, which I partitioned
> a few months ago using Ubuntu, the label type is msdos!! How is that
> possible? In addition, if I use the CentOS CD and try to use parted on
> server A now, it gives the following error:

Weird things happen when trying to boot from GPT-labeled devices, including all sorts of data-loss scenarios.
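Once the RAID BIOS presents a small boot device and a separate large data device, the big one gets its GPT label from parted. A sketch -- the device names are illustrative, and mklabel destroys any existing partition table on the target:

```shell
# /dev/sda is the small (msdos-labeled) boot drive; /dev/sdb is the >2TB
# data device, which must carry a GPT label.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb print          # should report "Partition Table: gpt"
```

With that split, the installer boots happily from the msdos-labeled drive and the GPT-labeled data device never has to be bootable.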
Re: [CentOS] creating partitions on a 2.7TB drive
On Tue, 23 Feb 2010 at 4:11pm, Khusro Jaleel wrote:

> straight away. I understand what you guys are saying about GPT and not
> being able to boot off it, etc., but how did I end up in this
> situation?

There's an old saying that Unix gives you enough rope to hang yourself with...

> And is this dangerous?

Yes. Absolutely yes. One day you'll reboot and your partition table (and all your data) will be gone and unrecoverable. Trust me.

> I am thinking that if this is possible, why not try and set up the
> second server the same way? But it just feels wrong that Ubuntu allows
> this, and if CentOS does not, there must be a good reason.

And that reason is that it *will* die horribly and eat your data. Set up the small logical drive in the RAID BIOS as another poster detailed so nicely. Now. Before now.
Re: [CentOS] dm-crypt/LUKS the state of the art for block device encryption?
On Tue, 2 Feb 2010 at 12:00pm, Robert P. J. Day wrote:

> It's been a while since I've played with filesystem encryption, so: on
> CentOS 5.4 (and other Linux distros), is dm-crypt/LUKS considered to be
> the state of the art WRT block device encryption? I remember other
> solutions like loop-aes and others, but what's considered the gold
> standard these days?

dm-crypt/LUKS is what the installer in Fedora sets up these days, so I'd say it's still the standard solution.
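For reference, the basic dm-crypt/LUKS life cycle is all driven by the cryptsetup package. A sketch -- the device and mapping names are illustrative, and luksFormat irrevocably destroys whatever is on the device:

```shell
cryptsetup luksFormat /dev/sdb1           # write the LUKS header, set a passphrase
cryptsetup luksOpen /dev/sdb1 securedata  # cleartext view at /dev/mapper/securedata
mkfs.ext3 /dev/mapper/securedata          # filesystem goes on the mapped device
mount /dev/mapper/securedata /mnt/secure

# ...use it...

umount /mnt/secure
cryptsetup luksClose securedata           # ciphertext at rest again
```

One advantage of LUKS over older loop-aes style setups is that the on-disk header holds the cipher parameters and multiple key slots, so passphrases can be added or revoked without re-encrypting.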
Re: [CentOS] Mplayer and VDPAU
On Wed, 6 Jan 2010 at 3:01pm, fred smith wrote:

> I've looked on the mplayer web site and it says you can use vdpau, but
> it doesn't say HOW. Would that be with something like -vo vdpau?

I have the following in ~/.mplayer/config:

vo=vdpau
vc=ffmpeg12vdpau,ffh264vdpau,ffvc1vdpau,ffwmv3vdpau,

It forces mplayer to try the vdpau-accelerated codecs first and then, if none of them work, to select a non-vdpau one that does. Whether or not that's the *best* way, I don't know, but it does work.
Re: [CentOS] Installing R on CentOS 5
On Mon, 7 Dec 2009 at 6:45pm, Diederick Stoffers wrote:

> Has anyone been able to successfully install R on CentOS 5.4? I am
> having problems with dependencies; perl is installed.

I use the packages from EPEL without a problem.
Re: [CentOS] simple NFSv4 setup
On Wed, 18 Nov 2009 at 5:05pm, Joshua Baker-LePain wrote:

> I'm trying to set up a simple NFSv4 mount between two x86_64 hosts. On
> the server, I have this in /etc/exports:
>
> /export      $CLIENT(ro,fsid=0)
> /export/qb3  $CLIENT(rw,nohide)
>
> On $CLIENT, I mount via:
>
> mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
>
> However:
>
> $ touch /usr/local/sge62/qb3/foo
> touch: cannot touch `/usr/local/sge62/qb3/foo': Read-only file system
>
> I'd really rather not export the pseudo-root read-write, so how do I
> get this working? Any hints would be appreciated -- thanks.

For the archives, my issue was that qb3 was a plain directory on the /export filesystem. I instead mounted a filesystem at /export/qb3, and then the above setup worked.
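Spelled out, the working layout has an actual filesystem sitting at the export point rather than a plain directory on the read-only pseudo-root filesystem. A sketch of the server side -- the device name is illustrative:

```
/etc/fstab on the server -- mount a real filesystem at /export/qb3:

    /dev/vg0/qb3    /export/qb3    ext3    defaults    0 0

/etc/exports, unchanged from the original post:

    /export         $CLIENT(ro,fsid=0)
    /export/qb3     $CLIENT(rw,nohide)
```

With /export/qb3 on its own filesystem, the rw export line applies to it independently, and the ro on the fsid=0 pseudo-root no longer bleeds through to the client's mount.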
Re: [CentOS] simple NFSv4 setup
On Thu, 19 Nov 2009 at 9:21am, Kwan Lowe wrote:

> Not sure if this applies in your NFS4 setup, but most of my NFS
> permissions errors have stemmed from user ID mismatches on the host
> server. My NFS4 mounts are not using any true NFS4 features, however.

My problem isn't a permissions issue -- it's the fact that the mount on the client is read-only. And NFS4 doesn't rely on numerical UID/GID matching anymore; it uses the username string (via rpc.idmapd). In any case, both the usernames and UIDs/GIDs match on these two systems.
[CentOS] simple NFSv4 setup
I'm trying to set up a simple NFSv4 mount between two x86_64 hosts. On the server, I have this in /etc/exports:

/export      $CLIENT(ro,fsid=0)
/export/qb3  $CLIENT(rw,nohide)

On $CLIENT, I mount via:

mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3

However:

$ touch /usr/local/sge62/qb3/foo
touch: cannot touch `/usr/local/sge62/qb3/foo': Read-only file system

I'd really rather not export the pseudo-root read-write, so how do I get this working? Any hints would be appreciated -- thanks.
Re: [CentOS] simple NFSv4 setup
On Wed, 18 Nov 2009 at 4:05pm, Tim Nelson wrote:

> ----- Joshua Baker-LePain jl...@duke.edu wrote:
>> /export      $CLIENT(ro,fsid=0)
>> /export/qb3  $CLIENT(rw,nohide)
>
> Your export: /export/qb3 $CLIENT(rw,nohide)
> And your mount: mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
> The remote path is wrong. Either that's a typo, or it could be the
> cause of your problem?

No, that's how NFSv4 mounts work -- the path is relative to the pseudo-root (the fsid=0 entry) on the server. And the mount succeeds. But it's a read-only mount where it should be rw.
Re: [CentOS] Caught between a Red Hat and a CentOS
On Tue, 20 Oct 2009 at 11:47am, Joseph L. Casale wrote:

>> I can't believe I'm jumping into this thread
> This useless thread will never end. FOSS guys have their sh!t in a knot
> over MS, for reasons of which I have my own opinions.

I wonder what those opinions are. One of the main reasons *I* am no fan of MS is their clear subversion of standards for their own ends. Exchange, e.g., has a *horrible* IMAP implementation which they have point blank admitted they have no intention of fixing. They, of course, want you to use their proprietary mail client. And then there was the whole ODF fiasco.

I hear they make good mice, though...
Re: [CentOS] Setting up large (12.5 TB) filesystem howto?
On Fri, 28 Aug 2009 at 1:03pm, Götz Reinicke - IT-Koordinator wrote:

> fdisk and parted fail to create any information on the device or fail
> completely.

You can't use fdisk on a volume that large. parted should work fine. What was the error you were getting (exactly)? For a volume that large, you must use a GPT disk label, not the default msdos one.

> But I can't create a filesystem on it:
>
> mkfs.ext3 -m 2 -j -O dir_index -v -b 4096 -L iscsi2lvol0 /dev/mapper/VolGroup02-lvol0
> mke2fs 1.39 (29-May-2006)
> mkfs.ext3: Filesystem too large. No more than 2**31-1 blocks (8TB using
> a blocksize of 4k) are currently supported.

As has been pointed out, you need to use -F to force mkfs.ext3 to make a filesystem bigger than 8TB. IMHO, this error message is misleading. Filesystems up to 16TB are fully supported as of CentOS 5.1, so I don't see why the upstream vendor left the requirement for -F in mkfs.ext3.

> So my question: what is my misunderstanding, or what's wrong with my
> system? Where are the real limits? Do I have to switch the OS to
> 64-bit?

You do not have to switch to 64-bit, and your setup should be fully supported. Other folks have mentioned XFS, and that's an option. But if you want to stay fully compatible with upstream, then ext3 is your only option.
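In other words, the same command with -F added gets past the 8TB check (on 5.1 or later, where 16TB ext3 is supported):

```shell
# -F forces mke2fs past its overly conservative 2**31-1 block check;
# everything else is unchanged from the command in the quoted post.
mkfs.ext3 -F -m 2 -j -O dir_index -v -b 4096 -L iscsi2lvol0 \
    /dev/mapper/VolGroup02-lvol0
```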
Re: [CentOS] Setting up large (12.5 TB) filesystem howto?
On Fri, 28 Aug 2009 at 8:58am, Akemi Yagi wrote:

> On Fri, Aug 28, 2009 at 8:36 AM, Joshua Baker-LePain jl...@duke.edu wrote:
>> You do not have to switch to 64-bit, and your setup should be fully
>> supported. Other folks have mentioned XFS, and that's an option. But
>> if you want to stay fully compatible with upstream, then ext3 is your
>> only option.
> Support for xfs has been added to RHEL 5.4, which will be released any
> day now.

So it has. I recall looking in the beta release notes when they first came out and not seeing it. So either I just plain missed it or it's been added there since then. In any case, that's great news, and something that is *long* overdue.
Re: [CentOS] Centos 5.3, no AHCI on HP DL320 G5p?
On Mon, 27 Jul 2009 at 1:19pm, Veiko Kukk wrote I'm not sure for this particular model server, but normally this is a *BIOS* setting for the SATA controller. There are no settings in BIOS for AHCI mode, it's only possible to choose between raid and sata controller mode, i have chosen sata mode. Check to see if there's a BIOS update on HP's site. I had some DL160s with an old BIOS with no option for AHCI mode. After upgrading to the most recent BIOS, the option was there. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] OT - Sansa Fuze
On Fri, 17 Jul 2009 at 3:13pm, Ed Donahue wrote Anyone have experience with using a Sansa Fuze mp3/ogg player on CentOS 5.x? As long as a player supports mass storage mode (which the Fuze claims to), then it'll work rather easily in any vaguely modern Linux. I'm looking for a player that plays vorbis and isn't a M$/DRM/Apple slave and this one looks like a good one to buy. If you want to watch video as well, the Cowon S9 is a great choice -- the AMOLED screen is utterly gorgeous. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Can't play DVD movies on CentOS 5.3 after following guidance on the wiki
On Wed, 3 Jun 2009 at 12:07pm, MHR wrote I like xine for most DVD playing - as long as it recognizes the DVD, I have no trouble with it at all. It also has a feature that mplayer lacks - turning off the screen saver while the movie is playing (which also has its drawbacks...). Erm, what? $ man mplayer [...] -stop-xscreensaver (X11 only) Turns off xscreensaver at startup and turns it on again on exit. If your screensaver supports neither the XSS nor XResetScreenSaver API, please use -heartbeat-cmd instead. That being said, that flag didn't work for me with gnome-screensaver (it works just fine with xscreensaver). The referenced heartbeat-cmd flag and the example included in the manpage work just fine, though. Any of those options can be included in one's ~/.mplayer/config file to be automatically used during any mplayer session. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
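For anyone tired of retyping those flags, they can live in mplayer's per-user config file; option names there drop the leading dash. A sketch (the gnome-screensaver command is my guess at the manpage's example -- check `man mplayer` on your build):

```
# ~/.mplayer/config
stop-xscreensaver=yes
# fallback for screensavers that ignore the XSS API:
heartbeat-cmd="gnome-screensaver-command --poke"
```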
Re: [CentOS] Can't play DVD movies on CentOS 5.3 after following guidance on the wiki
On Wed, 3 Jun 2009 at 12:26pm, MHR wrote On Wed, Jun 3, 2009 at 12:14 PM, Joshua Baker-LePain jl...@duke.edu wrote: Erm, what? $ man mplayer [...] -stop-xscreensaver (X11 only) Turns off xscreensaver at startup and turns it on again on exit. If your screensaver supports neither the XSS nor XResetScreenSaver API, please use -heartbeat-cmd instead. Hey, gimme a break! It's a LONG man page Heh, that it is. Also, thanks! No problem. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Can't play DVD movies on CentOS 5.3 after following guidance on the wiki
On Wed, 3 Jun 2009 at 12:44pm, MHR wrote Erm, what? -stop-xscreensaver (X11 only) Turns off xscreensaver at startup and turns it on again on exit. These are the only two appearances of the word screensaver on the man page. There is no reference to a heartbeat-cmd (CentOS 5.3) What version of mplayer are you using (and from what repo)? The CentOS version doesn't help too much, as mplayer isn't included in any of the default repos. And this doesn't appear to be a very recent feature (googling reveals references to it that are over a year old). -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Can't play DVD movies on CentOS 5.3 after following guidance on the wiki
On Wed, 3 Jun 2009 at 1:06pm, MHR wrote On Wed, Jun 3, 2009 at 12:53 PM, Joshua Baker-LePain jl...@duke.edu wrote: What version of mplayer are you using (and from what repo)? The CentOS version doesn't help too much, as mplayer isn't included in any of the default repos. And this doesn't appear to be a very recent feature (googling reveals references to it that are over a year old). You made me look, and I found that I was missing a couple of packages. I now have: mplayer-fonts-1.1-3.0.rf.noarch mplayerplug-in-3.55-1.el5.rf.x86_64 mplayer-skins-1.8-1.nodist.rf.noarch mplayer-1.0-0.40.rc1try2.el5.rf.x86_64 mplayerplug-in-3.55-1.el5.rf.i386 mplayer-docs-1.0-0.40.rc1try2.el5.rf.x86_64 These are the most recent imports from rpmforge. That may be, but that's actually a *very* old version of mplayer. The most recent release of mplayer is 1.0rc2, and *that's* dated 10/7/07 (see http://www.mplayerhq.hu/MPlayer/releases/ChangeLog). Most folks run SVN snapshots of mplayer -- rpmfusion's package for Fedora, e.g., is a SVN snapshot from 9/3/08. And even that is too old for a lot of things. For my HTPC, e.g., I compiled mplayer from my own SVN checkout so I could use VDPAU, which only got added within the last few months. BUT: I looked again, and the man page remains the same I even checked the mplayer home site, and neither option is mentioned in their documentation. I see -heartbeat-cmd on http://www.mplayerhq.hu/DOCS/man/en/mplayer.1.html. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] 32bit vs 64bit memory usage
On Thu, 21 May 2009 at 4:59pm, Robert Heller wrote No, you are not wrong. All x86 flavored 64-bit processors will run as 32-bit (i686) processors and when running in 32-bit mode are effectively just an i686 as far as any 32-bit program can tell. There is no reason NOT to just install a straight 32-bit OS on such a machine if there is less than 4gig of virtual memory and none of the programs being run have any reason to use the 64-bit address space. Web hosting That's not strictly true. On some x86_64 chips, there are extra registers which are only available when running in 64-bit mode. Running without those registers can hamper performance, even if the program isn't using the larger address space. This can make a big difference, e.g., in the HPC space. Web hosting, yeah, probably not so much. But just saying 64bit iff >4GB RAM doesn't tell the whole story. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Preventing hour-long fsck on ext3-filesystem
On Thu, May 14, 2009 at 2:03 PM, Scott Silva ssi...@sgvwater.com wrote: on 5-14-2009 1:24 PM Pasi spake the following: It seems XFS might be added as a default to RHEL 5.4.. Probably not a default, but an option. I wonder which high-end customer *finally* drove them to do this (if, indeed, they are going to). Us regular folks have been agitating for this for ages, but we were always told that ext3 was just fine and why would we need anything else. Somebody with $$ must have told them in no uncertain terms XFS or we're outta' here. -- Joshua conspiracy theorist for a day Baker-LePain Department of Biomedical Engineering Duke University ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] kickstart question
On Mon, 4 May 2009 at 1:28pm, Jerry Geis wrote I do not have package @mysql in the list - yet after install rpm -qa | grep -i mysql reports mysql loaded. how can I stop mysql from loading from anaconda? Type 'yum remove mysql' and see what depends on it. I'd guess something in the gnome-desktop group is bringing it in as a dependency. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] kickstart question
On Mon, 4 May 2009 at 1:38pm, Jerry Geis wrote I do not have package @mysql in the list - yet after install rpm -qa | grep -i mysql reports mysql loaded. How can I stop mysql from loading from anaconda? Type 'yum remove mysql' and see what depends on it. I'd guess something in the gnome-desktop group is bringing it in as a dependency. I get dovecot and mysqlclient ?? Keep going up the tree -- try 'yum remove'ing those and see how much gets ripped out. If you're ok with it, then put those packages at the bottom of the %packages section in your ks.cfg with - signs in front of them. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
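For reference, a kickstart exclusion list looks like the fragment below. The group names are illustrative -- plug in whatever your own 'yum remove' experiment turned up:

```
%packages
@ base
@ gnome-desktop
# exclusions go at the bottom, one per line, prefixed with '-':
-dovecot
-mysql
```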
Re: [CentOS] [OT] Godaddy hell...
On Fri, 3 Apr 2009 at 2:26pm, Scott Silva wrote on 4-3-2009 10:16 AM David G. Miller spake the following: When my oldest brother was living in upstate New York his employer gave him a temporary assignment in Plymouth, England. One of the neighbors commented, Won't that be a long drive? Like the comedian Jeff Foxworthy says, Here's your sign! Close but no cigar -- that would be Bill Engvall. -- Joshua why do I know that? Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] running yum from userid
On Fri, 13 Mar 2009 at 4:57pm, Robert Moskowitz wrote I added via visudo my userid for authorization of me ALL=(ALL) NOPASSWD: ALL and I still cannot run yum as me. Is this just not possible? What happens when you run 'sudo yum <command>'? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] CentOS 5 for IA64
On Fri, 6 Mar 2009 at 12:55am, Rainer Duffner wrote Am 05.03.2009 um 23:06 schrieb Nigel Kendrick: Can anyone with a well-connected crystal ball suggest a timeframe for an IA64 release of CentOS 5? Weeks? Months? Never? If you can afford an IA64-box, you can also afford the RHEL licence ;-) Not to pick nits, but not everyone buys their IA64 hardware. Some inherit it, some have it donated, etc. It's not necessarily an indication of great wealth. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] smartd and 3ware 9xxx configs
On Tue, 10 Feb 2009 at 9:42pm, Jim Perrin wrote I'm looking to do a bit more monitoring of my 3ware 9550 with smartd, and wanted to see what others were doing with smart for monitoring 3ware hardware. Do you have the smartd.conf configured to test, or simply monitor health status? Are you monitoring the drive as centos sees it (/dev/sdX) or are you using the 3ware /dev/twaX for monitoring? Opinions and discussions are welcome :-P Have you thought about tying tw_cli into nagios? That's one of my round-tuit projects. I'm sure there are already plugins for it, and it seems like you may get better info. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] More than 2TB RAID...
On Tue, 27 Jan 2009 at 6:43pm, Jake wrote I should say that I STRONGLY recommend not creating ext3 file systems in the 2TB+ range - fsck takes too long and you'd hate to get hit by one of those in what is supposed to be a quick reboot...and disabling them on the file system isn't a good idea either. On the other hand, nothing is as well supported on RHEL/CentOS as is ext3. So if your data is really important to you, think hard about using another FS. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Poor RAID performance new Xeon server?
On Sat, 10 Jan 2009 at 10:46pm, Stewart Williams wrote Intel(R) Xeon(R) CPU 3065 @ 2.33GHz 4GB ECC memory To actually test disk performance, you need to use a filesize of at least 2X (and preferably 4X) memory size. Otherwise you're just testing memory performance. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
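Turning that rule of thumb into a number is a one-liner (Linux-specific; MemTotal in /proc/meminfo is in KiB):

```shell
# Benchmark files should be at least 2x RAM -- otherwise you are timing
# the page cache, not the disks.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "use a test file of at least $(( ram_kb * 2 / 1024 )) MiB"
```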
RE: [CentOS] Enterprise Package Tracker
On Sat, 29 Nov 2008 at 2:07pm, Joseph L. Casale wrote Is that how rpmfind works? Don't know _exactly_ how it searches, but I think that point is mute. ObPetPeeve: moot. The point is moot. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Where is the file that sets aliases?
On Mon, 10 Nov 2008 at 7:42pm, Anne Wilson wrote Looking back, I still can't see it, Kai. I remember being told to look in ~/.bashrc. If you're root (why are you logging in as root?), then ~ *is* /root. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] HA Storage Cookbook?
On Fri, 7 Nov 2008 at 2:35pm, nate wrote Gordon McLellan wrote: I guess I'm saying, if you interpret the name Serial Attached Scsi literally, then the Seagate ES.2 is not an SAS drive - it is not a scsi drive with a serial interface. However, if you interpret SAS as an interface standard, then the interface board determines what the drive is, more so than its mechanical construction. SAS and SATA use the same physical interface, the drive mentioned is most definitely SATA. Largest SAS drive I have heard of myself is 400GB, same as the max size for FC drives. No. No it isn't. It's SAS. The platters etc are the same hardware used in the SATA part, but the interface circuitry is native SAS. Note that they offer the drive in both SATA and SAS variants. While SATA and SAS are *supposed* to be able to be mixed freely, my vendor has warned me that it doesn't always work out that well. They have seen compatibility issues using SATA drives on SAS controllers. So for applications where you want/need a SAS controller but still need big capacity, these are the drives they recommend. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] HA Storage Cookbook?
On Fri, 7 Nov 2008 at 4:22pm, nate wrote Joshua Baker-LePain wrote: While SATA and SAS are *supposed* to be able to be mixed freely, my vendor has warned me that it doesn't always work out that well. They have seen compatibility issues using SATA drives on SAS controllers. So for applications where you want/need a SAS controller but still need big capacity, these are the drives they recommend. Sounds like you need a better vendor for a solution that will work. Wait, what? They steer me away from squirrely configs and find me one that works within my budget, and you're criticizing them? I'm rather confused. Don't try to explain, though. I don't think I'll get it. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] boot problems
On Tue, 28 Oct 2008 at 9:35pm, Phil Schaffner wrote At the risk of sounding even MORE pedantic, many would appreciate it if you ran a spell-checker as well. (Grammar checkers seem to be beyond the state-of-the-art in email clients.) :D The GRUB shell is quiet powerful and can help in debugging your
                  ^
Thus conforming to the rule that every spelling flame must contain at least one typo of its own -- well done! ;) -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Slow NFS writes
On Sat, 18 Oct 2008 at 11:21pm, Craig White wrote Server, CentOS 5.2 and updated earlier today, just installed a week ago. Client, Macintosh G4, OS X 10.4.11 *snip*

            Win2K      AFP        SMB        NFS
Copy To:    1m40.053s  0m22.566s  0m23.817s  2m11.849s
Copy From:  1m34.478s  0m20.709s  0m20.823s  0m23.487s

Do you have any Linux clients to test with? That way you could determine whether the problem is on the server or the client side. ISTR hearing bad things about Apple's NFS implementation (shocking, I know). You also want to test with larger files (at least 2x RAM of the server or client, whichever is larger) to make sure you're not just seeing cache effects. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Slow NFS writes
On Mon, 20 Oct 2008 at 10:49am, Craig White wrote I was using a 648 Gb 'PSD' file which surely is beyond caching. Your initial email stated 648 Megabyte Photoshop file (PSD). Also, your transfer times are on the order of 20 seconds. Unless you have a network running at 32 GB/s, I don't think you mean 648 GB. As an aside, it'd be fun to watch Photoshop try to open a file over half a terabyte in size. You definitely were correct in your assertion and stupid me should have tested it from another Linux box. $ time cp BackgroundGraphic.psd /home/filesystems/srv-adv/
real    0m18.547s
user    0m0.015s
sys     0m3.306s
so the problem isn't NFS slow writes...it's slow NFS writes from Macintosh client ;-( Get rid of the Macs. Problem solved. ;) -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] strict memory
On Thu, 16 Oct 2008 at 12:48pm, Mag Gam wrote Running 5.2 at our university. We have several student's processes that take up too much memory. Our system have 64G of RAM and some processes take close to 32-48G of RAM. This is causing many problems for others. I was wondering if there is a way to restrict memory usage per process? If the process goes over 32G simply kill it. Any thoughts or ideas? Have a look at /etc/security/limits.conf. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
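A sketch of what that limits.conf entry might look like. The group name is made up, and note that 'as' caps virtual address space per process (in KB); the 'rss' keyword is accepted but not enforced on 2.6 kernels:

```
# /etc/security/limits.conf
# cap each process at 32 GB of address space (32 * 1024 * 1024 KB)
@students    hard    as    33554432
```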
RE: [CentOS] RAID / OS Monitoring tools ?
On Mon, 22 Sep 2008 at 5:53pm, Fred Kienker wrote It's a dell 2900 : quad core XEON (with a spare slot for a 2nd quad core chip) 8G memory (expandable to 64G) 8 hot swap SATA HD slots onboard PERC6i RAID controller *snip* Now I want to be able to monitor the box, specifically with respect to the RAID drives so I'll know if one has gone bad and the RAID configuration has failed over to it. Anyone have any suggestions for tools to use ? You need the Dell OMSA tools. They are easy to install and work great with CentOS. Google Dell and OMSA to find them. You can add the Dell repository to yum to make it even easier to install and maintain them. If you're like me and generally hate vendor tools, I believe that Dell's PERC controllers are rebadged LSIs. LSI has a command line tool you can use to monitor them. MegaCli is more than just a bit obtuse, but a little bit of scripting goes a long way. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
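In the spirit of that "little bit of scripting", here is a sketch of pulling the array state out of MegaCli output so a cron job can alert on anything that isn't Optimal. The MegaCli invocation in the comment is from memory -- binary name and flags vary by version, so check `MegaCli -h` on yours:

```shell
#!/bin/sh
# Extract each logical drive's "State" line from MegaCli's LD report.
# Feed it e.g.:  MegaCli -LDInfo -Lall -aALL | ld_state
ld_state() {
    awk -F':' '/^State/ { gsub(/^ +| +$/, "", $2); print $2 }'
}
```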
Re: [CentOS] NFS exporting a GFS mount point
On Mon, 8 Sep 2008 at 10:23pm, Yanagisawa, Koji wrote Hello, I have a storage offering some 11 TB of space. I'd happily use ext3 and NFS export to 4 client machines, but 8 TB seems to be the tested maximum. I'd really like one mount point for the whole 11 TB. Since GFS offers lock_nolock option for local mounting, I'm assuming it's not so out of line to NFS export this GFS mount point. ext3 in CentOS 5.2 supports up to 16TB volumes. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Yum Issues with Dev groups
On Fri, 22 Aug 2008 at 9:37am, Akemi Yagi wrote On Fri, Aug 22, 2008 at 9:02 AM, Joseph L. Casale [EMAIL PROTECTED] wrote: which xen rpms did you install? The ones from centos, or the ones from xensource? Rolled my own from the 3.2.0 srpm. Generally when building for x86_64, it's best to remove all traces of x86 packages on the system. How do you do this at install? Wouldn't that be cleaner? I suppose a rpm command with a --queryformat ARCH string would list all that is x86 and I could pipe that into a remove command? Any ideas on how to do this cleanly? First inspect what i386 packages are on your system: rpm -qa --queryformat '%{name}-%{version}-%{release}.%{arch}\n' | grep i386 If you are sure you can delete all of them, then: yum remove *.i386 will do the job. It will ask Y/n, so look through the list before hitting the Enter key :-D Actually, both of those commands should be looking for i[36]86, otherwise you'll miss, e.g., glibc.i686. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
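To see the difference the character class makes, a quick demo against a canned package list (no rpm required; the package names are just examples):

```shell
# 'i386' alone would miss the glibc.i686 line; 'i[36]86' catches both
# i386 and i686 builds while leaving x86_64 packages alone.
printf '%s\n' glibc-2.5-24.i686 zlib-1.2.3-3.i386 bash-3.2-21.x86_64 \
    | grep 'i[36]86'
# prints the glibc and zlib lines only
```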
RE: [CentOS] Yum Issues with Dev groups
On Fri, 22 Aug 2008 at 11:41am, Joseph L. Casale wrote Actually, both of those commands should be looking for i[36]86, otherwise you'll miss, e.g., glibc.i686. Any way to simply not install them when doing an install? Unfortunately, not that I'm aware of. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Yum Issues with Dev groups
On Fri, 22 Aug 2008 at 11:22am, Akemi Yagi wrote On Fri, Aug 22, 2008 at 11:10 AM, Joshua Baker-LePain [EMAIL PROTECTED] wrote: On Fri, 22 Aug 2008 at 11:41am, Joseph L. Casale wrote Actually, both of those commands should be looking for i[36]86, otherwise you'll miss, e.g., glibc.i686. Any way to simply not install them when doing an install? Unfortunately, not that I'm aware of. There is a known issue with yum. See, for example, http://lists.centos.org/pipermail/centos-devel/2008-June/002961.html And a newer version of yum has a fix for that: http://lists.centos.org/pipermail/centos-devel/2008-June/002967.html For people who are interested, yum-3.2.17-0_beta is in the *testing* repo at this moment. When Joseph said when doing an install, I assumed that meant at system install time. I know of no way of doing a pure x86_64 install via anaconda (although I'd love to be told I'm wrong on that). For installing packages/package groups, then yum comes into the picture. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] What fires logrotate
On Thu, 21 Aug 2008 at 11:06am, Al Sparks wrote I've been taking a look at how RedHat (and CentOS) handles logrotate. According to the man page, logrotate is supposed to be fired by cron. But when I look at root's crontab $ sudo crontab -lu root no crontab for root What exactly fires logrotate (and other scheduled events like logwatch, which ends up in root's inbox)? Look in /etc/cron.{hourly,daily,monthly,weekly}. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
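Concretely: /etc/crontab runs run-parts over those directories, and the logrotate hook is a tiny script. From memory, so treat the exact text as approximate, /etc/cron.daily/logrotate on CentOS 5 looks roughly like:

```
#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
```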
Re: [CentOS] RE: Adaptec 2820 2gig+ partitions
On Wed, 20 Aug 2008 at 5:19pm, Brian Marshall wrote The controller is 8 channel SATA 300. That's not the problem. The problem is getting the OS to mount a 4TB array after the initial install of the OS. We had a 1.5 TB on there for a few years and never had this problem with arrays under 2 TB. Your original post said 2GB, not TB. Are you trying to boot from this device? Because you can't. Search the list history for multiple posts about dealing with 2TB drives. The highlights are: a) You can't boot from it b) You must use a gpt disklabel c) You must use parted to partition it -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Re: 6TB SCSI RAID vs. Centos
On Fri, 25 Jul 2008 at 2:17pm, Scott Silva wrote on 7-25-2008 6:26 AM Marcelo Roccasalva spake the following: You can't install centos (or redhat) over a gpt partition (unless itanium platform) and there is a big chance your bios won't boot such installation. I came with 2 solutions: if disk access performance isn't important (as for backup), I do software raid; or I install two little raid1 disks for the OS and then I use GPT or LVM on the multi-tera raid of big disks. Or partition the array with a small partition for OS and big partition (gpt) for data. You should be able to carve up the array that way. I think you're mixing your terminology there. gpt isn't a partition type, it's a disklabel. There's only one per disk (obviously), no matter how many partitions are on the disk. What some array controllers can do is carve a single array into multiple volumes (usually each presented on their own LUN). Then you could carve the one array into a small boot volume (with an msdos disklabel and multiple partitions) on one LUN and the rest in a large data volume (with a gpt disklabel) on the other LUN. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] 6TB SCSI RAID vs. Centos
On Thu, 24 Jul 2008 at 6:42pm, Milt Mallory wrote I have an Infortrend RAID box I'd like to see as one big 6TB partition, but I only can get 2.2TB partitions to work. I was trying to do this with an Adaptec controller but apparently they are only (any of them) 48 bits wide. Does anybody have a working system for SCSI/Centos over 2.2TB? Are you using gpt disk labels and parted (rather than fdisk) to do your partitioning? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
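One way to rehearse the GPT workflow without risking the array: parted happily operates on a plain file, so the steps can be tried on a sparse image first (paths and sizes here are arbitrary):

```shell
# Create a sparse 3TiB image (takes no real disk space), then label it
# GPT and carve one big partition -- the same commands you would run
# against the real /dev/sdX.
dd if=/dev/zero of=/tmp/fake-raid.img bs=1M count=0 seek=3145728
parted -s /tmp/fake-raid.img mklabel gpt
parted -s /tmp/fake-raid.img mkpart primary 0% 100%
parted -s /tmp/fake-raid.img print
rm -f /tmp/fake-raid.img
```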
Re: [CentOS] ps to pdf
On Mon, 21 Jul 2008 at 9:35am, Craig White wrote I need a way to convert files that I save with Firefox as a 'print to file' to 'pdf' I tried 'convert' but that rendered the text as graphics which grew the file and wasn't what I wanted. How would someone accomplish this - or can I just print to a PDF? Shockingly, there's ps2pdf... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
RE: [CentOS] yum remove old kernel pkgs -- wants to remove a to n of stuff
On Wed, 16 Jul 2008 at 12:21pm, Bowie Bailey wrote I didn't think there was any functional difference between: rpm -e package-name and yum remove package-name Isn't yum just a front-end for the rpm system? yum also does dependency checking/resolution. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] hard drive info
On Wed, 9 Jul 2008 at 12:18pm, Hiep Nguyen wrote i'm acessing a centos box via ssh, is there any way that i can find out the hard drive info, such IDE/SATA, format, size, make model, etc...? dmesg df man smartctl -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] PCI express ether cards
On Thu, 26 Jun 2008 at 8:10pm, Milt Mallory wrote Greetings. I'm looking for recommendations for a PCI Express ethernet card that works with Centos5. Kernel is: Linux mgw1.topix.net 2.6.18-53.1.4.el5PAE #1 SMP Fri Nov 30 01:21:20 EST 2007 i686 i686 i386 GNU/Linux 1) Upgrade. That kernel is vulnerable to the vmsplice exploit. 2) Intel. Period. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] dm-multipath use
On Wed, 25 Jun 2008 at 7:49pm, Geoff Galitz wrote Are folks in the Centos community succesfully using device-mapper-multipath? I am looking to deploy it for error handling on our iSCSI setup but there seems to be little traffic about this package on the Centos forums, as far as I can tell, and there seems to be a number of small issues based on my reading the dm-multipath developer lists and related resources. I am in the midst of setting this up on C-5 attached to a MSA1000 running active/passive. The documentation is... sparse, to say the least. There's more than a bit of guesswork in my setup, and I have yet to actually test the failover. I certainly think it'd be worthwhile for you to document your experience somewhere. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] 3ware 9650 issues
On Sun, 22 Jun 2008 at 1:37pm, Peter Arremann wrote On Sunday 22 June 2008 12:04:47 am Joshua Baker-LePain wrote: I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it, running as a single RAID6 w/ a hot spare. What size power supply do you have in your server? 1500W. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Re: 3ware 9650 issues
On Sun, 22 Jun 2008 at 10:23am, Scott Silva wrote on 6-21-2008 9:04 PM Joshua Baker-LePain spake the following: This of course leads to a several hour downtime as the system has to be powered down (not just rebooted) and then the volume needs to be fscked. I've been back and forth with both the vendor and (via the vendor) 3ware with this. The card has been replaced, as well as the whole system. I'm running the latest firmware and drivers from 3ware. That looks like either drive, cabling, or power problems. I'd agree, except for a) all the hardware has been swapped out and b) 1500W should be plenty. It's starting to sound like this may be a somewhat known issue with a *long* overdue fix coming from 3ware. *sigh* Thanks all. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] 3ware 9650 issues
I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it, running as a single RAID6 w/ a hot spare. These issues boil down to the card periodically throwing errors like the following: sd 1:0:0:0: WARNING: (0x06:0x002C): Command (0x8a) timed out, resetting card. Usually when this happens, it's followed by: 3w-9xxx: scsi1: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=0. On the less pleasant occasions, it's followed by: scsi1: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence. 3w-9xxx: scsi1: ERROR: (0x06:0x002B): Controller reset failed during scsi host reset. sd 1:0:0:0: scsi: Device offlined - not ready after error recovery This of course leads to a several hour downtime as the system has to be powered down (not just rebooted) and then the volume needs to be fscked. I've been back and forth with both the vendor and (via the vendor) 3ware with this. The card has been replaced, as well as the whole system. I'm running the latest firmware and drivers from 3ware. Have other folks had good luck with this card? What sorts of configs are you running? I'm in the position of needing more storage, and I'm a bit gun shy on 3ware at the moment... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] 3ware 9650 issues
On Sat, 21 Jun 2008 at 9:12pm, John R Pierce wrote Joshua Baker-LePain wrote: I've been having no end of issues with a 3ware 9650SE-24M8 in a server that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB) hooked to it, running as a single RAID6 w/ a hot spare. These issues boil down to the card periodically throwing errors like the following: Have other folks had good luck with this card? What sorts of configs are you running? I'm in the position of needing more storage, and I'm a bit gun shy on 3ware at the moment... I have no experience with that raid card, most of our larger systems use external SAN storage, but I will say that, IMHO, is a very large raid-6. we usually don't make single raid sets much large than 7-8 drives, and for a very large storage system, will stripe multiple raid5/6 sets rather than have one huge one. Would that I had such luxuries. This is a university lab with needs for massive amounts of data and not much money with which to do it. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] 3ware 9650 issues
On Sun, 22 Jun 2008 at 1:01am, Ruslan Sivak wrote Joshua Baker-LePain wrote: On Sat, 21 Jun 2008 at 9:12pm, John R Pierce wrote I have no experience with that raid card, most of our larger systems use external SAN storage, but I will say that, IMHO, that is a very large raid-6. we usually don't make single raid sets much larger than 7-8 drives, and for a very large storage system, will stripe multiple raid5/6 sets rather than have one huge one. Would that I had such luxuries. This is a university lab with needs for massive amounts of data and not much money with which to do it. Wouldn't striping a bunch of raid6 volumes give you about the same amount of space? No. We have 24 drives. Use one for a hot spare - leaves 23. 1 array: 23 drives - 2 for parity - capacity = 21 * drive capacity 2 arrays: array1 = 12 drives - 2 for parity - 10 drives array2 = 11 drives - 2 for parity - 9 drives - capacity = 19 * drive capacity 3 arrays: array1 = 8 drives - 2 for parity - 6 drives array2 = 8 drives - 2 for parity - 6 drives array3 = 7 drives - 2 for parity - 5 drives - capacity = 17 * drive capacity With 1TB drives, you're losing 2TB worth of volume space for each additional array. That's a lot of space. Unless I misunderstood you... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
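The bookkeeping above reduces to one formula; this little helper (hypothetical, just restating the arithmetic: one global hot spare, two RAID6 parity drives per array) makes it easy to play with other splits:

```shell
#!/bin/bash
# usable <total_drives> <num_arrays>: data drives left after one global
# hot spare and two RAID6 parity drives per array
usable() {
    local total=$1 arrays=$2
    echo $(( total - 1 - 2 * arrays ))
}

usable 24 1   # 1 array:  21 data drives
usable 24 2   # 2 arrays: 19 data drives
usable 24 3   # 3 arrays: 17 data drives
```

So each extra array costs two drives' worth of capacity, exactly as laid out above.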
Re: [CentOS] 3ware performance in CentOS
On Fri, 20 Jun 2008 at 9:55am, Florin Andrei wrote Florin Andrei wrote: Anyway, I did a test with the 2.6.18-93.el5.bz444759 kernel and there's no difference: 65 minutes, 27 MB/s. Looks like it doesn't matter which kernel I use, at least for this simple test with dd. I wonder if a test closer to real life, such as reading/writing stuff from/to MySQL, would produce different results. I guess there's only one way to find out. ;-) As a side note, the artificial benchmark reveals a huge difference between Ext3 and XFS - the latter is much faster when writing. Might be an artifact of some setting (after all, I do use a hardware RAID card). But the difference is very real. I was planning to use XFS anyway, so I'm not sure if I'll spend too much time troubleshooting Ext3. I don't think this is some kind of hidden effect of the MWI bug. XFS has *always* been faster on 3ware than ext3. RH has never been interested in looking at why. *shrug* -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] 3ware performance in CentOS
On Fri, 20 Jun 2008 at 10:34am, John R Pierce wrote Jim Perrin wrote: Nope. This is just a long-standing performance thing. You can tune ext3 to perform better, but on a 3ware card xfs will win, hands down. of course, XFS can also fail spectacularly. ext3fs fully journals all metadata updates. I'm sure this is a major portion of the performance differences on writes. Every FS can fail spectacularly. XFS (obviously) journals as well, but it doesn't force an ordered mode as ext3 does by default. However, even if you mount ext3 with data=writeback (which is roughly analogous to XFS' journaling mode), ext3 still doesn't perform nearly as well as XFS on 3ware. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
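For anyone who wants to repeat that comparison, the mount option in question looks like this (illustrative only; the device and mount point are placeholders, and data=writeback relaxes data/metadata ordering, so know what you're trading away):

```
# ext3 with metadata-only journaling, roughly XFS-like ordering semantics
mount -t ext3 -o data=writeback /dev/sdb1 /mnt/scratch

# or persistently, via /etc/fstab:
# /dev/sdb1  /mnt/scratch  ext3  defaults,data=writeback  0 2
```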
Re: [CentOS] 3ware performance in CentOS
On Thu, 19 Jun 2008 at 10:55am, Florin Andrei wrote Have a look at these pages: http://www.bofh-hunter.com/2008/06/13/3ware-performance-in-centos/ https://bugzilla.redhat.com/show_bug.cgi?id=444759 I'm comparing the default 5.1 64bit kernel with the patched one posted in the bug report (kernel-2.6.18-53.1.21.el5.bz32.x86_64) and I don't quite see any significant difference in write performance for this command: That's the wrong patched kernel. You'd need to be using one of the kernels in http://people.redhat.com/thenzl/kernel/ -- kernel-2.6.18-93.el5.bz444759.x86_64.rpm. I'd be interested in a way of telling from within the OS whether or not MWI is enabled... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] Forbidden: You don't have permission to access /phpMyAdmin/ on this server.
On Wed, 18 Jun 2008 at 7:32pm, Herta Van den Eynde wrote Environment: - CentOS 5.1, - Apache 2.2.3 - php 5.1.6 - phpMyAdmin 2.11.6 - MySQL 5.0.22 Brand new system, brand new installation of all the above products. All looks well, but when I try to connect to phpMyAdmin, I get an error: Forbidden: You don't have permission to access /phpMyAdmin/ on this server. I'll forgo all the paths I followed trying to get this to work and cut to the solution: I renamed the phpMyAdmin directory to pma, copied all files in the pma directory to a new phpMyAdmin (FWIW, using 'cp -pr'), and voilà, problem vanished. (I cannot explain why I even tried that.) My first idea was that maybe the copy somehow resolved some issue at the directory level, but when I output an 'ls -laR' of the two directories to two files, 'diff' shows both files to be identical (apart from the timestamps on the . and .. directories). The pma and phpMyAdmin directories reside in the same documentroot, have the same ownership, and the same permissions. This must be about the weirdest experience in my professional career. If anyone can shed some light on this, it'd be most welcome. I still have the original (malfunctioning) directory on the system to bounce ideas off if anyone has any inspiration (the system will go live this weekend). 2 things spring to mind: 1) httpd config with directory based allow/deny 2) selinux -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
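A quick way to check both suspects (the paths below are the stock CentOS httpd locations and are assumptions on my part):

```
# 1) look for a <Directory> block with deny rules covering phpMyAdmin
grep -ri phpmyadmin /etc/httpd/conf /etc/httpd/conf.d

# 2) compare SELinux mode and labels between the broken and working copies
getenforce
ls -ldZ /var/www/html/phpMyAdmin /var/www/html/pma

# reset labels to the policy defaults if they differ
restorecon -Rv /var/www/html/phpMyAdmin
```

A 'cp' fixing what looked like identical permissions is a classic SELinux symptom, since 'ls -l' doesn't show labels; 'ls -Z' does.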
Re: [CentOS] TCP offload cards in linux
On Fri, 13 Jun 2008 at 10:52am, nate wrote Anyone have experience with any? I've been having a real hard time finding info on any cards that actually support this under linux. (most of the cards work but I don't see drivers that actually offload the TCP stack) I have seen some comments where kernel developers don't like the idea as well. A pretty good discussion of this just occurred over on the beowulf mailing list. See http://marc.info/?t=12107921039&r=1&w=2. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] Problems installing 5.1 on a Tyan Thunder HEsl with a SCSI controller
On Wed, 11 Jun 2008 at 9:03pm, Timothy Selivanow wrote I'm trying to install 5.1 using the onboard LSI Symbios 53C1010, and I'm running into some trouble. When the computer first boots, the SCSI BIOS sees the three HDDs, but when I go to install, the installer hangs for a while at inserting the sym53c8xx driver and if I go over to the screen on F4 it shows that it is trying to scan the SCSI bus and is resetting all of the IDs. Once that is done, it moves on to the actual installer, but does not see any drives. Have you tried all the usual SCSI voodoo -- check the cables, check your termination, ensure you used the proper color goat? -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] FW: Partitioning help
On Tue, 3 Jun 2008 at 1:18pm, Rajeev R. Veedu wrote I have a Centos 4.5 server with a 3.3TB raid disk on a 3ware controller. Now the problem is that I am not able to see the partition in full since it shows only 1.2TB. You need CentOS 5 to support devices larger than 2TB. And you can't boot from such devices because, as the other poster mentioned, you must use a gpt partition label, which grub does not support. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] Re: XFS install issue
On Tue, 3 Jun 2008 at 12:29pm, [EMAIL PROTECTED] wrote Rebooted # fdisk /dev/sdb Created the partition. fdisk reported fdisk can't handle devices that large. You must 1) Use parted 2) mklabel gpt -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
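Spelled out, those two steps look like this (an untested sketch; the device name is a placeholder, and mklabel destroys the existing partition table, so double-check which disk you're on):

```
parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 0 -1
(parted) print
(parted) quit
```

Then run mkfs against the resulting /dev/sdb1 as usual.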
Re: [CentOS] XFS install issue
On Mon, 2 Jun 2008 at 7:03am, Johnny Hughes wrote I would also not use XFS in production ... but that is just me. If XFS was production ready, it would be in RHEL. Since it is turned on in Fedora and since it is purposely turned off in RHEL, one can reasonably conclude that the upstream people DO NOT THINK it is stable enough to use in production on RHEL. This is JUST my opinion :D IIRC, RH's stated reason (stated on the mailing lists in the midst of folks clamoring for XFS' inclusion) for not having XFS turned on in RHEL is *not* that it's not production ready. It's that they only have the resources (read: folks with knowledge in-depth enough to satisfy enterprise customers) to support 1 FS, and that's ext3. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] R (statistics package) on CentOS-5 ?
On Wed, 7 May 2008 at 9:57pm, Kay Diederichs wrote the statistics package R (www.r-project.org) was available for CentOS-4, but I need it for CentOS-5 (64bit). RPMforge has R-2.5.1 for CentOS-4, so I thought I'd try to install that on CentOS-5. However, yum complains about missing ggv, and the specfile indeed says that R requires ggv, which is not in CentOS-5 (it has kghostview). I could probably install with rpm -Uvh --nodeps, but the question is rather: has anybody built R for CentOS-5 ? It's in EPEL. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] I need storage server advice
On Tue, 6 May 2008 at 12:11pm, Ed Morrison wrote Situation: My current storage needs are approximately 1.5 TB annually. This will increase to about 3.5 TB annually over the next 5 years (rough est.). This box will just be a data archive and once it is full it will only be used very infrequently if not used at all. Files are small up to 10 MB but numerous. CentOS: Upgrading to the newer CentOS flavors. I will not have the ability to archive this data to tape and I am concerned about losing the data when upgrading the OS. How best to handle this? You have to be careful, but it's quite easy to leave partitions (and thus their data) alone when you are updating/reinstalling the OS. Storage limitation. It is my understanding that there is a 2 TB storage limitation with Linux (and windows) in general particularly for stability. I see that ReiserFS can go up to 16 TB. Is any one using this? If so, how has it been for you? You cannot boot from a device larger than 2TiB, but that's the only limitation at that size. I run several multi-TB servers (including over 8TB) on CentOS-5 with no issues (using ext3). You do not want to use ReiserFS. It's not supported under CentOS, and its future is far less than certain (and I do not want to restart *that* OT conversation). ext3 is the default FS under CentOS and works pretty well. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] ext3 filesystems larger than 8TB
On Fri, 2 May 2008 at 2:36pm, Monty Shinn wrote I am trying to create a 10TB (approx) ext3 filesystem. I am able to successfully create the partition using parted, but when I try to use mkfs.ext3, I get an error stating there is an 8TB limit for ext3 filesystems. I looked at the specs for 5 on the upstream vendor's website, and they indicate that there is a 16TB limit on ext3. Has anyone been able to create an ext3 filesystem larger than 8TB? Yes. 'mke2fs -F' forces it to make the FS, even though it thinks it's too big. They should have changed that when 16TB ext3 fs support moved from tech preview to production ready, but I think it got missed. Maybe in 5.2... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
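Concretely, something like the following (a sketch only; -j requests the ext3 journal, -F overrides the size sanity check, and the device name is a placeholder -- be very sure it's the right one):

```
# force creation of an ext3 filesystem past the 8TB warning
mke2fs -j -F /dev/sdb1
```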
Re: [CentOS] kickstart question
On Wed, 30 Apr 2008 at 2:46pm, Jerry Geis wrote I have a couple lines like: part / --ondisk=sda --fstype ext3 --size=2 --asprimary part swap --ondisk=sda --size=4000 --asprimary part /home --ondisk=sda --fstype ext3 --size=1 --asprimary --grow in my kickstart file. Is there a way to have 1 kickstart file that works for hda and sda both??? If you only expect to have 1 drive in the systems you're installing, you can just leave off the --ondisk=. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
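That is, for a one-drive box the same stanza with --ondisk dropped lets anaconda use whichever disk it finds, hda or sda (untested sketch, keeping the original sizes):

```
part /     --fstype ext3 --size=2 --asprimary
part swap  --size=4000 --asprimary
part /home --fstype ext3 --size=1 --asprimary --grow
```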
Re: [CentOS] Re: kickstart question
On Wed, 30 Apr 2008 at 4:30pm, Jerry Geis wrote I have a couple lines like: part / --ondisk=sda --fstype ext3 --size=2 --asprimary part swap --ondisk=sda --size=4000 --asprimary part /home --ondisk=sda --fstype ext3 --size=1 --asprimary --grow in my kickstart file. Is there a way to have 1 kickstart file that works for hda and sda both??? If you only expect to have 1 drive in the systems you're installing, you can just leave off the --ondisk=. Thanks, what do I do when I am installing RAID with 2 disks then. AFAIK, there you're stuck with calling the disks by hda/sda. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] OpenGL and CAD Apps
On Fri, 28 Mar 2008 at 10:05am, Joseph L. Casale wrote I am likely going to switch a desktop over from Windows to either CentOS or Fedora and run two CAD apps that now have Linux ports. I was wondering as I can't seem to find much specific info about the support of OpenGL in RH based OS's. Does CentOS support this out of the box or do I need to do anything with it? Both Fedora and CentOS support GL out of the box. However you do have to be careful about your graphics hardware in order to get hardware-accelerated 3D. Certain older ATI cards have open-source 3D support built right into the distro. For most newer cards, however, you'll need to use closed-source drivers from ATI or nvidia. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] OpenGL and CAD Apps
On Fri, 28 Mar 2008 at 11:39am, Frank Cox wrote On Fri, 28 Mar 2008 13:35:36 -0400 (EDT) Joshua Baker-LePain [EMAIL PROTECTED] wrote: For most newer cards, however, you'll need to use closed-source drivers from ATI or nvidia. Or the open-source drivers for Intel video chipsets. Intel's 3D performance still lags *far* behind that of ATI or nvidia. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
RE: [CentOS] OpenGL and CAD Apps
On Fri, 28 Mar 2008 at 11:48am, Joseph L. Casale wrote For most newer cards, however, you'll need to use closed-source drivers from ATI or nvidia. Or the open-source drivers for Intel video chipsets. The nvidia drivers look simple! On the other hand, the Intel drivers look difficult to install :( Anyone successfully done the Intel ones and can share some pointers? The Intel drivers (being open-source) are already in CentOS. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] questions on kickstart
On Fri, 28 Mar 2008 at 2:43pm, Jerry Geis wrote I have 2 questions dealing with 2 different kickstart files. 1) my kickstart sections for RAID disk setup and kickstart reports it cannot find sda. Why is that? sda is there and works. clearpart --all --initlabel part raid.01 --asprimary --bytes-per-inode=4096 --fstype=raid --onpart=sda1 Is sda already partitioned? raid / --bytes-per-inode=4096 --device=md0 --fstype=ext3 --level=1 raid.01 raid.03 raid / --bytes-per-inode=4096 --device=md1 --fstype=ext3 --level=1 raid.02 raid.04 If that's cut and pasted from your ks file, then you're requesting the same partition twice... 2) my kickstart section for normally single disk setup. However with 2 disks present in box it put / on sda and /home on sdb. Is there a way to put it ALL on sda??? If there is a second disk I want it left alone. clearpart --all --initlabel part / --fstype ext3 --size=2 --asprimary part swap --size=4000 --asprimary part /home --fstype ext3 --size=100 --grow --asprimary Add --ondisk=sda to each line. I refer to http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Installation_Guide/s1-kickstart2-options.html quite often... -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
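For question 2, "add --ondisk=sda to each line" amounts to this (untested sketch, keeping the original sizes):

```
clearpart --all --initlabel
part /     --ondisk=sda --fstype ext3 --size=2 --asprimary
part swap  --ondisk=sda --size=4000 --asprimary
part /home --ondisk=sda --fstype ext3 --size=100 --grow --asprimary
```

Note clearpart --all will still wipe the second disk's partition table; restrict it with --drives=sda if that disk really must be left alone.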
Re: [CentOS] Re: questions on kickstart
On Fri, 28 Mar 2008 at 4:23pm, Jerry Geis wrote clearpart --all --initlabel part --ondisk=sda raid.01 --asprimary --bytes-per-inode=4096 --fstype=raid --onpart=sda1 --size=2 part --ondisk=sda swap --asprimary --bytes-per-inode=4096 --fstype=swap --onpart=sda2 --size=4000 part --ondisk=sda raid.02 --asprimary --bytes-per-inode=4096 --fstype=raid --onpart=sda3 --size=1 --grow part --ondisk=sdb raid.03 --asprimary --bytes-per-inode=4096 --fstype=raid --onpart=sdb1 --size=2 part --ondisk=sdb swap --asprimary --bytes-per-inode=4096 --fstype=swap --onpart=sdb2 --size=4000 part --ondisk=sdb raid.04 --asprimary --bytes-per-inode=4096 --fstype=raid --onpart=sdb3 --size=1 --grow raid / --bytes-per-inode=4096 --device=md0 --fstype=ext3 --level=1 raid.01 raid.03 raid /home --bytes-per-inode=4096 --device=md1 --fstype=ext3 --level=1 raid.02 raid.04 I changed the config to use --ondisk above and at install I get a message saying: Unable to locate partition sda1 to use for . Press OK to reboot your system. Remove the 'onpart's. You can't use 'clearpart' and 'onpart' together. The manual says onpart tells anaconda to put the partition on the *already existing* device, but clearpart *removes* any already extant devices. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
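With the --onpart options removed, the stanza would look something like this (untested sketch; keeps the original sizes and mount points, --bytes-per-inode dropped for brevity):

```
clearpart --all --initlabel
part raid.01 --ondisk=sda --asprimary --fstype=raid --size=2
part swap    --ondisk=sda --asprimary --size=4000
part raid.02 --ondisk=sda --asprimary --fstype=raid --size=1 --grow
part raid.03 --ondisk=sdb --asprimary --fstype=raid --size=2
part swap    --ondisk=sdb --asprimary --size=4000
part raid.04 --ondisk=sdb --asprimary --fstype=raid --size=1 --grow
raid /     --device=md0 --fstype=ext3 --level=1 raid.01 raid.03
raid /home --device=md1 --fstype=ext3 --level=1 raid.02 raid.04
```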
RE: [CentOS] Re: questions on kickstart
On Fri, 28 Mar 2008 at 4:32pm, Ross S. W. Walker wrote I think you might be missing a little something in there, like /boot? /boot is not required to be its own partition. In the days of yore, when BIOSes couldn't boot from partitions that crossed the 1024 cylinder barrier, it made sense to have a small /boot as your first partition. These days? Not so much. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] A few questions regarding CentOS (5.0)
On Thu, 27 Mar 2008 at 3:24pm, Morten Nilsen wrote Yes, I am well aware of the dependency thing.. I used to maintain a large selection of packages in TSL contrib.. I did rpm -e libgnomesomething and added on packages until it stopped complaining about deps.. 'yum remove libgnomesomething' will do the depsolving for you (just like 'yum install'). -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF
Re: [CentOS] CentOS 5.1 install via PXE Failure
On Thu, 13 Mar 2008 at 5:11pm, James Gray wrote The installer starts, loads the kickstart script (attached), successfully verifies the installation media, checks the dependencies for the packages to be installed, formats the hard drive(s), then attempts to download the package: sysklogd-1.4.1-39.2.x86_64.rpm Somehow, your install is looking for CentOS 5.0 files rather than 5.1. Make sure that your repodata files match up with what is actually in your repo. Make sure there aren't any crossed symlinks somewhere. -- Joshua Baker-LePain QB3 Shared Cluster Sysadmin UCSF