Re: [CentOS] RHEL changes
On Thu, 21 Jan 2021, Phil Perry wrote: On 21/01/2021 22:40, John R. Dennison wrote: Surely anyone requiring less than 16 licences will now ditch CentOS 7 in favour of RHEL7? Drat. I have 25 systems at home running CentOS 7.9, and one system running OEL 7.9. Steve
Re: [CentOS] CentOS 8 Stream: The Good, The Bad, The Ugly
On Wed, 9 Dec 2020, Joshua Kramer wrote: It can, however, be mitigated if RedHat backtracks, admits their mistake, and affirmatively commits to support future CentOS point releases. I'll be interested to see how this turns out. It may already be too late. Even if RedHat says "my bad" and goes back on this decision, not many will trust them in the future. Steve
Re: [CentOS] [CentOS-announce] Release for CentOS Linux 7 (1503 ) on x86_64
On Thu, 2 Apr 2015, Les Mikesell wrote: I didn't see any indication there that you were planning to turn the /etc/redhat-release file into a symlink. In CentOS, /etc/redhat-release has always been a symlink to /etc/centos-release. Steve
Re: [CentOS] [CentOS-announce] Release for CentOS Linux 7 (1503 ) on x86_64
On Thu, 2 Apr 2015, Les Mikesell wrote: Well, if you define 'always' as 'for CentOS 6 and later'... Yes, you are right. I was relying on my obviously faulty and aged memory, so I checked on my two remaining CentOS 5 boxes. There is no /etc/centos-release file there at all, only an /etc/redhat-release, so obviously not a symlink at all. More coffee. Steve
Re: [CentOS] Strange crash after NVidia driver installation from ELRepo
On Tue, 3 Mar 2015, i...@microlinux.fr wrote: Any suggestions? There are multiple nvidia drivers in ELRepo, depending on the model of video card. Install and run nvidia-detect to find out which driver you need; just installing the kmod-nvidia package is not guaranteed to give you a working driver. Steve
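For reference, a minimal sketch of that workflow, assuming the ELRepo repository is already enabled (the package name printed by nvidia-detect is illustrative and varies by card):

# yum install nvidia-detect
# nvidia-detect
kmod-nvidia-340xx
# yum install kmod-nvidia-340xx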
[CentOS] SSI?
All, What is the state of SSI (single system image) on CentOS 6/7 these days? I'm interested in this at present just for the fun aspect of trying it. Obviously openMosix is no more, but OpenSSI, Kerrighed and LinuxPMI appear to be dormant. TIA, Steve
Re: [CentOS] rsync question: building list taking forever
On Sun, 19 Oct 2014, Keith Keller wrote: I suspect that sshfs's relatively poor performance is having an impact on your transfer. I have a 30TB filesystem which I rsync over an OpenVPN link, and building the file list doesn't take that long (maybe an hour?). (The links themselves are reasonably fast; if yours are not that would have a negative impact too.) Don't forget that the time taken to build the file list is a function of the number of files present, and not their size. If you have many millions of small files, it will indeed take a very long time. Over sshfs with a slowish link, it could be days. Steve
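If sshfs turns out to be the bottleneck, one hedged alternative is to let rsync talk over ssh directly instead of copying through a FUSE mount (host and paths here are hypothetical):

$ rsync -av -e ssh /data/ backup@remote:/backup/data/

That way the remote rsync walks its own local file system, instead of every stat() during list-building making a round trip through sshfs.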
Re: [CentOS] CentOS 7, xfs
On Thu, 25 Sep 2014, John R Pierce wrote: yes, you need inode64, as without it, it will be unable to create directories after the first 2TB(?) fills up. I have recently found that with XFS and inode64, certain applications won't work properly when the file system is exported w/NFS4 to a 32-bit system, such as Firefox and GNOME. Steve
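For reference, a typical fstab entry with the option in place (device and mount point are hypothetical):

/dev/vg0/data   /data   xfs   defaults,inode64   0 0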
Re: [CentOS] CentOS 7, xfs
On Thu, 25 Sep 2014, John R Pierce wrote: On 9/25/2014 2:01 PM, Steve Thompson wrote: On Thu, 25 Sep 2014, John R Pierce wrote: yes, you need inode64, as without it, it will be unable to create directories after the first 2TB(?) fills up. I have recently found that with XFS and inode64, certain applications won't work properly when the file system is exported w/NFS4 to a 32-bit system, such as Firefox and GNOME. even if you specify a 32-bit fsid on the export? No, I did not specify the fsid in this case. -s
Re: [CentOS] ZFS
On Mon, 15 Sep 2014, Fernando Cassia wrote: It's called BTRFS. It's supported by SUSE, Fujitsu, Oracle, among others. Yeah, but is it supported by the *US Government* ??? Steve
Re: [CentOS] sssd and authconfig and ldap database lookups
On Wed, 6 Aug 2014, Mauricio Tavares wrote: 1. I see that when you install sssd (this is centos 6), sssd.conf is not created. It certainly should be installed, as /etc/sssd/sssd.conf; it's in the RPM. Steve
Re: [CentOS] Large file system idea
On Sun, 18 May 2014, Ted Miller wrote: How recently have you looked at Gluster? It has seen some significant progress, though small files are still its weakest area. I believe that some use-cases have found that NFS access is faster for small files. I last looked at Gluster about two months ago, using version 3.4.2. Steve
Re: [CentOS] Large file system idea
On Sun, 18 May 2014, Les Mikesell wrote: Do you really need filesystem semantics or would ceph's object store work? Yes, I really need file system semantics; I am storing home directories. Steve
Re: [CentOS] Large file system idea
On Sun, 18 May 2014, Andrew Holway wrote: Have you looked at parallel filesystems such as Lustre and fhgfs? I have not looked at Lustre, as I have heard many negative things about it (including Oracle ownership). The only business using Lustre where I know the admins has had a lot of trouble with it. No redundancy. FhGFS looks interesting, and I am planning on looking at it, but have not yet done so. MooseFS and GlusterFS have both been evaluated, and were too slow. In the case of GlusterFS, way too slow. Steve
[CentOS] Large file system idea
This idea is intriguing... Suppose one has a set of file servers called A, B, C, D, and so forth, all running CentOS 6.5 64-bit, all interconnected with 10GbE. These file servers can be divided into identical pairs, so A is the same configuration (disks, processors, etc) as B, C the same as D, and so forth (because this is what I have; there are ten servers in all). Each file server has four Xeon 3GHz processors and 16GB memory.

File server A acts as an iSCSI target for logical volumes A1, A2,...An, and file server B acts as an iSCSI target for logical volumes B1, B2,...Bn, where each LVM volume is 10 TB in size (a RAID-5 set of six 2TB NL-SAS disks). There are no file systems built directly on any of the LVM volumes. The members of a server pair (A,B) are in different cabinets (albeit in the same machine room), are on different power circuits, and have UPS protection.

A server system called S (which has six processors and 48 GB memory, and is not one of the file servers) acts as iSCSI initiator for all targets. On S, A1 and B1 are combined into the software RAID-1 volume /dev/md101. Similarly, A2 and B2 are combined into /dev/md102, and so forth for as many target pairs as one has. The initial sync of /dev/md101 takes about 6 hours, with the sync speed being around 400 MB/sec for a 10TB volume. I realize that only half of the 10-gig bandwidth is available while writing, since the data is being written twice.

All of the /dev/md10X volumes are LVM PVs and are members of the same volume group, and there is one logical volume that occupies the entire volume group. An XFS file system (-i size=512, inode64) is built on top of this logical volume, and S NFS-exports that to the world (an HPC cluster of about 200 systems). In my case, the size of the resulting file system will ultimately be around 80 TB. The I/O performance of the XFS file system is most excellent, and exceeds by a large amount the performance of the equivalent file systems built with such packages as MooseFS and GlusterFS: I get about 350 MB/sec write speed through the file system, and up to 800 MB/sec read.

I have built something like this, and by performing tests such as sending a SIGKILL to one of the tgtd's, I have been unable to kill access to the file system. Obviously one has to manually intervene on the return of the tgtd in order to fail/hot-remove/hot-add the relevant target(s) to the md device. Presumably this will be made easier by using persistent device names for the targets on S. One could probably expand this to supplement the server S with a second server T to allow the possibility of failover of the service should S croak. I haven't tackled that part yet.

So, what failure scenarios can take out the entire file system, assuming that both members of a pair (A,B) or (C,D) don't go down at the same time? There's no doubt that I haven't thought of something. Steve
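A hedged sketch of the assembly on S, once the targets have been configured on A and B (all device names below are hypothetical; in practice persistent names are a good idea, as noted above):

# iscsiadm -m discovery -t sendtargets -p serverA
# iscsiadm -m discovery -t sendtargets -p serverB
# iscsiadm -m node -l
# mdadm --create /dev/md101 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mdadm --create /dev/md102 --level=1 --raid-devices=2 /dev/sdd /dev/sde
# pvcreate /dev/md101 /dev/md102
# vgcreate bigvg /dev/md101 /dev/md102
# lvcreate -l 100%FREE -n biglv bigvg
# mkfs.xfs -i size=512 /dev/bigvg/biglv
# mount -o inode64 /dev/bigvg/biglv /export

A write-intent bitmap (mdadm --grow /dev/md101 --bitmap=internal) may also be worth considering, so that a target that drops out and returns resyncs incrementally rather than from scratch.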
Re: [CentOS] Large file system idea
On Sat, 17 May 2014, SilverTip257 wrote: Sounds like you might be reinventing the wheel. I think not; see below. DRBD [0] does what it sounds like you're trying to accomplish [1]. Especially since you have two nodes A+B or C+D that are RAIDed over iSCSI. It's rather painless to set up two-nodes with DRBD. I am familiar with DRBD, having used it for a number of years. However, I don't think this does what I am describing. With a conventional two-node DRBD setup, the drbd block device appears on both storage nodes, one of which is primary. In this case, writes to the block device are done from the client to the primary, and the storage I/O is done locally on the primary and is forwarded across the network by the primary to the secondary. What I am describing in my experiment is a setup in which the block device (/dev/mdXXX) appears on neither of the storage nodes, but on a third node. Writes to the block device are done from the client to the third node and are forwarded over the network to both storage servers. The whole setup can be done with only packages from the base repo. I don't see how this can be accomplished with DRBD, unless the DRBD two-node setup then iscsi-exports the block device to the third node. With provision for failover, this is surely a great deal more complex than the setup that I have described. If DRBD had the ability for the drbd block device to appear on a third node (one that *does not have any storage*), then it would perhaps be different. Steve
Re: [CentOS] Large file system idea
On Sat, 17 May 2014, Eero Volotinen wrote: How about glusterfs? I have tried glusterfs; the large file performance is reasonable, but the small file performance is too low to be usable. Steve
Re: [CentOS] Large file system idea
On Sun, 18 May 2014, Dennis Jacobfeuerborn wrote: Why specifically do you care about that? Both with your solution and the DRBD one the clients only see an NFS endpoint so what does it matter that this endpoint is placed on one of the storage systems? The whole point of the exercise is to end up with multiple block devices on a single system so that I can combine them into one VG using LVM, and then build a single file system that covers the lot. On a budget, of course. Also while with your solution streaming performance may be ok latency is going to be fairly terrible due to the round-trips and synchronicity required so this may be a nice setup for e.g. a backup storage system but not really suited as a more general purpose solution. Yes, I hear what you are saying. However, I have investigated MooseFS and GlusterFS using the same resources, and my experimental iSCSI-based setup gives a file system that is *much* faster than either in practical use, latency notwithstanding. Steve
Re: [CentOS] Disappearing Network Manager config scripts
My last test with Network Manager was a couple of years ago. At that time, a client that was set to boot using DHCP and NM would not set its hostname when such was provided with the DHCP response. That was a show stopper for me (none of my 200+ non-wifi clients have any configuration on them that identifies the machine in any way). Is this still the case? Steve
Re: [CentOS] Single sign-on for CentOS-6
On Tue, 28 Jan 2014, James B. Byrne wrote: Does anyone here use a Samba4 setup for single sign-on for MS_Win workstations and CentOS-6 boxes? Does anyone here use it for imap and/or smtp authentication? Yes to all of these, using sssd on CentOS, for about 18 months now. It works very well. We have two DC's on CentOS, no Windows DC's. No winbind. I can post the sssd.conf if anyone is interested. Steve
Re: [CentOS] Single sign-on for CentOS-6
On Thu, 30 Jan 2014, Bob Marcan wrote: Please post sssd.conf. OK, here it is. Note that we're using service discovery to locate the DC's, which avoids having to hard-code the DC host names. This particular sssd.conf was from a machine called nebula, and europa.icse.cornell.edu is the domain (and realm) name.

[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam
domains = LOCAL, EUROPA

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3

[pam]
reconnection_retries = 3
pam_pwd_expiration_warning = 7

[domain/LOCAL]
description = Local Users domain
id_provider = local
enumerate = false
min_id = 400
max_id = 499

[domain/EUROPA]
description = EUROPA Environment
id_provider = ldap
auth_provider = krb5
chpass_provider = krb5
enumerate = false
min_id = 1000
max_id = 5
dns_discovery_domain = europa.icse.cornell.edu
ldap_sasl_mech = GSSAPI
ldap_sasl_authid = HOST/nebula.icse.cornell@europa.icse.cornell.edu
ldap_search_base = DC=europa,DC=icse,DC=cornell,DC=edu
ldap_id_use_start_tls = false
ldap_tls_reqcert = never
ldap_tls_cacertdir = /etc/openldap/cacerts
ldap_schema = rfc2307bis
ldap_referrals = false
ldap_force_upper_case_realm = true
ldap_access_order = expire
ldap_account_expire_policy = ad
ldap_sasl_canonicalize = false
ldap_user_search_base = CN=users,DC=europa,DC=icse,DC=cornell,DC=edu
ldap_user_object_class = person
ldap_user_name = sAMAccountName
ldap_user_fullname = displayName
ldap_user_gecos = displayName
ldap_user_uid_number = uidNumber
ldap_user_gid_number = gidNumber
ldap_user_home_directory = unixHomeDirectory
ldap_user_shell = loginShell
ldap_user_principal = userPrincipalName
ldap_user_modify_timestamp = whenChanged
ldap_group_search_base = CN=users,DC=europa,DC=icse,DC=cornell,DC=edu
ldap_group_object_class = group
ldap_group_name = sAMAccountName
ldap_group_gid_number = gidNumber
ldap_group_modify_timestamp = whenChanged
ldap_group_nesting_level = 2
krb5_server = europa.icse.cornell.edu
krb5_kpasswd = europa.icse.cornell.edu
krb5_realm = EUROPA.ICSE.CORNELL.EDU
krb5_ccachedir = /tmp
krb5_ccname_template = FILE:%d/krb5cc_%U_XX
krb5_auth_timeout = 15

-Steve
Re: [CentOS] A question about 7
On Wed, 15 Jan 2014, mark wrote: What do you mean, slot? All of my servers, and our systems at home, the NIC's on the m/b. What slot is that? Is it labeled *anywhere*? No, of course not. Many servers have PCI cards for NICs in addition to those on the motherboard (if any). For example, most of my file servers have eight Ethernet interfaces (six 1GbE, two 10GbE). On my Dell servers, the built-in interfaces are labeled on the back panel. However, at least in CentOS 6, you can call the interfaces anything you want by suitably changing /etc/udev/rules.d/70-persistent-net.rules. The names used have to be consistent with /etc/sysconfig/network-scripts/ifcfg-* of course. BTW, I have some workstations that have only a single interface, and that comes up as p2p1. I actually like the new scheme better, but don't get me started on the use of UUID in /etc/fstab... Steve
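For anyone who hasn't edited that file before, a typical CentOS 6 rule looks like the sketch below (the MAC address is hypothetical); change NAME= to whatever you want, keep the matching ifcfg-* file name in step, and reboot:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"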
Re: [CentOS] Odd useradd/LDAP behaviour
On Fri, 11 Oct 2013, Paul Jones wrote: So why is LDAP making useradd use the wrong values? It isn't.
Re: [CentOS] Anyone using CentOS Active Directory like system?
On Sat, 28 Sep 2013, Rajagopal Swaminathan wrote: Have you looked into Samba 4 which provides build for Centos and it seems it does support AD as DC: One more vote for Samba4. -s
Re: [CentOS] erase disk
On Thu, 26 Sep 2013, Bret Taylor wrote: A fairly simple solution is dd if=/dev/zero (or urandom) of=/dev/(device) I usually hit the disk with a hammer. Satisfying :-) -s
[CentOS] ntpd heavy CPU consumption
CentOS 6.4, x86_64. ntpd on one of my systems has started consuming 66% of one core, although it appears to be functioning correctly otherwise. No pertinent logs. Of course, nothing was changed :) I've seen this before many times, but usually the CPU consumption falls back to normal within a day or so, but this has been going on for several weeks now. Stopping and restarting ntpd makes no difference. Reinstalling ntpd makes no difference. The ntpd.conf file is the same as on 200+ other systems, all of which work normally. Anyone seen this before? Steve
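One hedged way to narrow something like this down is to look at what the daemon is actually doing while it spins:

# ntpq -p
# strace -c -p $(pidof ntpd)

(let the strace run for ten seconds or so, then interrupt it with ^C to get the syscall summary; a tight loop on one syscall usually points at the culprit).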
Re: [CentOS] What on Centos is wiping out my eth0 IP address every 5 minutes?
On Tue, 23 Jul 2013, Rock wrote: Why? How do I stop this? (All I want is for eth0 to *stay* at the IP address I set it to!) finger NetworkManager. Probably need NM_CONTROLLED=no in ifcfg-eth0. -steve
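A sketch of a static ifcfg-eth0 with NetworkManager kept out of the way (addresses are hypothetical):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
NM_CONTROLLED=no

followed by a service network restart.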
Re: [CentOS] What on Centos is wiping out my eth0 IP address every 5 minutes?
On Tue, 23 Jul 2013, Rock wrote: If I set this to no and reboot, will it have any negative implication for my normal wireless network (which I use all day)? It will not; it has effect only for eth0. Steve
Re: [CentOS] [Samba] Samba4 and NFSv4
On Fri, 14 Jun 2013, Steve Thompson wrote: I still have an issue with user access to the NFSv4 mount, and a workaround for it, but that's for another time. And now is another time (but I am at the point of giving up on this for now, as it has become a large consumer of time). To reiterate, I am trying to get Kerberized NFSv4 to work with CentOS 6.4 clients in a Samba4 domain, using sssd and Samba 4.0.5 (no winbind). Now, CentOS 6.4 uses kernel 2.6.32-358.6.2.el6 and nfs-utils 1.2.3-36. First of all, the Samba4 KDC (a separate pair of systems) appears to be working, DNS (samba4+bind9+dlz) is working (forward and reverse), and NFSv4 is working just fine with sec=sys (ie no Kerberos), so I believe the basic infrastructure to be sound, including ID mapping. I am using the sec=sys case in production with Samba4, so I know that to be good (and, interestingly, it feels a lot snappier than NFSv3). NFSv4 mounts with sec=krb5 work fine as long as I create a suitable UPN in the Samba database. On the client and server:

# kinit Administrator
# FQDN=`hostname`
# msktutil \
    --base CN=Computers \
    --keytab /etc/krb5.keytab \
    --dont-expire-password \
    --no-pac \
    --computer-name nfs-$HOST \
    --hostname $FQDN \
    --service nfs/$FQDN \
    --upn nfs/$FQDN \
    --user-creds-only

and the nfs/... entries show up within the client and server's /etc/krb5.keytab and look correct:

# klist -ke
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- ------------------------------------------------------------
   1 host/fqdn@REALM (des-cbc-crc)
   1 host/fqdn@REALM (des-cbc-md5)
   1 host/fqdn@REALM (arcfour-hmac)
   1 host/fqdn@REALM (aes128-cts-hmac-sha1-96)
   1 host/fqdn@REALM (aes256-cts-hmac-sha1-96)
   1 host/shortname@REALM (des-cbc-crc)
   1 host/shortname@REALM (des-cbc-md5)
   1 host/shortname@REALM (arcfour-hmac)
   1 host/shortname@REALM (aes128-cts-hmac-sha1-96)
   1 host/shortname@REALM (aes256-cts-hmac-sha1-96)
   1 SHORTNAME$@REALM (des-cbc-crc)
   1 SHORTNAME$@REALM (des-cbc-md5)
   1 SHORTNAME$@REALM (arcfour-hmac)
   1 SHORTNAME$@REALM (aes128-cts-hmac-sha1-96)
   1 SHORTNAME$@REALM (aes256-cts-hmac-sha1-96)
   1 HOST/fqdn@REALM (des-cbc-crc)
   1 HOST/fqdn@REALM (des-cbc-md5)
   1 HOST/fqdn@REALM (arcfour-hmac)
   1 HOST/fqdn@REALM (aes128-cts-hmac-sha1-96)
   1 HOST/fqdn@REALM (aes256-cts-hmac-sha1-96)
   2 nfs-shortname$@REALM (arcfour-hmac)
   2 nfs-shortname$@REALM (aes128-cts-hmac-sha1-96)
   2 nfs-shortname$@REALM (aes256-cts-hmac-sha1-96)
   2 nfs/fqdn@REALM (arcfour-hmac)
   2 nfs/fqdn@REALM (aes128-cts-hmac-sha1-96)
   2 nfs/fqdn@REALM (aes256-cts-hmac-sha1-96)

Here /data is the exported bind mount that is underneath the fsid=0 exports entry:

# mount -t nfs4 -o sec=krb5 server_fqdn:/data /mnt
# (works)

I can browse the mount point as root and all permissions and ownerships are correct, except of course that I cannot descend into directories for which root (aka nobody) does not have permissions, as expected. Now as a user (me, with UID 1002), using the server as a client (but using a separate client makes no difference), I can't even browse:

$ kinit
$ ls /mnt
ls: cannot access /mnt: Permission denied

and that's as far as I can get. From /var/log/messages:

rpc.gssd[7564]: using FILE:/tmp/krb5cc_1002 as credentials cache for client with uid 1002 for server server_fqdn
rpc.gssd[7564]: using environment variable to select krb5 ccache FILE:/tmp/krb5cc_1002
rpc.gssd[7564]: creating context using fsuid 1002 (save_uid 0)
rpc.gssd[7564]: creating tcp client for server server_fqdn
rpc.gssd[7564]: DEBUG: port already set to 2049
rpc.gssd[7564]: creating context with server nfs@server_fqdn
rpc.gssd[7564]: WARNING: Failed to create krb5 context for user with uid 1002 for server fqdn

I have of course researched this at length, and found lots of instances of folks seeing the same "Failed to create krb5 context" message, but no-one with the same combination of OS and Samba4, and no resolutions. I have also tried a Fedora 18 client and server (kernel 3.9.5-201.fc18, nfs-utils 1.2.7-6) with a different but equivalent pair of Samba4 domain controllers. Again, NFSv4 with sec=sys works fine, and with sec=krb5 it fails in *exactly* the same way as for CentOS. Using "net ads keytab add nfs..." properly creates an SPN, but this is not sufficient, on both CentOS and Fedora. Any ideas? Please stop me from drinking so much coffee. TIA! -Steve
Re: [CentOS] [Samba] Samba4 and NFSv4
On Thu, 20 Jun 2013, steve wrote: Thanks for your reply! I am really pulling my hair out over this one, and I don't have that much left :( What do you have in /etc/idmapd.conf The content of this file is correct as far as I understand it, as it works with NFSv3 and NFSv4 with sec=sys:

[General]
Verbosity = 0
Domain = icse.cornell.edu
Local-Realms = TITAN.TEST.CORNELL.EDU

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

[Translation]
Method = nsswitch

(and I have nsswitch.conf correctly configured). Note: in my case, the value of Domain in idmapd.conf is NOT the same as the DNS domain name. But as I understand it, as long as it is the same on all servers and clients, this should not matter, as it is just a label. I tried setting it to the DNS domain name, but it didn't make any difference. And changing it on just the server and not the clients leaves all ownerships as being nobody:nobody instead of the proper ownerships, which is (a) expected, and (b) leads me to believe that rpc.idmapd is working as it should. Starting rpc.idmapd with -vvv dumps the mappings to /var/log/messages, and they are correct. In any case, clients don't all have the same DNS domain name. What does ps aux | grep rpc give?

rpc       1616  0.0  0.0  18972   992 ?  Ss   Jun18  0:00 rpcbind
rpcuser   1649  0.0  0.0  25420  1380 ?  Ss   Jun18  0:00 rpc.statd
root      1678  0.0  0.0      0     0 ?  S    Jun18  0:00 [rpciod/0]
root      1679  0.0  0.0      0     0 ?  S    Jun18  0:01 [rpciod/1]
root      5789  0.0  0.0  50112  2072 ?  Ss   12:06  0:00 rpc.svcgssd -vvv
root      5795  0.0  0.0 107304   276 ?  Ss   12:06  0:00 rpc.rquotad
root      5799  0.0  0.0  22832  2560 ?  Ss   12:06  0:00 rpc.mountd --no-nfs-version 2
root      5850  0.0  0.0  36900  1048 ?  Ss   12:06  0:00 rpc.idmapd -vvv
root      8807  0.0  0.0  37340  2556 ?  Ss   16:37  0:00 rpc.gssd -vvv

All the expected daemons are present, including rpc.gssd and rpc.svcgssd. I have rpc.svcgssd running on the clients too, although it should not be necessary there (but the CentOS init scripts don't give the option to not start it). Can the user browse using nfs3? mount -t nfs3 -o sec=krb5 server_fqdn:/data /mnt No; exactly the same result as NFSv4. But yes with sec=sys. Have a look at the gotchas. There's loadsa wrong info about kerberos and nfs4: http://linux-nfs.org/wiki/index.php/Nfsv4_configuration That's one of the many articles that I've read (several times). I don't see anything wrong in what I have done (btw, I don't agree that the fsid=0 export should be mode 1777, and I don't agree that your first exports example is the proper way to do it. But in any event I have tried those too, to no effect). Steve
Re: [CentOS] [Samba] Samba4 and NFSv4
On Thu, 20 Jun 2013, John Hodrien wrote: Is it possible that Samba4 includes a large PAC on the kerberos credential and you're going over the limit in kernel? Well, that is a good avenue to explore. The user that I am testing with (me) is only in five groups, but nevertheless I will take a further look at that. Five minutes later: holy crap! That is it. I took a user in only one group: permission denied. I set the NO_AUTH_DATA_REQUIRED flag in userAccountControl (via ldbedit), and hey presto, NFSv4+krb5 now works. You sir are a steely-eyed missile man! I'm not convinced your comment about having to run svcgssd on clients is enforced due to CentOS init scripts, but it shouldn't cause any bother as you say. No, it doesn't cause any bother. It just seems that the start of both rpc.gssd and rpc.svcgssd is conditional on SECURE_NFS being set to yes. There are no NEED_GSSD or NEED_SVCGSSD or whatever to filter it further. Thanks, Steve
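For anyone hitting the same wall, a hedged sketch of the flag change (the sam.ldb path and the account name are hypothetical; NO_AUTH_DATA_REQUIRED is bit 0x02000000, i.e. 33554432, of userAccountControl):

# ldbedit -H /usr/local/samba/private/sam.ldb '(sAMAccountName=smt)' userAccountControl

then OR 0x02000000 into the existing userAccountControl value in the editor and save.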
Re: [CentOS] [Samba] Samba4 and NFSv4
On Tue, 11 Jun 2013, Steve Thompson wrote: * allow_weak_crypto=yes is REQUIRED in krb5.conf for this software version combo. * a separate user object is REQUIRED with the UPN nfs/fqdn. I add this using msktutil on the client when the client is joined to the domain. Using net ads keytab add nfs is NOT sufficient, since it adds an SPN and not a UPN. Aw crap, I hate it when I do that. It turns out that allow_weak_crypto=yes is NOT required at all, provided that the nfs/fqdn UPN that is created supports the necessary enctypes. I originally had --enctypes=0x3 when I created the UPN with msktutil; by recreating the UPN without using --enctypes at all, allow_weak_crypto=yes is no longer needed on either client or server, and NFSv4 mounts work just fine with everything essentially stock. It is still true that a UPN must be created, and net ads keytab add is not sufficient. This is with a Samba4 domain, btw. I still have an issue with user access to the NFSv4 mount, and a workaround for it, but that's for another time. Steve
Re: [CentOS] [Samba] Samba4 and NSFv4
On Sat, 8 Jun 2013, Steve Thompson wrote: Running out of ideas! Well, I managed to solve this one. It turned out to be nothing to do with Samba4, nor the version of nfs-utils (1.2.3-36) or the version of the kernel (2.6.32-358.6.2.el6) on the NFS server and client. It was in the /etc/exports file; I was exporting /mnt/exports (the NFSv4 root with fsid=0) with sec=sys:krb5 and /mnt/exports/data (a file system), also with sec=sys:krb5, but also /mnt/data (the real file system, which is bind-mounted on to /mnt/exports/data), this time without specifying sec=. The latter was as a service to clients using NFSv3. It transpired that by adding sec=sys:krb5 to the latter export, the NFSv4+krb5 mounts all started working. I could argue that this is a bug, but whatever, it is now working. Notes: * allow_weak_crypto=yes is REQUIRED in krb5.conf for this software version combo. * a separate user object is REQUIRED with the UPN nfs/fqdn. I add this using msktutil on the client when the client is joined to the domain. Using net ads keytab add nfs is NOT sufficient, since it adds an SPN and not a UPN. Steve
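A sketch of the exports layout described above, with the fix applied (the client specification and the rw,sync options are illustrative only; the paths are the ones from the post):

/mnt/exports        *(rw,sync,fsid=0,sec=sys:krb5)
/mnt/exports/data   *(rw,sync,sec=sys:krb5)
/mnt/data           *(rw,sync,sec=sys:krb5)

The third line previously carried no sec= option; adding sec=sys:krb5 there is what made the NFSv4+krb5 mounts start working.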
Re: [CentOS] [Samba] Samba4 and NSFv4
Let's see if on my third attempt I can spell NFS properly :)
Re: [CentOS] Supermicro Boot Failures with DVD Centos 6.2
On Sun, 21 Apr 2013, mark wrote: Ok. Let me start by saying we have some honkin' hot servers (48/64 cores), and they'd had *some* problems, but when we went from 5.x to 6.x, we started seeing more problems... I have about 20 servers with Supermicro boards, and there was a fair amount of instability under 6.3 (hangs for no apparent reason every couple of weeks on about a quarter of the machines). Since I updated to 6.4, everything has been solid. YMMV, of course. Steve
Re: [CentOS] Lockups with kernel-2.6.32-358.0.1.el6.i686
On Fri, 8 Mar 2013, Kwan Lowe wrote: Yes, latest BIOS installed. I have 2 of these also with similar configurations except for the NIC. One works perfectly, the other has constant freezes. The working one has a slightly older BIOS so I'm thinking of downgrading the glitchy one. Just a wild idea: is the NIC in the system that freezes a Broadcom and in the other system something else? If so, disable_msi=1 may help. Steve
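If it does turn out to be a Broadcom NIC, a hedged sketch of the workaround (assuming the bnx2 driver, which has a disable_msi module parameter; other drivers may spell it differently or not offer it at all):

# /etc/modprobe.d/bnx2.conf
options bnx2 disable_msi=1

then reload the module or reboot.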
Re: [CentOS] raid 1 question
On Thu, 7 Mar 2013, John R Pierce wrote: On 3/7/2013 3:35 PM, Gerry Reno wrote: Dave, I've been using software raid with every type of RedHat distro RH/CentOS/Fedora for over 10 years without any serious difficulties. I don't quite understand the logic in all these negative statements about software raid on that wiki page. The worst I get into is I have to boot from a bootdisk if the MBR gets corrupted for any reason. No big deal. Just rerun grub. +1 have you been putting /boot on a mdraid? that's what the article is recommending against. I don't understand why. As long as you remember to install grub on each drive you're good to go. Steve
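For grub legacy on CentOS 5/6 with /boot on a two-disk md RAID-1, the usual incantation is along these lines (disk names are hypothetical):

# grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Pointing device (hd0) at each disk in turn writes a bootable MBR on both, so the box can still boot if either drive dies.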
Re: [CentOS] drbd and el6
On Tue, 26 Feb 2013, John R Pierce wrote: the initial sync of the 8TB starting volumes is looking to be a 460 hour affair. Something wrong here. That's only 5 MB/sec; I did an initial sync of a 10TB volume in less than a day (dual bonded gigabits, dedicated). Steve
Re: [CentOS-virt] Virtualisation, guests cached memory and density
On Fri, 8 Feb 2013, Karanbir Singh wrote: Xen, because of the way it works, will always get to higher density / performance than KVM when density and reasonable performance are on the plate. My experience is the exact opposite. -s
Re: [CentOS-virt] Virtualisation, guests cached memory and density
On Fri, 8 Feb 2013, Karanbir Singh wrote: On 02/08/2013 05:20 PM, Steve Thompson wrote: On Fri, 8 Feb 2013, Karanbir Singh wrote: Xen, because of the way it works, will always get to higher density / performance than KVM when density and reasonable performance are on the plate. My experience is the exact opposite. Do tell more.. I have about 5-ish years Xen experience and 2 years with KVM, covering several hundred different VM's. I switched from Xen to KVM a few weeks after trying KVM for the first time (so my Xen experience is two years out of date). All Linux VM's are PV with virtio; Windows uses virtio also. Bridged networking.

One example: the physical host was a Dell PE2900 with 8 cores and 24 GB memory, running (now) CentOS 5.9. I wished to run 38 VM's on this, with the guest O/S being various CentOS versions and Windows XP, 2003 and 7. I could never get 30 or more VM's to start under Xen. It did not matter in which order I started the 30 VM's; the 30th machine always (no matter which one it was) failed to boot. There were periodic strange failures with the Xen guests, and accurate time keeping was always a problem. I switched to KVM just to see what the fuss was about. Using the same disk images as input, I had the 30 VM's up and running without fuss in less than 2 hours. I went on to run the whole 38 in short order. I had no issues with KVM, and to this day I have several physical hosts with about 75 guests, and I have never had a single problem with KVM (really). Time keeping does not appear to be problematical.

One of my workloads consists primarily of builds of large software packages, so it is a heavy fork() load. Performance of the guests, measured in terms of both build time and network performance, has been so much better under KVM than under Xen that it's not even funny. I posted on this some time ago. At the time of my last Xen experience, the memory assigned to all active guests had to fit simultaneously in the host's physical memory, so that provided an upper limit. With KVM, the guest's memory is pageable, and so this limit goes away (unless in a practical sense the guests are all active simultaneously, which is not true for any of my workloads).

I see the ability to run top as a normal user on a KVM host and see what the guests are up to as a big advantage. Sure, one can run xentop on Xen, but only if you have root access. Xen hosts have to run a Xen-enabled kernel; not so with KVM. I typed this off the top of my head, so I'm sure I missed a bunch of things. Steve
Re: [CentOS-virt] Virtualisation, guests cached memory and density
On Fri, 8 Feb 2013, James Hogarth wrote: Have you looked at KVM with a c6 host yet? It's a marked improvement over c5 hosts... Yes, I have a samba4 domain controller running as a KVM guest on a CentOS 6.3 host (two of them w/DRS, actually). Runs like a champ. I also have two LVS-DR load balancers running as CentOS 6.3 KVM guests with keepalived w/VRRP, one on a C5.8 host and one on a C6.3 host. Each guest has three network interfaces, as does the host (bridged mode with dual bonded interfaces underneath). Services are LDAP, Windows remote desktop, HTTP, webmail, IMAP and SMTP. Everything works fine for all services with the exception of SMTP, which works fine on C5 but not on C6 (same guest setup, same realservers), where it loses connections when sending mail messages larger than about 10MB. Even setting rp_filter=2 does not help; I have not pinned this one down yet, but I doubt that KVM is responsible since C5 works. As a point of interest, the Windows RDP service feeds 31 Windows XP virtio realservers, which are themselves KVM guests on a Dell R710 (8 physical cores + hyperthreading, 48GB) running CentOS 5.9 (soon to be 6.3). Runs most excellently. Steve
Re: [CentOS] home directory server performance issues
On Wed, 12 Dec 2012, Matt Garman wrote: Could you perhaps elaborate a bit on your scenario? In particular, how much memory and CPU cores do the servers have with the really high NFSD counts? Is there a rule of thumb for nfsd counts relative to the system specs? Or, like so many IO tuning situations, just a matter of test and see? My NFS servers that run 256 nfsd's have four cores (Xeon, 3.16 GHz) and 16 GB memory, with three incoming network segments on which the clients live (each of which is a dual bonded GbE link). I don't know of any rule of thumb; indeed I am using 256 nfsd's at the moment because that is the nature of the current workload. It might be different in a few months' time, especially as we add more clients. Indeed I started with 64 nfsd's and kept adding more until the NFS stalls essentially stopped. Steve
Re: [CentOS] home directory server performance issues
On Tue, 11 Dec 2012, Dan Young wrote: Just going to throw this out there. What is RPCNFSDCOUNT in /etc/sysconfig/nfs? This is in fact a very interesting question. The default value of RPCNFSDCOUNT (8) is in my opinion way too low for many kinds of NFS servers. My own setup has 7 NFS servers ranging from small ones (7 TB disk served) to larger ones (25 TB served), and there are about 1000 client cores making use of this. After spending some time looking at NFS performance problems, I discovered that the number of nfsd's had to be much higher to prevent stalls. On the largest servers I now use 256-320 nfsd's, and 64 nfsd's on the very smallest ones. Along with suitable adjustment of vm.dirty_ratio and vm.dirty_background_ratio, this makes a huge difference. Steve
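For reference, the knobs mentioned above live here (the numbers match those quoted elsewhere in this archive for the same servers; they are not universal recommendations, so tune to the workload):

# /etc/sysconfig/nfs
RPCNFSDCOUNT=256

# /etc/sysctl.conf
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1

followed by a service nfs restart and sysctl -p respectively.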
Re: [CentOS-virt] (no subject)
On Sat, 8 Dec 2012, SilverTip257 wrote: I have a WinXP Pro 32bit VM with virtio drivers and it runs just fine. I don't watch the load on it, so I don't know if its CPU goes idle. I'll have to take a peek at it next week. I have XP, 2003 and Win7 with virtio drivers, and the CPU does go idle on all of them when Windows is doing nothing. However, Windows is often not doing nothing; make sure that you have volume indexing, for example, turned off. Steve
[CentOS] mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [==================>..]  resync = 90.2% (880765600/976222720) finish=44.6min speed=35653K/sec

# cat /sys/block/md11/md/mismatch_cnt
1439285488

The mismatch count grows until the assembly is complete, and then remains at its highest value. A subsequent check resets it to zero (immediately) and everything is fine thereafter. The device is not in use by any other system component. I have reproduced this on several different systems; it always happens with CentOS 6.3 and never with CentOS 5.x and earlier (in 5.x, mismatch_cnt always stays at zero while assembling). I am using whole drives in this example, but it's the same if I use partitions instead. The count, size and type of drives appears to have no bearing. Perhaps just a curiosity, but I'm curious as to why it does this. Steve
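The check mentioned above is driven through sysfs; a sketch:

# echo check > /sys/block/md11/md/sync_action
# cat /sys/block/md11/md/mismatch_cnt

Writing "check" resets mismatch_cnt and starts a read-only scrub; when the scrub finishes, the count reflects what it actually found (zero, in the case described).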
Re: [CentOS] Copying a file to a usb stick
On Mon, 5 Nov 2012, m.r...@5-cent.us wrote: Twanny Azzopardi wrote: Hello list, I formatted a 64MB usb stick with this command 'mkfs.ext2 -b 1024 /dev/XXX1', to copy a file of 9230653440, but when it reached 7921295360, it gave input/output error and the file is not copied. How should this be done? What's the o/p of df? Assuming that is the file size in bytes (about 8.6 GiB), it's bigger than will fit on the stick. Steve
Re: [CentOS] Virtualization Options!
On Wed, 31 Oct 2012, Stephen Harris wrote: I'm still using KVM, but using it with kickstart rather than templates. I switched from Xen (and, before that, VMware) to KVM about eighteen months or so ago just to see what the fuss was about, and have not looked back. I have since deployed about 75-80 virtual machines with KVM (Linux and Windows XP, 2003, 7) and have had zero problems; everything worked perfectly first time out and has continued that way. Performance is better too. I install Linux on a KVM guest using the same PXE+kickstart procedures that are used for physical boxes. I can expound further on my KVM likes if anyone is interested. Steve
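A hedged sketch of a PXE+kickstart guest install with virt-install (all names, sizes and paths here are hypothetical):

# virt-install --name vm01 --ram 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/vm01.img,size=20 \
    --network bridge=br0 --pxe --os-variant rhel6

The guest then boots from the network and picks up the same kickstart that the physical boxes use.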
Re: [CentOS] Sending Email Via Telnet
On Tue, 16 Oct 2012, John Reddy wrote:

[root@mydomain ccc]# telnet 127.0.0.1 25
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
EHLO
MAIL FROM: m...@gmail.com
RCPT TO: m...@gmail.com
DATA
testing from server
.
^]

but I never get back to a command prompt. Please advise. (1) EHLO usually takes an argument (2) QUIT is missing -s
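A corrected session might look like this (the hostname argument to EHLO is hypothetical; the addresses are kept as elided in the original):

EHLO mydomain.example.com
MAIL FROM: m...@gmail.com
RCPT TO: m...@gmail.com
DATA
testing from server
.
QUIT

The lone dot terminates the message body, and QUIT tells the server to close the connection, which is what returns you to the shell prompt.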
Re: [CentOS] CentOS 6.3 on Dell Poweredge R815
On Tue, 16 Oct 2012, Surya Saha wrote: Any folks on this list who have installed CentOS 6.3 on the new Dell Poweredge R815 servers? How was your experience? Thanks I don't have any R815's, but I have run CentOS 6.3 on a variety of other Dell hardware (PE2900, R410, R710, etc) with no issues at all. Steve
Re: [CentOS] slapd 100% cpu
On Fri, 21 Sep 2012, Craig White wrote: as a server, OpenLDAP resources will use RAM based upon the number of entries but until you get upwards of 100,000 entries it shouldn't be of any concern and CPU usage should be extremely light save the brief moment of starting the daemon. As an example, I run three OpenLDAP servers that are accessed through a load balancer from 400+ clients. Total CPU usage of all three servers by slapd averages a bit less than 2 hours per day. Steve
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Steve Thompson wrote: On Mon, 10 Sep 2012, Steve Thompson wrote: On Mon, 10 Sep 2012, Dale Dellutri wrote: This looks like it should work for Client A, but maybe not for Client B (see below). So maybe it's a firewall problem (iptables chain FORWARD) on the host? Let me expand on this. There is no issue with a client on net1 communicating with a client on net2; the host passes packets from one subnet to the other as it should. The only issue is when the client is a virtual machine on the host. For those following along at home, the solution to this turned out to be related to the change in the function of the net.ipv4.conf.default.rp_filter parameter in the CentOS 6 kernels; it had nothing to do with KVM. Changing the value of rp_filter from 1 to 2 resolved all issues. Steve
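For reference, a sketch of the change that fixed it (add to /etc/sysctl.conf and load with sysctl -p):

net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2

A value of 2 selects loose reverse-path filtering instead of strict; whether you need the default entry, the all entry, or a per-interface one depends on when the interfaces are created relative to boot.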
[CentOS] Basic KVM networking question
A CentOS 6.3 box (host) runs several KVM virtual machines, each of which has two interfaces attached to the two bridges br1 and br2 (and each thus has two IP's; one on 192.168.0.0/22 and one on 192.168.4.0/22); net.ipv4.ip_forward on the host is 1. Simplified diagram:

                     host
                  +---------+
                  |         |      net1 = 192.168.0.0/22
                  |         |      net2 = 192.168.4.0/22
    net1 ---------+ br1 br2 +--------- net2
      |           |         |            |
      |           |         |            |
   Client A       +---------+        Client B
              (hosts KVM1, KVM2, etc)

Each client uses the bridge's IP address on the same side as default gateway. Client A can successfully ping or ssh (for example) to a KVM machine by IP address by using the KVM machine's net1 IP address. Client B can likewise communicate using the KVM machine's net2 IP address. However, neither client can communicate by using the address on the opposing segment (eg, Client A using KVM1_net2_IP); I can see from tcpdump that the packets are received by the virtual machine but no reply is ever made. Any clue? Steve
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Dale Dellutri wrote: Routing problem? Not that I can see, but here is the info (omitting interfaces that are not up). I included only one KVM since the problem is common to the others, and they are all set up the same way.

On the host:

3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:2b:2b:47:e8:7d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::862b:2bff:fe47:e87d/64 scope link
       valid_lft forever preferred_lft forever
4: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:1b:21:6f:2b:4c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fe6f:2b4c/64 scope link
       valid_lft forever preferred_lft forever
7: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 84:2b:2b:47:e8:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.2/22 brd 192.168.7.255 scope global br1
    inet6 fe80::862b:2bff:fe47:e87d/64 scope link
       valid_lft forever preferred_lft forever
8: br2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:1b:21:6f:2b:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/22 brd 192.168.3.255 scope global br2
    inet6 fe80::21b:21ff:fe6f:2b4c/64 scope link
       valid_lft forever preferred_lft forever
10: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:9d:ad:f7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
11: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:9d:ad:f7 brd ff:ff:ff:ff:ff:ff
12: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:13:73:28 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe13:7328/64 scope link
       valid_lft forever preferred_lft forever
13: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:12:99:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe12:99dd/64 scope link
       valid_lft forever preferred_lft forever
14: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:72:f5:33 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe72:f533/64 scope link
       valid_lft forever preferred_lft forever

192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
192.168.4.0/22 dev br1  proto kernel  scope link  src 192.168.4.2
192.168.0.0/22 dev br2  proto kernel  scope link  src 192.168.0.2
default via 192.168.0.1 dev br2

On KVM1:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:13:73:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.253/22 brd 192.168.3.255 scope global eth0
    inet6 fe80::5054:ff:fe13:7328/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:12:99:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.253/22 brd 192.168.7.255 scope global eth1
    inet6 fe80::5054:ff:fe12:99dd/64 scope link
       valid_lft forever preferred_lft forever

192.168.4.0/22 dev eth1  proto kernel  scope link  src 192.168.7.253
192.168.0.0/22 dev eth0  proto kernel  scope link  src 192.168.3.253
default via 192.168.0.1 dev eth0

On client A:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:14:22:27:9b:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.172/22 brd 192.168.3.255 scope global eth0
    inet6 fe80::214:22ff:fe27:9b51/64 scope link
       valid_lft forever preferred_lft forever

192.168.0.0/22 dev eth0  proto kernel  scope link  src 192.168.0.172
default via 192.168.0.2 dev eth0

On client B:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:19:b9:c7:23:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.241/22 brd 192.168.7.255 scope global eth0
    inet6 fe80::219:b9ff:fec7:23ad/64 scope link
       valid_lft forever preferred_lft forever

192.168.4.0/22 dev eth0  proto kernel  scope link  src 192.168.5.241
default via 192.168.4.1 dev eth0

-Steve
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Dale Dellutri wrote: This looks like it should work for Client A, but maybe not for Client B (see below). So maybe it's a firewall problem (iptables chain FORWARD) on the host? Client B's default route is 192.168.4.1. This address is not on the host. Did you mean to use .2? If not, is .1 aware of the routing to the 192.168.0.0/22 network? Actually I have two similar setups, one with .1 and one with .2, so I mixed up the examples here. But in reality it is set up correctly. And it doesn't work for either client :-( Steve
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Steve Thompson wrote: On Mon, 10 Sep 2012, Dale Dellutri wrote: This looks like it should work for Client A, but maybe not for Client B (see below). So maybe it's a firewall problem (iptables chain FORWARD) on the host? Client B's default route is 192.168.4.1. This address is not on the host. Did you mean to use .2? If not, is .1 aware of the routing to the 192.168.0.0/22 network? Actually I have two similar setups, one with .1 and one with .2, so I mixed up the examples here. But in reality it is set up correctly. And it doesn't work for either client :-( Let me expand on this. There is no issue with a client on net1 communicating with a client on net2; the host passes packets from one subnet to the other as it should. The only issue is when the client is a virtual machine on the host. Steve
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Les Mikesell wrote: What does that mean? A bridge shouldn't have an address and a gateway needs to be the IP of something capable of routing. Sure it has an address:

# ip addr show br1
7: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    ...
    inet 192.168.4.2/22 brd 192.168.7.255 scope global br1
    inet6 fe80::862b:2bff:fe47:e87d/64 scope link
       valid_lft forever preferred_lft forever

-s
Re: [CentOS] Basic KVM networking question
On Mon, 10 Sep 2012, Les Mikesell wrote: Do the things you are trying to reach have a route back through the KVM host? Yep.
[CentOS] CentOS with sssd and samba4
If you are authenticating your CentOS 6 systems using sssd from a samba4 DC using GSSAPI, I'd like to hear from you. I have been able to get it to work using only cleartext passwords in sssd.conf, and of course I'd prefer to use GSSAPI. Steve
Re: [CentOS] CentOS with sssd and samba4
On Sat, 18 Aug 2012, Steve Thompson wrote: If you are authenticating your CentOS 6 systems using sssd from a samba4 DC using GSSAPI, I'd like to hear from you. I have been able to get it to work using only cleartext passwords in sssd.conf, and of course I'd prefer to use GSSAPI. I found the solution. One cannot use GSSAPI and have start_tls on at the same time. Steve
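In sssd.conf terms, the working combination looks like this fragment (the option names are the same ones used in the full config posted elsewhere in this archive):

ldap_sasl_mech = GSSAPI
ldap_id_use_start_tls = false

i.e. with GSSAPI providing integrity and confidentiality, start_tls has to stay off.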
Re: [CentOS] stupid bash question
On Wed, 15 Aug 2012, Craig White wrote: the relevant snippet is...

NAME=*.mov
cd $IN
if test -n $(find . -maxdepth 1 -name $NAME -print -quit)

and if there is one file in this directory - ie test.mov, this works fine but if there are two (or more) files in this directory - test.mov, test2.mov then I get an error... find: paths must precede expression The substitution of $NAME is expanding the wild card, giving the single -name more than one argument. You probably want something like: NAME=\*.mov Steve
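A fuller hedged rewrite that quotes at every expansion point (variable names taken from the original snippet):

NAME='*.mov'
cd "$IN"
if test -n "$(find . -maxdepth 1 -name "$NAME" -print -quit)"; then
    echo "found at least one .mov file"
fi

Bash does not glob-expand the right-hand side of an assignment, so the quoting that actually matters is at the point of use: -name "$NAME" keeps find seeing a single pattern, and quoting the command substitution keeps test -n well-formed even when nothing is found.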
Re: [CentOS] Universal server hardware platform - which to choose?
On Tue, 26 Jun 2012, m.r...@5-cent.us wrote: We've had a number of servers fail, and it *seems* to be related to the motherboard. I too have had bad experiences with SuperMicro motherboards; never had one last more than three years.
Re: [CentOS] Postfix will not start due to permission errors on CentOS 6.2 x86
On Wed, 16 May 2012, Gilbert Sebenste wrote: I'm stumped. Does anyone have any ideas what the issue could be? Yes, you overwrote /etc/passwd and who knows what else, so it is no wonder that it is completely broken. Start again. Steve
Re: [CentOS] XEN or KVM - performance/stability/security?
On Sun, 6 May 2012, Pasi Kärkkäinen wrote: with fork performance I assume you're comparing Xen PV to KVM ? Yes, PV has disadvantage (per design) for that workload, since the hypervisor needs to check and verify each new process page table, and that has some performance hit. For good fork performance you can use Xen HVM VMs, which will perform well for that workload, and won't have the mentioned performance hit. I used both PV and HVM VMs. I don't have the details to hand at the moment, but KVM was superior to both. PV drivers where applicable. I have been running KVM for about 15 months now, with 30 VM's on one host and 38 VM's on another. It has been solid; no problems, but unfortunately I had problems with Xen. Steve
Re: [CentOS] XEN or KVM - performance/stability/security?
On Mon, 23 Apr 2012, Peter Peltonen wrote: I've been quite happy with Xen under CentOS5. For CentOS6 the situation is a bit more problematic, as RH switched to KVM and left Xen behind. I used Xen for about four or five years before switching to KVM. I like KVM better in every way, and for my fork-heavy workloads, the performance is a lot better than Xen. It is also much easier to use and is in my experience more stable. Steve
Re: [CentOS] Help needed with NFS issue
On Thu, 19 Apr 2012, Giovanni Tirloni wrote: Did you run this command during the hang or is it constantly returning you that? It is returning the time out only during the hang; the rest of the time it works normally. If the latter, are you blocking UDP on either the server or the client? No blocking. If you don't specify transport protocol, rpcinfo will use whatever is defined in the /etc/netconfig database and that's usually UDP. Using UDP or TCP makes no difference. rpcinfo -{u,t} host nfs both give a timeout during the hang, and work normally during other times. - Is it happening at the exact same minute (eg. 2:15, 2:45, 3:15, 3:45)? This might help you to identify a script/program that follows that schedule. It is not related to any script that I can find. It is not happening at _exactly_ the same time all the time, although it is similar within a few minutes. - Is there any configuration different between this server and the others? /etc/system, root crontab, etc. No differences that I can find. - When you say everything else BUT NFS is working fine, are pings answered properly without increased latency during the hang? Yes. I can even run an iperf server on the host during the hang, and from a client I run iperf -c and get normal performance. - What about other services? Can you set up a monitoring script connecting to some other service (eg. ftp, ls, exit or ssh) and reporting the total run time? No other service appears to be impacted at all. - Can you set up a monitoring script running rpcinfo on localhost to make sure both local and remote communications hang? Yes, can do. -Steve
Re: [CentOS] Help needed with NFS issue
All, Many thanks to everyone who commented on this issue. I believe that I have solved it. It turns out that the number of nfsd's that I was running (32) was way too low. I observed that adding more nfsd's while NFS was hung always made the hang go away immediately. Now I am in the tuning stage, adding more nfsd's until there are no more hangs. I am up to 172 of them now, and the hang frequency has decreased by about a factor of six. Evidently my workload changed when I wasn't looking closely enough. I'll probably end up with about 256 nfsd's. For the sake of completeness, here's how to change the number of nfsd's on the fly:

echo 172 > /proc/fs/nfsd/threads

and, of course, edit /etc/sysconfig/nfs to change RPCNFSDCOUNT to set the value for the next boot. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
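One way to judge whether the thread count is still too low (assuming the kernel exports the usual nfsd statistics; the exact format of the th line varies by kernel version) is to watch:

    # The "th" line shows the thread count and how often all threads were
    # in use at once; if the busy counters keep climbing, add more threads.
    grep '^th' /proc/net/rpc/nfsd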
Re: [CentOS] Help needed with NFS issue
Interesting. It looks like some kind of RPC failure. During the hang, I cannot contact the nfs service via RPC:

# rpcinfo -t server nfs
rpcinfo: RPC: Timed out
program 100003 version 0 is not available

even though it is supposedly available:

# rpcinfo -p server
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   1007  status
    100024    1   tcp   1010  status
    100021    1   udp  35077  nlockmgr
    100021    3   udp  35077  nlockmgr
    100021    4   udp  35077  nlockmgr
    100021    1   tcp  56622  nlockmgr
    100021    3   tcp  56622  nlockmgr
    100021    4   tcp  56622  nlockmgr
    100011    1   udp   1009  rquotad
    100011    2   udp   1009  rquotad
    100011    1   tcp   1012  rquotad
    100011    2   tcp   1012  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp    605  mountd
    100005    1   tcp    608  mountd
    100005    2   udp    605  mountd
    100005    2   tcp    608  mountd
    100005    3   udp    605  mountd
    100005    3   tcp    608  mountd

However, I can connect to the service via telnet:

# telnet server nfs
Trying ipaddr...
Connected to server (ipaddr).
Escape character is '^]'.

so the service is running but internally borked in some way. Steve -- Steve Thompson E-mail: smt AT vgersoft DOT com Voyager Software LLC Web: http://www DOT vgersoft DOT com 39 Smugglers Path VSW Support: support AT vgersoft DOT com Ithaca, NY 14850 "186,282 miles per second: it's not just a good idea, it's the law" ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Help needed with NFS issue
On Wed, 18 Apr 2012, Ross Walker wrote: Is iptables disabled? If not, problem with rules or RPC helper? Yes, iptables is not in use. What about selinux? Disabled. -Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this the hang. It has been doing this for four days now. There are no log messages of any kind pertaining to this. The other three servers are fine, although they are less loaded. Between hangs, performance is excellent. Load is more or less constant, not peaky. NFS clients do get the usual "not responding, still trying" message during a hang. There are no cron or other jobs that launch every half an hour. All hardware on the affected server seems to be good. Disk volumes being served are RAID-5 sets with write-back cache enabled (BBU is good). RAID controller logs are free of errors. The NFS servers use dual bonded gigabit links in balance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters:

vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_no_metrics_save = 1

The {r,w}mem_{max,default} values are twice what they were previously; changing them had no effect. The number of dirty pages is nowhere near the dirty_ratio when the hangs occur; there may be only 50MB of dirty memory. A local process on the NFS server is reading from disk at around 40-50 MB/sec on average; this continues unaffected during the hang, as do all other network services on the host (eg an LDAP server). During the hang the server seems to be quite snappy in all respects apart from NFS. The network itself is fine as far as I can tell, and all NFS-related processes on the server are intact. NFS mounts on clients are made with UDP or TCP with no difference in results. A client mount cannot be completed (it times out), and access to an already-mounted NFS volume stalls during the hang (both automounted and manual mounts). NFS block size is 32768 for both r and w; using 16384 makes no difference. Tcpdump shows no NFS packets exchanged between client and server during a hang. I have not rebooted the affected server yet, but I have restarted NFS with no change. Help! I cannot figure out what is wrong, and I cannot find anything amiss. I'm running out of something but I don't know what it is (except perhaps brains). Hints, please! Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
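For reference, the tcpdump check mentioned above amounts to something like this (illustrative; the interface name and client address are placeholders):

    # Watch for NFS traffic to one client during a hang; silence here,
    # while pings still answer, points at the NFS service itself.
    tcpdump -ni eth0 host client-ip and port 2049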
Re: [CentOS] Help needed with NFS issue
On Tue, 17 Apr 2012, Ross Walker wrote: Take a look at the NIC and switch port flow control status during an outage; they may be paused due to switch load. Is there anything else on the network switches that might flood them every half hour for a two minute duration? Unfortunately not. All of the NFS servers are on the same switch (an HP ProCurve) and only the one is having issues. The hang is always the same length, too. Nice try though! Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Help needed with NFS issue
On Tue, 17 Apr 2012, Ross Walker wrote: Let me also add that constant spanning tree convergence can cause this too. Make sure your choice of protocol and priority suit your topology and equipment. Gives me an idea! The switch is under control of different people. I did have a new VLAN created for an unrelated purpose two days before this all started. Hmmm... Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Help needed with NFS issue
On Wed, 18 Apr 2012, Fajar Priyanto wrote: Also a shot in the dark from me: there may be some IP conflict on the network. Yes, I thought of that one too. I am in control of all IPs on the network, so I am sure that nothing changed around the time the trouble started. I checked for it anyway :-( Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] CentOS 5 - problem with kernel/process: task blocked for more than 120 seconds.
On Wed, 11 Apr 2012, m.r...@5-cent.us wrote: Been seeing that on 6.2, also, though I've not noticed them locking up (the ones that have were in a cluster, and someone could have hosed memory). It got less with the latest kernel, but I'm still seeing them occasionally. This is most likely a problem due to data arriving too quickly for the VM subsystem to keep up with flushing dirty pages to disk; the kernel switches to synchronous mode when the dirty_ratio percentage of memory pages is used. You might try upping your dirty_ratio, or reducing it drastically. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
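The knob itself is adjusted like this (the value shown is illustrative, not a recommendation):

    # Inspect the current setting, then lower it so flushing starts earlier
    # and the synchronous-mode cliff is reached less often:
    sysctl vm.dirty_ratio
    sysctl -w vm.dirty_ratio=10
    # persist across reboots by adding "vm.dirty_ratio = 10" to /etc/sysctl.conf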
Re: [CentOS] CentOS 5 - problem with kernel/process: task blocked for more than 120 seconds.
On Wed, 11 Apr 2012, m.r...@5-cent.us wrote: Steve Thompson wrote: On Wed, 11 Apr 2012, m.r...@5-cent.us wrote: Been seeing that on 6.2, also, though I've not noticed them locking up (the ones that have were in a cluster, and someone could have hosed memory). It got less with the latest kernel, but I'm still seeing them occasionally. This is most likely a problem due to data arriving too quickly for the VM subsystem to flush dirty pages to disk to keep up; the kernel switches to synchronous mode when the dirty_ratio percentage of memory pages is used. You might try upping your dirty_ratio, or reducing it drastically. But mine are *not* VMs. VM = Virtual Memory. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Interface starts when it shouldn't
In CentOS 5.7 and earlier versions, an alias interface is defined via an ifcfg-<interface>:<alias> file which contains ONBOOT=no. The ONBOOT setting appears to be ignored, and the interface always starts when the system boots or when networking is restarted. This is a serious bug that seems to date back many years (I found references from 2005). Does anyone know why it hasn't been fixed, or whether there is indeed a fix? Steve -- Steve Thompson E-mail: smt AT vgersoft DOT com Voyager Software LLC Web: http://www DOT vgersoft DOT com 39 Smugglers Path VSW Support: support AT vgersoft DOT com Ithaca, NY 14850 "186,282 miles per second: it's not just a good idea, it's the law" ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
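For reference, a minimal alias definition of the kind described (device and addresses are placeholders) looks like:

    # /etc/sysconfig/network-scripts/ifcfg-eth0:0
    DEVICE=eth0:0
    IPADDR=192.0.2.2
    NETMASK=255.255.255.0
    ONBOOT=no    # the setting being ignored: the alias comes up with eth0 anyway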
Re: [CentOS] Can anyone talk infrastructure with me?
On Wed, 25 Jan 2012, Jason T. Slack-Moehrle wrote: Can you explain the calculation to determine that 300gb is 2mbps? What if it is 300gb a day? Comcast has told me in the last two days I went through 127gb 127 Gb in 2 days is 0.73 Mbps. Did you mean 127 GB? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
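The arithmetic, for anyone checking (a bc one-liner; it assumes gigabits, hence the follow-up question):

    # 127 Gb moved in 2 days, as an average rate in Mbps:
    echo "scale=2; 127 * 10^9 / (2 * 86400 * 10^6)" | bc
    # => .73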
Re: [CentOS] Dedicated Firewall/Router
On Mon, 16 Jan 2012, Jason T. Slack-Moehrle wrote: I want to build a dedicated firewall/router as I am launching a NPO and I can host this in my garage. (Comcast offered me a 100 x 20 circuit for $99/mo with 5 statics) I use two Dell R310's in a master/backup setup with shorewall and keepalived. -s ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
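A minimal sketch of the keepalived half of such a pair (the interface, router id, priority, and address are assumptions, not the actual configuration):

    # /etc/keepalived/keepalived.conf on the master; the backup box uses
    # state BACKUP and a lower priority, and takes over the VIP on failover.
    vrrp_instance WAN {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24
        }
    }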
Re: [CentOS] LVM question
On Mon, 16 Jan 2012, Jonathan Vomacka wrote: It is to my understanding that the /boot partition should never be placed on LVM and should be a physical partition on the hard drives (or on top of a RAID array). Is this an accurate statement? /boot on LVM is quite safe as long as it is below 2GB. Hopefully it is. Also please advise if the SWAP filesystem is safe to be placed under LVM, or if this should be a hard partition / hard limit as well. Swap on LVM is quite safe; in fact it is desired. -s ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] NFS: hostname vs IP address
CentOS 5.7 x86_64. Lots of hosts NFS mounting a file system. All are configured identically (same LDAP servers, same DNS, same autofs config, same patches, etc). On some of them I see an NFS mount displaying a host name: % df -P | grep smt hostname:/mnt/foo 1651345888 264620688 1386725200 17% /fs/home/smt and on some just the IP address: % df -P | grep smt aa.bb.cc.dd:/mnt/foo 1651345888 264620688 1386725200 17% /fs/home/smt I cannot see any difference between the configuration of these two clients. Everything works. Anyone have a clue as to the source of this difference? -steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Changes at Red Hat confouding CentOS
On Tue, 15 Nov 2011, Timothy Murphy wrote: Reindl Harald wrote: But isn't everyone today using laptops for everyday use? this is what some braindead developers seem to think, but it is not true, nor will it ever be true! why in the world should i use a laptop in my office if i can have a Core i7 quad combined with much more and better hardware than is ever possible in a laptop? Don't you think you are in a very small minority, like 1% of the world? Not by a long, long way. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Broadcom NetXtremeII and CentOS 6
On Mon, 10 Oct 2011, stw...@comcast.net wrote: Has anyone managed to get CentOS 6 x86_64 running on a server with a Broadcom NetXtremeII BCM5709 network adaptor? If so have you seen any issues with the network freezing? I haven't tried CentOS 6 yet, but I have had this problem with CentOS 5. Try this in /etc/modprobe.conf:

options bnx2 disable_msi=1

and reboot. This fixed it on all of the systems I have tried it on. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
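One way to confirm the option took effect after the reboot (a hedged check: interrupt naming varies by kernel) is to look at how the NIC's interrupt is registered:

    # With MSI disabled, the eth0 line should show a plain IO-APIC
    # interrupt rather than PCI-MSI:
    grep eth0 /proc/interrupts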
Re: [CentOS] Wierd cursor jump when I type letter y
On Sun, 10 Jul 2011, Ljubomir Ljubojevic wrote: In large number of times when I type letter y, like in you my typing cursor jumps 2-3 rows up or 1-2 words to the left. The only times I have ever seen anything like this was due to a bad keyboard or a bad KVM switch. Does it behave the same way in a non-X session or at BIOS level? Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] mounting a CentOS 5.5-based NFS partitions from a Mac OS X machine
On Wed, 29 Jun 2011, Boris Epstein wrote: In short - we have two CentOS-based NFS servers. They work fine with a variety of Linux machines, but when I try to mount them from a Mac OS X 10.5 or 10.6 machine I get nowhere. I.e., the Mac does not complain, yet reads nothing over NFS. I have CentOS 5.5 NFS servers and a load of Macs, both Leopard and Snow Leopard. I too had a lot of NFS trouble, especially with multi-homed NFS servers, until I switched from NFS over UDP to NFS over TCP (for the Macs only), and now everything works well. I even had trouble with NFS over UDP to Mac clients from an OS X NFS server. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] mounting a CentOS 5.5-based NFS partitions from a Mac OS X machine
On Wed, 29 Jun 2011, Boris Epstein wrote: Thanks. I am only doing NFS over TCP and still no dice. Any special options you use either on the client or on the server side? As Tom mentioned, you need the "insecure" export option on the NFS server side; otherwise I don't do anything special on the client. I'm sourcing the automount maps through LDAP. Try mounting via IP address rather than NFS server name; I've had some issues with this on Mac clients. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
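For reference, an exports entry with that option (the path and network are placeholders) looks like:

    # /etc/exports -- "insecure" permits client source ports above 1023,
    # which Mac OS X NFS clients use by default:
    /mnt/foo  192.168.1.0/24(rw,sync,insecure)

    # re-export without restarting NFS:
    exportfs -ra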
Re: [CentOS] Strange issue's with LDAP and too many open files
On Fri, 24 Jun 2011, Sebastiaan Koetsier | proserve wrote: I've been growing a large headache over this one. I have a number of LDAP servers behind load balancing; for the last two days I constantly get the error: Too many open files. Although I'm not a newbie with Linux, I'm unable to resolve this. I have taken the following steps: You need to specify nofile for ldap in the /etc/sysconfig/ldap file. For example, I have:

ULIMIT_SETTINGS=-n 16384

Setting it for the ldap user in /etc/security/limits.conf will not have any effect, since it is root that starts the ldap server (so the setting should be for root, not ldap). Steve -- Steve Thompson E-mail: smt AT vgersoft DOT com Voyager Software LLC Web: http://www DOT vgersoft DOT com 39 Smugglers Path VSW Support: support AT vgersoft DOT com Ithaca, NY 14850 "186,282 miles per second: it's not just a good idea, it's the law" ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Strange issue's with LDAP and too many open files
On Fri, 24 Jun 2011, Sebastiaan Koetsier | proserve wrote: I've changed the settings and I'm waiting until the sessions to ldap grow; I will keep you posted on this. You can run:

cat /proc/<pid-of-slapd>/limits

to check what the actual open-file limit is in the running server. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
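Assuming a single slapd process, that check collapses to a one-liner:

    # "Max open files" should now reflect the value from /etc/sysconfig/ldap:
    grep 'open files' /proc/$(pidof slapd)/limits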
Re: [CentOS-virt] Recommendations, please
On Thu, 16 Jun 2011, Eric Shubert wrote: On 06/16/2011 08:07 AM, Steve Campbell wrote: What might most of you recommend for the type of virtualization software I use? I seem to recall that xen might not be the best choice due to its lack of development. I could be wrong, though. For the present time, all of the VMs will be CentOS based. Wait for CentOS 6 (scheduled to be available early next week) and use KVM. I have KVM running on CentOS 5.6, and have had no issues at all. In fact, it has been excellent all round. Steve ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS] Config file semantics.
On Wed, 15 Jun 2011, m.r...@5-cent.us wrote: Mike A. Harris wrote: Personally, I find that indenting config files by 3 spaces has a lot of advantages to indenting them by 4 spaces although conventional wisdom might suggest otherwise. Who's with me on this? Indentation wars. I don't *think* there was a usenet newsgroup for that It's four, unless I'm holding a beer. Then it's 2. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Config file semantics.
On Wed, 15 Jun 2011, Keith Keller wrote: On Wed, Jun 15, 2011 at 02:23:29PM -0700, Cody Jackson wrote: I prefer two or four, usually two. Three is extremely disturbing to me because it is not a multiple of two I am constantly frustrated by being limited to a whole number of spaces. What if I want pi spaces? Or e*i? You can get e^(i*pi) spaces with the BS key. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Possible to use multiple disk to bypass I/O wait?
On Thu, 9 Jun 2011, Emmanuel Noobadmin wrote: I'm trying to resolve an I/O problem on a CentOS 5.6 server. The process basically scans through Maildirs, checking for space usage and quota. Because there are a hundred-odd user folders and several tens of thousands of small files, this sends the I/O wait % way high. The server hits a very high load level and stops responding to other requests until the crawl is done. If the server is reduced to a crawl, it's possible that you are hitting the dirty_ratio limit due to writes, and the server has entered synchronous I/O mode. As others have mentioned, setting noatime could have a significant effect, especially if there are many files and the server doesn't have much memory. You can try increasing dirty_ratio to see if it has an effect, eg:

# sysctl vm.dirty_ratio
# sysctl -w vm.dirty_ratio=50

Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
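The noatime change amounts to something like this (device and mount point are placeholders):

    # /etc/fstab -- noatime stops every Maildir crawl from generating an
    # inode write for each file it reads:
    /dev/vg0/mail  /var/mail  ext3  defaults,noatime  0 2

    # or apply to a live filesystem without a reboot:
    mount -o remount,noatime /var/mail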
Re: [CentOS] OT: Why VM?
On Fri, 27 May 2011, Digimer wrote: Live migration between physical hosts. Also, ease of recovery in the event of a failure. Can move the VM to entirely new hardware when the old hardware is no longer powerful enough... etc. And if you have licensed software that ties its network license keys to a specific MAC address, you no longer have to tie the license server to a specific physical box. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] OT: Why VM?
On Fri, 27 May 2011, Devin Reade wrote: Thank god for test environments. And backups. Backups? Que? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] LDAPs causing System Message Bus to hang when there's no network
On Thu, 28 Apr 2011, Benjamin Hackl wrote: On Thu, 28 Apr 2011 16:21:58 +0200 Mattias Geniar matt...@nucleus.be wrote: Here's my /etc/ldap.conf file: Did you include nss_initgroups_ignoreusers in your /etc/ldap.conf? nss_initgroups_ignoreusers root,ldap This works: nss_initgroups_ignoreusers root,ldap,named,avahi,haldaemon,dbus -Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Centos 5.6 and httpd
On Sun, 10 Apr 2011, Gregory P. Ennis wrote: A problem with httpd on 5.6 [...]

chown root.apache /etc/httpd/alias/*.db
chmod 0640 /etc/httpd/alias/*.db

I had to make the same changes. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Virtualization platform choice
On Mon, 28 Mar 2011, Pasi Kärkkäinen wrote:

On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote: First. With Xen I was never able to start more than 30 guests at one time with any success; the 31st guest always failed to boot or crashed during booting, no matter which guest I chose as the 31st. With KVM I chose to add more guests to see if it could be done, with the result that I now have 36 guests running simultaneously.

Hmm.. I think I've seen that earlier. I *think* it was some trivial thing to fix, like increasing the number of available loop devices or so.

I tried that, and other things, but was never able to make it work. I was using max_loop=64 in the end, but even with a larger number I couldn't start more than 30 guests. Number 31 would fail to boot, and would boot successfully if I shut down, say, #17. Then #17 would fail to boot, and so on.

Hmm.. Windows 7 might be too new for Xen 3.1 in el5, so for win7 upgrading to xen 3.4 or 4.x helps. (gitco.de has newer xen rpms for el5 if you're ok with thirdparty rpms).

Point taken; I realize this.

Third. I was never able to successfully complete a PXE-based installation under Xen. No problems with KVM.

That's weird. I do that often. What was the problem?

I use the DHCP server (on the host) to supply all address and name information, and this works without any issues. In the PXE case, I was never able to get the guest to communicate with the server for long enough to fully load pxelinux.0, in spite of the bridge setup. I have no idea why; it's not exactly rocket science either.

Can you post more info about the benchmark? How many vcpus did the VMs have? How much memory? Were the VMs 32b or 64b?

The benchmark is just a make of a large package of my own implementation. A top-level makefile drives a series of makes of a set of 33 sub-packages. It is a compilation of about 1100 C and C++ source files, including generation of dependencies and binaries, and running a set of perl scripts (some of which generate some of the C source). All of the sources and target directories were NFS volumes; only the local O/S disks were virtualized. I used 1 vcpu per guest and either 512MB or 1GB of memory. The results I showed were for 64-bit guests with 512MB memory, but they were qualitatively the same for 32-bit guests. Increasing memory from 512MB to 1GB made no significant difference to the timings. Some areas of the build are serial by nature; the result of 14:38 for KVM w/virtio changed to 9:52 with vcpu=2 and make -j2. The 64-bit HVM guests w/o PV were quite a bit faster than the 32-bit HVM guests, as expected. I also had some Fedora diskless guests (no PV) using an NFS root, in which situation the 32-bit guests were faster than the 64-bit guests (and both were faster than the HVM guests w/o PV). These used kernels that I built myself. I did not compare Xen vs KVM with vcpu > 1.

Did you try Xen HVM with PV drivers?

Yes, but I don't have the exact timings to hand anymore. They were faster than the non-PV case but still slower than KVM w/virtio.

Fifth: I love being able to run top/iostat/etc on the host and see just what the hardware is really up to, and to be able to overcommit memory.

xm top and iostat in dom0 work well for me :)

I personally don't care much for xm top, and it doesn't help anyway if you're not running as root or don't have sudo access, or if you'd like to read performance info for the whole shebang via /proc (as I do). Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Virtualization platform choice
On Sun, 27 Mar 2011, Jussi Hirvi wrote: KVM would be a natural way to go, I suppose, only it is too bad CentOS 6 will not be out in time for me - I guess KVM would be more mature in CentOS 6. I have been using Xen with much success for several years, now with two CentOS 5.5 x86_64 Dom0's, hosting 29 (mixed Linux and Windows) and 30 (all Windows) guests respectively, using only packages from the distro along with the GPLPV drivers on the Windows guests (so it's Xen 3.1, not the latest). A couple of weeks ago I decided (on the first of these hosts) to give KVM a look, since I was able to take the machine down for a while. All guests use LVM volumes, and were unchanged between Xen and KVM (modulo pv drivers). The host is a Dell PE2900 with 24 GB memory and E5345 processors (8 cores). Bridged mode networking. What follows is obviously specific to my environment, so YMMV. The short story is that I plan to keep using KVM. It has been absolutely solid and without any issues whatsoever, and performance is significantly better than Xen in all areas that I have measured (and also in the "feels good" benchmark). Migration from Xen to KVM was almost trivially simple.

The slightly longer story...

First. With Xen I was never able to start more than 30 guests at one time with any success; the 31st guest always failed to boot or crashed during booting, no matter which guest I chose as the 31st. With KVM I chose to add more guests to see if it could be done, with the result that I now have 36 guests running simultaneously.

Second. I was never able to keep a Windows 7 guest running under Xen for more than a few days at a time without a BSOD. I haven't seen a single crash under KVM.

Third. I was never able to successfully complete a PXE-based installation under Xen. No problems with KVM.

Fourth. My main work load consists of a series of builds of a package of about 1100 source files and about 500 KLOCs; all C and C++. Here are the elapsed times (min:sec) to build the package on a CentOS 5 guest (1 vcpu), each time with the guest being the only active guest (although the others were running). Sources come from NFS, and targets are written to NFS, with the host being the NFS server.

* Xen HVM guest (no pv drivers): 29:30
* KVM guest, no virtio drivers: 23:52
* KVM guest, with virtio: 14:38

Fifth: I love being able to run top/iostat/etc on the host and see just what the hardware is really up to, and to be able to overcommit memory. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Virtualization platform choice
On Sun, 27 Mar 2011, Nico Kadel-Garcia wrote: How did you get the PXE working? I already had a PXE server for physical hosts, so I just did a virt-install with the --pxe switch, and it worked first time. The MAC address was pre-defined and known to the DHCP server. I installed both Linux and Windows guests with PXE. And do you have widgets for setting up the necessary bridged networking? I edited the ifcfg-eth0 file on the host and added an ifcfg-br0, all by hand, and then rebooted. I didn't have to think about it again.

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=xx:xx:xx:xx:xx:xx
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=0

/etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
BROADCAST=braddr
IPADDR=ipaddr
NETMASK=netmask
NETWORK=network
ONBOOT=yes

For each guest, something like this was used:

<interface type='bridge'>
  <mac address='52:54:00:1d:58:cf'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>

Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
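Once the network is restarted, the result can be verified with standard bridge-utils (nothing specific to this setup):

    # br0 should list the physical NIC, plus one vnetN tap device per
    # running guest:
    brctl show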
Re: [CentOS] Server locking up everyday around 3:30 AM
PJ wrote: Mar 8 03:33:18 web1 kernel: INFO: task wget:13608 blocked for more than 120 seconds. Check the number of dirty pages:

grep Dirty /proc/meminfo

relative to the dirty_ratio setting:

cat /proc/sys/vm/dirty_ratio

to see if the system is going into synchronous flush mode around that time (especially if dirty_ratio is large and you have a lot of physical memory). This is what I usually see as the cause of the "blocked for more than" message. I've also found that it can be several minutes, and up to 20 minutes, before the system recovers (but recover it always does). -Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
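A small sketch that puts those two numbers side by side (note that dirty_ratio is applied against dirtyable memory, so treating it as a percentage of MemTotal is only an approximation):

    #!/bin/sh
    # Rough comparison of current dirty memory against the dirty_ratio
    # threshold, approximated as a percentage of MemTotal.
    dirty=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
    total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    ratio=$(cat /proc/sys/vm/dirty_ratio)
    echo "dirty: ${dirty} kB, threshold: ~$((total * ratio / 100)) kB (${ratio}%)"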
Re: [CentOS] OT - disk bays
On Thu, 3 Feb 2011, m.r...@5-cent.us wrote: Oh, that silliness. What I dislike are the PERC 700s, that will *only* accept Dell drives, not commodity ones. My understanding is that Dell has reversed this policy via a firmware update after a flood of complaints. I don't have any 700's to check this, but I believe they will now accept non-Dell drives. Certainly the Perc 5's and 6's do. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] parted usage
On Tue, 11 Jan 2011, aurfal...@gmail.com wrote: I'm attempting to use parted to create a partition on a 28TB volume which consists of 16x2TB drives configured as RAID 5 + spare, so the total unformatted size is 28TB to the OS. I don't know the answer to your parted question, but let me be the first of many to express horror at the idea of using RAID-5 for such a large volume with so many spindles, even with a hot spare. The rebuild times are probably going to be days, and the chance of a second spindle failure in that time is high enough to make it dangerous. Use RAID-6 at least. Steve ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
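A back-of-envelope number behind that warning (assuming a consumer-class unrecoverable-read-error rate of one per 10^14 bits, i.e. roughly one per 12.5 TB read):

    # A RAID-5 rebuild must read all 15 surviving 2 TB drives, about 30 TB:
    echo "scale=2; (15 * 2) / 12.5" | bc
    # => 2.40 expected unrecoverable read errors during the rebuild, any one
    # of which fails a RAID-5 rebuild; RAID-6 can correct them.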