[CentOS] Data migration from one server to another- Centos 7
I want to move my OS, services (like the Apache configuration), and application (web server) files and folders to another brand-new server. Both servers are running CentOS 7, and I want to clone everything. My questions:

1. What is the easiest way to do this?
2. Do I need to partition the hard drive of the new server exactly like the old one?
3. Are there any changes to make on the new server after the full migration?
4. Will root and other user credentials stay the same?

Any other suggestions appreciated.

--
Cheers,
Matt Zand
Cell: 202-420-9192  Work: 240-200-6131
High School Technology Services <https://myhsts.org/>
DC Web Makers <https://dcwebmakers.com/>
Coding Bootcamps <https://coding-bootcamps.com/>

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Convert from LVM
> On 08.06.2018 19:11, Matt wrote:
>> I have a CentOS 7 install using ext4 on LVM. It's running as a VM
>> inside KVM. The issue I have run into is that fstrim does not work due
>> to the LVM. Without fstrim, snapshots have gotten huge. Is there a way
>> to convert it from LVM to non-LVM without a complete reinstall?
>
> Please consider enabling trim over LVM. You just have to set
>
>     issue_discards = 1
>
> in /etc/lvm/lvm.conf.

Does that still work if it is not a thin LVM?
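A caveat worth noting, if I read the lvm.conf documentation correctly (this is an addition, not something stated in the thread): issue_discards only controls whether LVM itself sends discards to the underlying device when a logical volume is removed or reduced; it does not by itself make fstrim on a mounted filesystem work. For fstrim on ext4-on-LVM inside KVM, the whole stack has to pass discards through, which on the host side typically means a virtual disk with discard support (e.g. virtio-scsi or `discard=unmap`). Thin LVM is not required. The setting lives in the devices section:

```
# /etc/lvm/lvm.conf
devices {
    # Send discards to the PV when an LV is removed or reduced.
    # Does not, on its own, enable fstrim on mounted filesystems.
    issue_discards = 1
}
```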
[CentOS] Convert from LVM
I have a CentOS 7 install using ext4 on LVM. It's running as a VM inside KVM. The issue I have run into is that fstrim does not work due to the LVM. Without fstrim, snapshots have gotten huge. Is there a way to convert it from LVM to non-LVM without a complete reinstall?
[CentOS] Centos 7 and btrfs
I am setting up a new test server, doing a fresh install from CD onto a couple of 4 TB drives. I would like to try btrfs in a RAID 1 configuration. Are there any how-tos on how to do that?
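One post-install route (a sketch; /dev/sdb and /dev/sdc are placeholder device names standing in for the two 4 TB drives) is to let mkfs.btrfs mirror both data and metadata across the disks. Since these commands are destructive, the block below only prints them for review rather than running them:

```shell
# Placeholder devices -- point these at the real 4 TB drives.
DEV1=/dev/sdb
DEV2=/dev/sdc
# -m raid1 mirrors metadata, -d raid1 mirrors data across both drives.
cat <<EOF
mkfs.btrfs -f -m raid1 -d raid1 $DEV1 $DEV2
mkdir -p /mnt/data
mount $DEV1 /mnt/data      # mounting either member assembles the pair
btrfs filesystem show /mnt/data
EOF
```

The CentOS 7 installer can also create a btrfs volume with RAID1 data/metadata directly if both disks are selected during partitioning.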
[CentOS] Apache Compression
I have a CentOS 7 install with Apache running. How do I get Apache to use gzip compression on HTML and other text-based content?
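On a stock CentOS 7 httpd, mod_deflate is already loaded, so a small drop-in config is usually enough (a sketch; the file name is arbitrary and the MIME-type list can be adjusted):

```apache
# /etc/httpd/conf.d/deflate.conf
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```

After `systemctl restart httpd`, a response header check such as `curl -sI -H "Accept-Encoding: gzip" http://localhost/` should show `Content-Encoding: gzip` for matching content types.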
Re: [CentOS] Unable to map more than 4 workspaces to keyboard shortcuts on C7.
Hi,

If you go Applications >> Utilities >> Tweak Tool >> Workspaces, you can set the number of workspaces.

On 07/06/17 09:28 AM, m...@tdiehl.org wrote:
> Hi,
>
> I have a new install of C7 with the GNOME desktop. I run it with 12
> workspaces. Normally I create the shortcuts so that ctrl+f1 maps to
> workspace 1, ctrl+f2 maps to workspace 2, etc. When I go to
> applications -> settings -> keyboard -> shortcuts -> navigation, I only
> have the ability to define "Switch to workspace" 1 through 4. There does
> not appear to be a way to map keyboard shortcuts for "Switch to
> workspace" 5-12.
>
> Does anyone know how I can add the ability to create shortcuts for
> workspaces 5-12? In previous versions (C6 and older) the number of
> available keyboard shortcuts matched the number of workspaces
> configured. I am thinking that there is some file someplace that I
> should be able to modify to make this work, but so far I cannot find it.
>
> Regards,
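The extra bindings can also be set from a terminal with gsettings, since the `switch-to-workspace-1` through `switch-to-workspace-12` keys exist in the schema even when the control panel only exposes four. A sketch (the loop below only prints the commands so they can be reviewed; run the printed lines inside the GNOME session itself):

```shell
# Print gsettings commands mapping Ctrl+F1..Ctrl+F12 to workspaces 1-12.
echo "gsettings set org.gnome.desktop.wm.preferences num-workspaces 12"
for i in $(seq 1 12); do
  printf 'gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-%d "['\''<Control>F%d'\'']"\n' "$i" "$i"
done
```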
Re: [CentOS] systemd order help?
Check out this page:
https://scottlinux.com/2014/12/08/how-to-create-a-systemd-service-in-linux-centos-7/

On 12/06/17 11:47 AM, James Pearson wrote:
> I'm looking into 'porting' some custom init.d scripts that are in use on
> CentOS 6 boxes for use on CentOS 7 using systemd.
>
> One particular init.d script needs to run after autofs has been started,
> but before X11 is started. I'm guessing I could use something like:
>
>     After=autofs.service
>     Before=graphical.target
>
> Is this correct? However, I would like to use the same systemd unit file
> on servers that won't run X - will the above work? Or is there a
> better/another way of doing this?
>
> Thanks
>
> James Pearson
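A minimal unit along those lines might look like the following (a sketch; the unit and script names are placeholders). One relevant detail: systemd ignores ordering directives against units that are never activated, so `Before=graphical.target` is harmless on servers that never reach graphical.target, and the same unit file can be used everywhere:

```ini
# /etc/systemd/system/custom-job.service  (name is a placeholder)
[Unit]
Description=Custom startup job ported from init.d
After=autofs.service
Before=graphical.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/custom-job.sh

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable custom-job.service` after a `systemctl daemon-reload`.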
Re: [CentOS] Python 3.x on Centos 7
On a related note, I need SNMP support to do snmpget and snmpset to devices with Python 3. Is there an easy way to get that without breaking anything as well?

On Fri, Mar 24, 2017 at 1:04 PM, John R Pierce <pie...@hogranch.com> wrote:
> On 3/24/2017 6:52 AM, Matt wrote:
>>> # yum install python34
>>
>> I already have epel installed. If it breaks something, is it as simple
>> as yum erase python34 to restore everything back to normal?
>
> It would be pretty hard to break anything. It installs...
>
>     /usr/bin/python3
>     /usr/bin/python3.4
>
> and puts all the runtime libs in /usr/lib64/python3.4.
>
> The default C7 python is...
>
>     /usr/bin/python
>     /usr/bin/python2
>     /usr/bin/python2.7
>
> with libraries in /usr/lib64/python2.7, so there is no overlap at all.
> If you want Python 3.4, you have to invoke it explicitly.
>
> --
> john r pierce, recycling bits in santa cruz
[CentOS] Systemd interface rename does not work
Hi,

I'm moving to systemd for my network management, but I don't see my link name changed when I try to use a .link file. The .network file works correctly; NetworkManager has been removed as well to accomplish this. Any idea why the rename is not done?

/etc/systemd/network/0-eth.network

    [Match]
    MACAddress=00:1a:4a:a9:0a:17

    [Network]
    Address=172.16.3.141/24
    Gateway=172.16.3.254
    DNS=172.16.3.1
    DNS=172.16.3.2

/etc/systemd/network/0-eth.link

    [Match]
    MACAddress=00:1a:4a:a9:0a:17

    [Link]
    Name=eth0
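Two common causes, offered as guesses rather than a diagnosis of this exact setup: .link files are applied by udev when the device appears, so after editing one you typically need to rebuild the initramfs (`dracut -f`) and reboot for the rename to happen; and renaming into the kernel's own ethN namespace can collide with a name the kernel has already assigned at probe time, so a name outside that namespace is the safer choice:

```ini
# /etc/systemd/network/0-eth.link -- same [Match] as before, but a name
# outside the kernel's ethN namespace ("lan0" here is an arbitrary example)
[Match]
MACAddress=00:1a:4a:a9:0a:17

[Link]
Name=lan0
```

After the reboot, `udevadm test-builtin net_setup_link /sys/class/net/<iface>` shows which .link file udev actually selected for the device.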
Re: [CentOS] Python 3.x on Centos 7
>> # yum install python34

I already have epel installed. If it breaks something, is it as simple as yum erase python34 to restore everything back to normal?

On Thu, Mar 23, 2017 at 6:27 PM, Christian, Mark <mark.christ...@intel.com> wrote:
> On Thu, 2017-03-23 at 18:16 -0500, Matt wrote:
>> Is there a way to install Python 3.x on CentOS 7.x without breaking
>> anything that depends on an older version of Python? This server is a
>> minimal CentOS 7 install that primarily runs a simple LAMP setup.
>
> Yes.
>
> # yum install python34
[CentOS] Python 3.x on Centos 7
Is there a way to install Python 3.x on CentOS 7.x without breaking anything that depends on an older version of Python? This server is a minimal CentOS 7 install that primarily runs a simple LAMP setup.
Re: [CentOS] kerberized-nfs - any experts out there?
On Wed, Mar 22, 2017 at 6:11 PM, John Jasen <jja...@realityfailure.org> wrote:
> On 03/22/2017 03:26 PM, Matt Garman wrote:
>> Is anyone on the list using kerberized-nfs on any kind of scale?
>
> Not for a good many years.
>
> Are you using v3 or v4 NFS?

v4. I think you can only do kerberized NFS with v4.

> Also, you can probably stuff the rpc.gss* and idmapd services into
> verbose mode, which may give you a better idea as to what's going on.

I do that. The logs are verbose, but generally too cryptic for me to make sense of. Web searches on the errors yield results at best 50% of the time, and the hits almost never have a solution.

> And yes, the kernel does some kerberos caching. I think 10 to 15 minutes.

To me it looks like it's more on the order of an hour. For example, a simple test I've done is to do a "fresh" login on a server. The server has just been rebooted, and with the reboot, all the /tmp/krb5cc* files were deleted. I log in via ssh, which implicitly establishes my Kerberos tickets. I deliberately do a "kdestroy". Then I have a simple shell loop like this:

    while [ 1 ] ; do date ; ls ; sleep 30s ; done

which is just doing an ls on my home directory, which is a kerberized NFS mount. Despite having done a kdestroy, this works, presumably from cached credentials. And it continues to work for *about* an hour, and then I start getting permission denied. I emphasized "about" because it's not precisely one hour, but seems to range from maybe 55 to 65 minutes.

But that's a super-simple, controlled test. What happens when you add screen multiplexers (tmux, GNU screen) into the mix? What if you log in "fresh" via password versus having your GSS (Kerberos) credentials forwarded? What if you're logged in multiple times on the same machine via different methods?
Re: [CentOS] kerberized-nfs - any experts out there?
On Wed, Mar 22, 2017 at 3:19 PM, <m.r...@5-cent.us> wrote:
> Matt Garman wrote:
>> (2) Permission denied issues. I have user Kerberos tickets configured
>> for 70 days. But there is clearly some kind of undocumented kernel
>> caching going on. Looking at the Kerberos server logs, it looks like
>> it "could" be a performance issue, as I see 100s of ticket requests
>> within the same second when someone tries to launch a lot of jobs.
>> Many of these will fail with "permission denied" but if they
>> immediately re-try, it works. Related to this, I have been unable to
>> figure out what creates and deletes the /tmp/krb5cc_uid_random files.
>
> Are they asking for *new* credentials each time? They should only be
> doing one kinit.

Well, that's what I don't understand. In practice, I don't believe a user should ever have to explicitly do kinit, as their credentials/tickets are implicitly created (and forwarded) via ssh. Despite that, I see the /tmp/krb5cc_uid files accumulating over time. But I've tried testing this, and I haven't been able to determine exactly what creates those files. And I don't understand why new krb5cc_uid files are created when there is an existing, valid file already. Clearly some programs ignore existing files, and some create new ones.

> And there's nothing in the logs, correct? Have you tried attaching
> strace to one of those, and see if you can get a clue as to what's
> happening?

Actually, I get this in the log:

    Mar 22 13:25:09 daemon.err lnxdev108 rpc.gssd[19329]: WARNING: handle_gssd_upcall: failed to find uid in upcall string 'mech=krb5'

Thanks, Matt
[CentOS] kerberized-nfs - any experts out there?
Is anyone on the list using kerberized-nfs on any kind of scale? I've been fighting with this for years. In general, when we have issues with this system, they are random and/or not repeatable. I've had very little luck with community support. I hope I don't offend by saying that! Rather, my belief is that these problems are very niche/esoteric, and so beyond the scope of typical community support. But I'd be delighted to be proven wrong!

So this is more of a "meta" question: does anyone out there have any general recommendations for how to get support on what I presume are niche problems specific to our environment? How is paid upstream support?

Just to give a little insight into our issues: we have an in-house-developed compute job dispatching system. Say a user has 100s of analysis jobs he wants to run; he submits them to a central master process, which in turn dispatches them to a "farm" of >100 compute nodes. All these nodes have two different krb5p NFS mounts, to which the jobs read and write. So while the users can technically log in directly to the compute nodes, in practice they never do. The logins are only "implicit" when the job dispatching system does a behind-the-scenes ssh to kick off these processes.

Just to give some "flavor" to the kinds of issues we're facing, what tends to crop up is one of three things:

(1) Random crashes. These are full-on kernel trace dumps followed by an automatic reboot. This was really bad under CentOS 5. A random kernel upgrade magically fixed it. It happens almost never under CentOS 6, but happens fairly frequently under CentOS 7. (We're completely off CentOS 5 now, BTW.)

(2) Permission denied issues. I have user Kerberos tickets configured for 70 days. But there is clearly some kind of undocumented kernel caching going on. Looking at the Kerberos server logs, it looks like it "could" be a performance issue, as I see 100s of ticket requests within the same second when someone tries to launch a lot of jobs. Many of these will fail with "permission denied", but if they immediately re-try, it works. Related to this, I have been unable to figure out what creates and deletes the /tmp/krb5cc_uid_random files.

(3) Kerberized NFS shares getting "stuck" for one or more users. We have another monitoring app (in-house developed) that, among other things, makes periodic checks of these NFS mounts. It does so by forking and doing a simple "ls" command. This is to ensure that these mounts are alive and well. Sometimes, the "ls" command gets stuck to the point where it can't even be killed via "kill -9". Only a reboot fixes it. But the mount is only stuck for the user running the monitoring app. Or sometimes the monitoring app is fine, but an actual user's processes will get stuck in "D" state (in top, meaning waiting on IO), while everyone else's jobs (and access to the kerberized NFS shares) are OK.

This is actually blocking us from upgrading to CentOS 7. But my colleagues and I are at a loss how to solve this. So this post is really more of a semi-desperate plea for any kind of advice. What other resources might we consider? Paid support is not out of the question (within reason). Are there any "super specialist" consultants out there who deal in Kerberized NFS?

Thanks!
Matt
[CentOS] MRTG and eth0
Is there an easy way to graph Ethernet interface eth0 on CentOS 7 with MRTG without using SNMP? I thought I found a way to do this in the past by using a shell script to poll the interface, but I cannot find it again.
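One SNMP-free approach (a sketch; the script path is an assumption) is to point MRTG's Target at a script that prints the kernel's byte counters from /sys. MRTG's external-script protocol expects four lines: input counter, output counter, uptime, and target name:

```shell
#!/bin/sh
# /usr/local/bin/ifstats.sh (placeholder path) -- print MRTG's four
# expected lines for the interface given as $1.
# Defaults to "lo" here only so it runs anywhere; pass eth0 in mrtg.cfg.
IF=${1:-lo}
cat "/sys/class/net/$IF/statistics/rx_bytes"   # line 1: incoming counter
cat "/sys/class/net/$IF/statistics/tx_bytes"   # line 2: outgoing counter
cut -d. -f1 /proc/uptime                       # line 3: uptime
echo "$IF"                                     # line 4: target name
```

In mrtg.cfg, the target would then be something along the lines of `Target[eth0]: \`/usr/local/bin/ifstats.sh eth0\`` (backticks tell MRTG to run a script instead of querying SNMP).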
Re: [CentOS] Spotty internet connection
On Fri, Feb 3, 2017 at 12:08 PM, John R Pierce wrote:
> for Comcast/Xfinity, I'm using an Arris SB6183 that I got at Costco.
> This is a simple modem/bridge, so /my/ router behind it gets the
> public IP.

Note that some residential ISPs may not offer "naked" Internet, and/or won't allow you to bring your own device (BYOD).

At least in my area, there are only two options for residential Internet: cable-based via Comcast, and DSL-based via AT&T. I used to routinely switch back and forth between the two, to play them against each other for the best rates. However, I had to give up on AT&T because they stopped offering a "naked" service. That is, when I was using them, I had the most basic DSL modem, which literally did nothing except provide me with a public Internet IP and the service. Last I talked to them, I could only use their service with their fancy all-in-one devices, which are both a DSL modem and gateway/router/wireless AP. I already have all that infrastructure in my house, and I trust my ability to manage it more than I trust the blackbox firmware that AT&T provides.

Going from memory, that all-in-one DSL service did give me a public IP, but the device itself implemented NATing, so it looked like I was getting a private IP. There *may* have been a way to remove most of the functionality of the all-in-one device ("DMZ mode" or something like that); it's been discussed pretty heavily on the DSLReports forums. (But either way, even ignoring the technical grievances with their service, AT&T's prices are higher and speed tiers lower than Comcast's.)

TL;DR: (1) some ISPs may not allow BYOD; (2) if it looks like your ISP is giving you a private IP, dig a little deeper; it could simply appear that way due to the way the ISP configures the assigned device.
Re: [CentOS] Spotty internet connection
On Thu, Feb 2, 2017 at 7:13 PM, TE Dukes wrote:
> Lately I have been getting slow and partial page loads, server not
> found, server timed out, etc. I get knocked off ssh when accessing my
> home server from work, etc. It's not the work connection because I
> don't have problems accessing other sites, just here at home and my
> home server.
>
> Is there any kind of utility to check for failing hardware?

I have the exact same problems from time to time via Comcast. Mine comes and goes, and lately it hasn't been too bad. But when it comes, it's down for very small amounts of time, maybe 30-90 seconds, which is just long enough to be annoying and make the service unusable.

When it was really bad (intermittent dropouts as described above, almost every night during prime time, usually for several hours at a time), I wrote a program to do constant pings to several servers at once. If you're interested, I'll see if I can find that script. But, conceptually, it ran concurrent pings to several sites, and kept some stats on drops longer than some threshold.

Some tips on a program like this: use IP addresses rather than hostnames, because ultimately using a hostname implicitly does a DNS lookup, which likely requires Internet service to work. I also pinged several servers at once, so I could prove it wasn't just the one site I was pinging. Included in the list of servers was also the next-hop device beyond my house (presumably Comcast's own router). Use traceroute to figure out network paths.

After running this for a while---before I called them with the evidence---the problem magically cleared up, and since then it's been infrequent enough that I haven't felt the need to fire up the script again. When it comes to residential Internet, I am quite cynical towards monopoly ISPs like Comcast... so maybe they saw the constant pings and knew I was building a solid case and fixed the problem. Or maybe enough people in my area complained of similar problems and they actually felt uncharacteristically caring for a second.

I haven't been there in a while, but in the past, I've gotten a lot of utility out of the DSLReports forums[1]. There are private forums that will put you in direct contact with technical people at your ISP. It can sometimes be a good way to side-step the general customer service hotline and get in touch with an actual engineer rather than a script reader. Maybe not, but worst-case you're only out some time. Also, you might post this same question to one of the public forums over there, as there seem to be lots of knowledgeable/helpful people hanging out there. (Despite the name, it's not only about DSL, but consumer ISPs in general.)

[1] http://www.dslreports.com/forums/all

Good luck, let us know if you come up with any decent resolution!
[CentOS] Cron.Hourly
When I have multiple scripts in /etc/cron.hourly/, with no anacron in use, do they all start at the same time or run sequentially? I would rather they all started at the same time, in case one takes close to an hour to complete.
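For reference: on CentOS 7, /etc/cron.d/0hourly hands the directory to run-parts, which executes the scripts one after another, not in parallel. A common workaround (a sketch, with placeholder paths, not something prescribed by cron itself) is to make each cron.hourly script a stub that backgrounds the real work; the demo below shows that backgrounded jobs overlap:

```shell
# A /etc/cron.hourly/ stub would look like (placeholder paths):
#   #!/bin/sh
#   /usr/local/bin/real-job.sh > /var/log/real-job.log 2>&1 &
# Demo that backgrounded jobs run concurrently: two 1-second "jobs"
# finish in about one second total, not two.
start=$(date +%s%N)
sleep 1 &
sleep 1 &
wait                      # returns once both background jobs finish
end=$(date +%s%N)
echo "two 1-second jobs took $(( (end - start) / 1000000 )) ms"
```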
Re: [CentOS] Centos 7 Boot Partition
>> What size is recommended for the /boot partition? After doing a fresh
>> install and lengthy backup restore, I realized I only made it 200M. Is
>> this going to be a problem?
>
> Mine was about 500 MB and I removed some kernels because I got a
> warning the partition was getting full.
>
> With only two kernels installed, 182 MB are used. I would suggest 1 GB,
> and I believe that is what CentOS 7.3 will do by default.

Can I just change yum.conf with the setting installonly_limit=2 to limit the kernels installed to two?
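Yes: installonly_limit in /etc/yum.conf caps how many kernels yum keeps. A sketch of the edit, demonstrated on a scratch copy rather than the real file; trimming kernels that are already installed additionally needs package-cleanup from yum-utils:

```shell
# Demonstrate the one-line edit on a scratch copy of the relevant line.
conf=$(mktemp)
printf 'installonly_limit=5\n' > "$conf"
sed -i 's/^installonly_limit=.*/installonly_limit=2/' "$conf"
cat "$conf"
# On the real system, as root:
#   sed -i 's/^installonly_limit=.*/installonly_limit=2/' /etc/yum.conf
#   package-cleanup --oldkernels --count=2    # from yum-utils
```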
[CentOS] Centos 7 Boot Partition
What size is recommended for the /boot partition? After doing a fresh install and lengthy backup restore, I realized I only made it 200M. Is this going to be a problem?
[CentOS] Centos 7 Minimal and Quotas
I have installed CentOS 7 Minimal in a 1TB KVM, using the XFS file system. I did not use LVM. I need to install DirectAdmin, which requires quotas. I have this in fstab:

    UUID=b482396d-d2fc-49ed-b9df-c49e9387405b /     xfs  defaults 0 0
    UUID=e24a16e0-57ab-42b2-af0b-9edf789376e5 /boot xfs  defaults 0 0
    UUID=b08b8243-6b46-444c-a7bd-02934246d884 swap  swap defaults 0 0

I get this:

    # repquota /
    repquota: Mountpoint (or device) / not found or has no quota enabled.
    repquota: Not all specified mountpoints are using quota.

I tried changing / from "defaults" to "defaults,usrquota,grpquota" in fstab and rebooting, but I still get the same result. What do I need to do here?
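One likely explanation (a hedged suggestion, not a confirmed diagnosis of this box): XFS quotas are mount-time options, and the root filesystem is mounted by the initramfs before fstab is consulted, so usrquota/grpquota in fstab never take effect for /. The usual fix on CentOS 7 is to pass the flags on the kernel command line instead. The commands below are printed rather than executed; run them as root on the affected guest:

```shell
# Printed for review -- these modify the bootloader config and reboot.
cat <<'EOF'
grubby --update-kernel=ALL --args="rootflags=uquota,gquota"
reboot
# After the reboot, verify the root mount picked up the quota options:
grep ' / ' /proc/mounts
xfs_quota -x -c 'report -h' /
EOF
```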
Re: [CentOS] NFS help
On Thu, Oct 27, 2016 at 12:03 AM, Larry Martell wrote:
> This site is locked down like no other I have ever seen. You cannot
> bring anything into the site - no computers, no media, no phone. You
> ...
> This is my client's client, and even if I could circumvent their
> policy I would not do that. They have a zero tolerance policy and if
> ...

OK, no internet for real. :) Sorry I kept pushing this. I made an unflattering assumption that maybe it just hadn't occurred to you how to get files in or out. Sometimes there are "soft" barriers to bringing files in or out: they don't want it to be trivial, but want it to be doable if necessary. But then there are times when they really mean it. I thought maybe the former applied to you, but clearly it's the latter. Apologies.

> These are all good debugging techniques, and I have tried some of
> them, but I think the issue is load related. There are 50 external
> machines ftp-ing to the C7 server, 24/7, thousands of files a day. And
> on the C6 client the script that processes them is running
> continuously. It will sometimes run for 7 hours then hang, but it has
> run for as long as 3 days before hanging. I have never been able to
> reproduce the errors/hanging situation manually.

If it truly is load related, I'd think you'd see something askew in the sar logs. But if the load tends to spike, rather than be continuous, the sar sampling rate may be too coarse to pick it up.

> And again, this is only at this site. We have the same software
> deployed at 10 different sites all doing the same thing, and it all
> works fine at all of those.

Flaky hardware can also cause weird intermittent issues. I know you mentioned before that your hardware is fairly new/decent spec, but that doesn't make it immune to manufacturing defects. For example, imagine one voltage regulator that's ever-so-slightly out of spec. It happens. Bad memory is not uncommon and certainly causes all kinds of mysterious issues (though in my experience that tends to result in spontaneous reboots or hard lockups, but truly anything could happen). Ideally, you could take the system offline and run hardware diagnostics, but I suspect that's impossible given your restrictions on taking things in/out of the datacenter.

On Thu, Oct 27, 2016 at 3:05 AM, Larry Martell wrote:
> Well I spoke too soon. The importer (the one that was initially
> hanging that I came here to fix) hung up after running 20 hours. There
> were no NFS errors or messages on either the client or the server.
> When I restarted it, it hung after 1 minute. Restarted it again and it
> hung after 20 seconds. After that, when I restarted it, it hung
> immediately. Still no NFS errors or messages. I tried running the
> process on the server and it worked fine. So I have to believe this is
> related to nobarrier. Tomorrow I will try removing that setting, but I
> am no closer to solving this and I have to leave Japan Saturday :-(
>
> The bad disk still has not been replaced - that is supposed to happen
> tomorrow, but I won't have enough time after that to draw any
> conclusions.

I've seen behavior like that with disks that are on their way out... basically the system wants to read a block of data, and the disk doesn't read it successfully, so it keeps trying. The kind of disk, what kind of controller it's behind, RAID level, and various other settings can all impact this phenomenon, and also how much detail you can see about it. You already know you have one bad disk, so that's kind of an open wound that may or may not be contributing to your bigger, unsolved problem.

So that makes me think: you can also do some basic disk benchmarking. iozone and bonnie++ are nice, but I'm guessing they're not installed and you don't have a means to install them. But you can use "dd" to do some basic benchmarking, and that's all but guaranteed to be installed. Similar to network benchmarking, you can do something like:

    time dd if=/dev/zero of=/tmp/testfile.dat bs=1G count=256

That will generate a 256 GB file. Adjust "bs" and "count" to whatever makes sense. The general rule of thumb is that you want the target file to be at least 2x the amount of RAM in the system, to avoid cache effects skewing your results. Bigger is even better if you have the space, as it increases the odds of hitting the "bad" part of the disk (if indeed that's the source of your problem). Do that on C6, C7, and, if you can, a similar machine as a "control" box. Again, we're looking for outliers, hang-ups, timeouts, etc.

+1 to Gordon's suggestion to sanity-check MTU sizes.

Another random possibility... By somewhat funny coincidence, we have some servers in Japan as well, and were recently banging our heads against the wall with some weird networking issues. The remote hands we had helping us (none of our staff was on site) claimed one or more fiber cables were dusty, enough that it was affecting
Re: [CentOS] NFS help
On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell wrote:
> Again, no machine on the internal network that my 2 CentOS hosts are
> on is connected to the internet. I have no way to download anything.
> There is an onerous and protracted process to get files into the
> internal network, and I will see if I can get netperf in.

Right, but do you have physical access to those machines? Do you have physical access to the machine on which you use PuTTY to connect to those machines? If yes to either question, then you can use another system (that does have Internet access) to download the files you want, put them on a USB drive (or burn to a CD, etc.), and bring the USB/CD to the C6/C7/PuTTY machines. There's almost always a technical way to get files on to (or out of) a system. :) Now, your company might have *policies* that forbid skirting around the technical measures that are in place.

Here's another way you might be able to test network connectivity between C6 and C7 without installing new tools: see if both machines have "nc" (netcat) installed. I've seen this tool referred to as "the swiss army knife of network testing tools", and that is indeed an apt description. So if you have that installed, you can hit up the web for various examples of its use. It's designed to be easily scripted, so you can write your own tests, and in theory implement something similar to netperf.

OK, I just thought of another "poor man's" way to do at least some sanity testing between C6 and C7: scp. First generate a huge file. The general rule of thumb is at least 2x the amount of RAM in the C7 host. You could create a tarball of /usr, for example (e.g. "tar czvf /tmp/bigfile.tar.gz /usr", assuming your /tmp partition is big enough to hold this). Then, first do this:

    time scp /tmp/bigfile.tar.gz localhost:/tmp/bigfile_copy.tar.gz

This will literally make a copy of that big file, but will route through most of the network stack. Make a note of how long it took. And also be sure your /tmp partition is big enough for two copies of that big file.

Now, repeat that, but instead of copying to localhost, copy to the C6 box. Something like:

    time scp /tmp/bigfile.tar.gz :/tmp/

Does the time reported differ greatly from when you copied to localhost? I would expect them to be reasonably close. (And this is another reason why you want a fairly large file, so the transfer time is dominated by the actual file transfer rather than the overhead.)

Lastly, do the reverse test: log in to the C6 box, and copy the file back to C7, e.g.:

    time scp /tmp/bigfile.tar.gz :/tmp/bigfile_copy2.tar.gz

Again, the time should be approximately the same for all three transfers. If either or both of the latter two copies take dramatically longer than the first, then there's a good chance something is askew with the network config between C6 and C7.

Oh... all this time I've been jumping to fancy tests. Have you tried the simplest form of testing, that is, doing by hand what your scripts do automatically? In other words, simply try copying files between C6 and C7 using the existing NFS config. Can you manually trigger the errors/timeouts you initially posted? Is it when copying lots of small files? Or when you copy a single huge file? Any kind of file-copying "profile" you can determine that consistently triggers the error? That could be another clue.

Good luck!
Re: [CentOS] NFS help
On Mon, Oct 24, 2016 at 6:09 PM, Larry Martell wrote:
> The machines are on a local network. I access them with putty from a
> windows machine, but I have to be at the site to do that.

So that means when you are offsite there is no way to access either machine? Does anyone have a means to access these machines from offsite?

> Yes, the C6 instance is running on the C7 machine. What could be
> mis-configured? What would I check to find out?

OK, so these two machines are actually the same physical hardware, correct? Do you know whether the networking between the two machines is "soft", as in done locally on the machine (typically through NAT or bridging)? Or is it "hard", in that you have a dedicated NIC for the host and a separate dedicated NIC for the guest, with actual cables going out of each interface and connected to a switch/hub/router? I would expect the former...

If it truly is a "soft" network between the machines, then that is more evidence of a configuration error. Now, unfortunately, as for what to look for: I have virtually no experience setting up C6 guests on a C7 host; at least not enough to help you troubleshoot the issue. But in general, you should be able to hit up a web search and look for howtos and other documents on setting up networking between a C7 host and its guests. That will allow you to (1) understand how it's currently set up, (2) verify whether there is any misconfig, and (3) correct or change it if needed.

> Yes, that is a potential solution I had not thought of. The issue with
> this is that we have the same system installed at many, many sites,
> and they all work fine. It is only this site that is having an issue.
> We really do not want to have different SW running at just this one
> site. Running the script on the C7 host is a change, but at least it
> will be the same software as every place else.

IIRC, you said this is the only C7 instance? That would mean it is already not the same as every other site. It may be conceptually the same, but "under the hood" there are a tremendous number of changes between C6 and C7. Effectively every single package is different, from the kernel all the way to trivial userspace tools.

> netperf is not installed.

Again, if you can use PuTTY (which is ssh) to access these systems, you implicitly have the ability to upload files (i.e. packages) to the systems. A simple tool like netperf should have few (if any) dependencies, so you don't have to mess with mirroring the whole centos repo. Just grab the netperf rpm file from wherever, then use scp (I believe it's called pscp when part of the PuTTY package) to copy it to your servers, yum install, and start testing.
Re: [CentOS] NFS help
Another alternative idea: you probably won't be comfortable with this, but check out systemd-nspawn. There are lots of examples online, and I even wrote about how I use it: http://raw-sewage.net/articles/fedora-under-centos/ This is unfortunately another "sysadmin" solution to your problem. nspawn is the successor to chroot, if you are at all familiar with that. It's kinda-sorta like running a system-within-a-system, but much more lightweight. The "slave" systems share the running kernel with the "master" system. (I could say the "guest" and "host" systems, but those are virtual machine terms, and this is not a virtual machine.) For your particular case, the main benefit is that you can natively share filesystems, rather than use NFS to share files. So, it's clear you have network capability between the C6 and C7 systems. And surely you must have ssh installed on both systems. Therefore, you can transfer files between C6 and C7. So here's a way you can use systemd-nspawn to get around trying to install all the extra libs you need on C7: 1. On the C7 machine, create a systemd-nspawn container. This container will "run" C6. 2. You can source everything you need from the running C6 system directly. Heck, if you have enough disk space on the C7 system, you could just replicate the whole C6 tree to a sub-directory on C7. 3. When you configure the C6 nspawn container, make sure you pass through the directory structure with these FTP'ed files. Basically you are substituting systemd-nspawn's bind/filesystem pass-through mechanism in place of NFS. With that setup, you can "probably" run all the C6 native stuff under C7. This isn't guaranteed to work, e.g. if your C6 programs require hooks into the kernel, it could fail, because now you're running on a different kernel... but if you only use userspace libraries, you'll probably be OK. For what it's worth, I was actually able to get HandBrake, compiled for bleeding-edge Ubuntu, to work within a C7 nspawn container. 
That probably trades one bit of complexity (NFS) for another (systemd-nspawn). But just throwing it out there if you're completely stuck. 
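A hedged sketch of the three steps above. The paths and the "c6-host" name are assumptions, and the commands are printed for review rather than run, because they need root and a live C6 system:

```shell
# Placeholder locations for the container tree and the FTP'ed files.
C6_ROOT=/var/lib/machines/c6
FTP_DIR=/data/ftp-incoming
# Steps 1+2: replicate the running C6 tree onto the C7 host,
# keeping pseudo-filesystems out of the copy.
echo "rsync -aAX --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run c6-host:/ $C6_ROOT/"
# Step 3: boot the tree as a container, bind-mounting the FTP'ed
# files in place of the NFS share.
echo "systemd-nspawn -D $C6_ROOT --bind=$FTP_DIR:/mnt/ftp-incoming"
```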
Re: [CentOS] NFS help
On Mon, Oct 24, 2016 at 2:42 PM, Larry Martell <larry.mart...@gmail.com> wrote: >> At any rate, what I was looking at was seeing if there was any way to >> simplify this process, and cut NFS out of the picture. If you need >> only to push these files around, what about rsync? > > It's not just moving files around. The files are read, and their > contents are loaded into a MySQL database. On what server does the MySQL database live? > This site is not in any way connected to the internet, and you cannot > bring in any computers, phones, or media of any kind. There is a > process to get machines or files in, but it is onerous and time > consuming. This system was set up and configured off site and then > brought on site. But clearly you have a means to log in to both the C6 and C7 servers, right? Otherwise, how would you be able to see these errors, or check top/sar/free/iostat/etc? And if you are logging in to both of these boxes, I assume you are doing so via ssh? Or are you actually physically sitting in front of these machines? If you have ssh access to these machines, then you can trivially copy files to/from them. If ssh is installed and working, then scp should also be installed and working. Even if you don't have scp, you can use tar over ssh to the same effect. It's ugly, but doable, and there are examples online for how to do it. Also: you made a couple of comments about these machines; it looks like the C7 box (FTP server + NFS server) is running bare metal (i.e. not a virtual machine). The C6 instance (NFS client) is virtualized. What is hosting the C6 instance? Is the C6 instance running under the C7 instance? I.e., are both machines on the same physical hardware? If that is true, then your "network" (at least the one between C7 and C6) is basically virtual, and having issues like this on the same physical box is certainly indicative of a mis-configuration. 
> To run the script on the C7 NFS server instead of the C6 NFS client > many python libs will have to be installed. I do have someone off site > working on setting up a local yum repo with what I need, and then we > are going to see if we can zip and email the repo and get it on site. > But none of us are sys admins and we don't really know what we're > doing so we may not succeed and it may take longer than I will be here > in Japan (I am scheduled to leave Saturday). Right, but my point is you can write your own custom script(s) to copy files from C7 to C6 (based on rsync or ssh), do the processing on C6 (DB loading, whatever other processing), then move back to C7 if necessary. You said yourself you are a programmer, not a sysadmin, so change the nature of the problem from a sysadmin problem to a programming problem. I'm certain I'm missing something, but the fundamental architecture doesn't make sense to me given what I understand of the process flow. Were you able to run some basic network testing tools between the C6 and C7 machines? I'm interested specifically in netperf, which does round-trip packet testing, both TCP and UDP. I would look for packet drops with UDP, and/or major performance outliers with TCP, and/or any kind of timeouts with either protocol. How is name resolution working on both machines? Do you address machines by hostname (e.g., "my_c6_server"), or explicitly by IP address? Are you using DNS or are the IPs hard-coded in /etc/hosts? To me it still "smells" like a networking issue... -Matt 
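The "tar over ssh" trick mentioned in this thread looks like the commented command below (hostname and paths are placeholders); the tar-to-tar pipeline itself is demonstrated locally between two temp directories, so the demo runs anywhere:

```shell
# Remote form, when scp is unavailable (host and paths are placeholders):
#   tar -C /src/dir -cf - . | ssh c7-server 'tar -C /dest/dir -xf -'
# Local demonstration of the same pipeline:
src=$(mktemp -d); dst=$(mktemp -d)
echo "sample data" > "$src/file.txt"
# Pack the source dir to stdout, unpack from stdin into the dest dir.
tar -C "$src" -cf - . | tar -C "$dst" -xf -
cat "$dst/file.txt"
```

Inserting the `ssh host` hop between the two tars is the only difference in the remote case.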
Re: [CentOS] NFS help
On Sun, Oct 23, 2016 at 8:02 AM, Larry Martell wrote: >> To be clear: the python script is moving files on the same NFS file >> system? E.g., something like >> >> mv /mnt/nfs-server/dir1/file /mnt/nfs-server/dir2/file >> >> where /mnt/nfs-server is the mount point of the NFS server on the >> client machine? > > Correct. > >> Or are you moving files from the CentOS 7 NFS server to the CentOS 6 NFS >> client? > > No, the files are FTP-ed to the CentOS 7 NFS server and then processed > and moved on the CentOS 6 NFS client. I apologize if I'm being dense here, but I'm more confused about this data flow now. Your use of "correct" and "no" seems to be inconsistent with your explanation. Sorry! At any rate, what I was looking at was seeing if there was any way to simplify this process, and cut NFS out of the picture. If you need only to push these files around, what about rsync? > The problem doing that is the files are processed and loaded to MySQL > and then moved by a script that uses the Django ORM, and neither > django, nor any of the other python packages needed are installed on > the server. And since the server does not have an external internet > connection (as I mentioned in my reply to Mark) getting it set up > would require a large amount of effort. ...right, but I'm pretty sure rsync should be installed on the server; I believe it's installed by default in all except the "minimal" setup profiles. Either way, it's trivial to install, as I don't think it has any dependencies. You can download the rsync rpm from mirror.centos.org, then scp it to the server, then install via yum. And Python is definitely installed (requirement for yum) and Perl is probably installed as well, so with rsync plus some basic Perl/Python scripting you can create your own mover script. Actually, rsync may not even be necessary; scp may be sufficient for your purposes. And scp should definitely be installed. 
> Also, we have this exact same setup on over 10 other systems, and it > is only this one that is having a problem. The one difference with > this one is that the server is CentOS7 - on all the other systems both > the NFS server and client are CentOS6. From what you've described so far, with what appears to be a relatively simple config, C6 or C7 "shouldn't" matter. However, under the hood, C6 and C7 are quite different. > The python script checks the modification time of the file, and only > if it has not been modified in more than 2 minutes does it process it. > Otherwise it skips it and waits for the next run to potentially > process it. Also, the script can tell if the file is incomplete in a > few different ways. So if it has not been modified in more than 2 > minutes, the script starts to process it, but if it finds that it's > incomplete it aborts the processing and leaves it for next time. Does this script run on C7 or C6? > The hardware is new, and is in a rack in a server room with adequate > and monitored cooling and power. But I just found out from someone on > site that there is a disk failure, which happened back on Sept 3. The > system uses RAID, but I don't know what level. I was told it can > tolerate 3 disk failures and still keep working, but personally, I > think all bets are off until the disk has been replaced. That should > happen in the next day or 2, so we shall see. OK, depending on the RAID scheme and how it's implemented, there could be disk timeouts causing things to hang. > I've been watching and monitoring the machines for 2 days and neither > one has had a large CPU load, nor has been using much memory. How about iostat? Also, good old "dmesg" can suggest if the system with the failed drive is causing timeouts to occur. > None on the client. On the server it has 1 dropped Rx packet. > >> Do >>> "ethtool <interface>" on both machines to make sure both are linked up >>> at the correct speed and duplex. 
> > That reports only "Link detected: yes" for both client and server. OK, but ethtool should also say something like: ... Speed: 1000Mb/s Duplex: Full ... for a 1 Gbps network. If Duplex is reported as "Half", then that is definitely a problem. Using netperf is further confirmation of whether or not your network is functioning as expected. > sar seems to be running, but I can only get it to report on the > current day. The man page shows start and end time options, but is > there a way to specify the start and end date? If you want to report on a day in the past, you have to pass the file argument, something like this: sar -A -f /var/log/sa/sa23 -s 07:00:00 -e 08:00:00 That would show you yesterday's data between 7am and 8am. The files in /var/log/sa/saXX correspond to the day; by default, XX is the day of the month. 
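To spell out the saXX naming: the day-of-month suffix can be composed with GNU date, e.g. for yesterday (the sar command is printed rather than run, since the data file only exists on a box with sysstat collecting):

```shell
# /var/log/sa/saDD holds that day-of-month's data; build yesterday's
# file name and the full sar invocation.
f="/var/log/sa/sa$(date -d yesterday +%d)"
echo "sar -A -f $f -s 07:00:00 -e 08:00:00"
```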
Re: [CentOS] NFS help
On Fri, Oct 21, 2016 at 4:14 AM, Larry Martell wrote: > We have 1 system running CentOS 7 that is the NFS server. There are 50 > external machines that FTP files to this server fairly continuously. > > We have another system running CentOS 6 that mounts the partition the files > are FTP-ed to using NFS. > > There is a python script running on the NFS client machine that is reading > these files and moving them to a new dir on the same file system (a mv, not > a cp). To be clear: the python script is moving files on the same NFS file system? E.g., something like mv /mnt/nfs-server/dir1/file /mnt/nfs-server/dir2/file where /mnt/nfs-server is the mount point of the NFS server on the client machine? Or are you moving files from the CentOS 7 NFS server to the CentOS 6 NFS client? If the former, i.e., you are moving files to and from the same system, is it possible to completely eliminate the C6 client system, and just set up a local script on the C7 server that does the file moves? That would cut out a lot of complexity, and also improve performance dramatically. Also, what is the size range of these files? Are they fairly small (e.g. 10s of MB or less), medium-ish (100s of MB) or large (>1GB)? > Almost daily this script hangs while reading a file - sometimes it never > comes back and cannot be killed, even with -9. Other times it hangs for 1/2 > hour then proceeds on. Timeouts relating to NFS are the worst. > Coinciding with the hanging I see this message on the NFS server host: > > nfsd: peername failed (error 107) > > And on the NFS client host I see this: > > nfs: V4 server returned a bad sequence-id > nfs state manager - check lease failed on NFSv4 server with error 5 I've been wrangling with NFS for years, but unfortunately those particular messages don't ring a bell. The first thing that came to my mind is: how does the Python script running on the C6 client know that the FTP upload to the C7 server is complete? 
In other words, if someone is uploading "fileA", and the Python script starts to move "fileA" before the upload is complete, then at best you're setting yourself up for all kinds of confusion, and at worst file truncation and/or corruption. Making a pure guess about those particular errors: is there any chance there is a network issue between the C7 server and the C6 client? What is the connection between those two servers? Are they physically adjacent to each other and on the same subnet? Or are they on opposite ends of the globe connected through the Internet? Clearly two machines on the same subnet, separated only by one switch is the simplest case (i.e. the kind of simple LAN one might have in his home). But once you start crossing subnets, then routing configs come into play. And maybe you're using hostnames rather than IP addresses directly, so then name resolution comes into play (DNS or /etc/hosts). And each switch hop you add requires that not only your server network config needs to be correct, but also your switch config needs to be correct as well. And if you're going over the Internet, well... I'd probably try really hard to not use NFS in that case! :) Do you know if your NFS mount is using TCP or UDP? On the client you can do something like this: grep nfs /proc/mounts | less -S And then look at what the "proto=XXX" says. I expect it will be either "tcp" or "udp". If it's UDP, modify your /etc/fstab so that the options for that mountpoint include "proto=tcp". I *think* the default is now TCP, so this may be a non-starter. But the point is, based purely on the conjecture that you might have an unreliable network, TCP would be a better fit. I hate to simply say "RTFM", but NFS is complex, and I still go back and re-read the NFS man page ("man nfs"). This document is long and very dense, but it's worth at least being familiar with its content. > The first client message is always at the same time as the hanging starts. 
> The second client message comes 20 minutes later. > The server message comes 4 minutes after that. > Then 3 minutes later the script un-hangs (if it's going to). In my experience, delays that happen on consistent time intervals that are on the order of minutes tend to smell of some kind of timeout scenario. So the question is, what triggers the timeout state? > Can anyone shed any light on to what could be happening here and/or what I > could do to alleviate these issues and stop the script from hanging? > Perhaps some NFS config settings? We do not have any, so we are using the > defaults. My general rule of thumb is "defaults are generally good enough; make changes only if you understand their implications and you know you need them (or temporarily as a diagnostic tool)". But anyway, my hunch is that there might be a network issue. So I'd actually start with basic network troubleshooting. Do an "ifconfig" on both machines: do you see any drops or interface errors? Do "ethtool <interface>" on both
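On the "how does the script know the upload is complete" question raised earlier in this thread: one common guard (possibly what the poster's Python script does, though its code isn't shown) is to only touch files whose mtime has settled. In shell form with find:

```shell
# Only pick up files not modified for more than 2 minutes, i.e.
# uploads that have settled. Directory and file names are examples.
dir=$(mktemp -d)
touch -d '10 minutes ago' "$dir/settled.dat"   # finished upload
touch "$dir/in-flight.dat"                     # still being written
find "$dir" -type f -mmin +2                   # lists only settled.dat
```

This heuristic is only as good as the FTP client's behavior; an upload that stalls for two minutes mid-transfer would still be picked up early.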
[CentOS] Kerberized NFS client and slow user write performance
We seem to be increasingly hit by this bug: https://access.redhat.com/solutions/2040223 "On RHEL 6 NFS client using kerberos (krb5), one user experiences slow write performance, another does not" You need a RH subscription to see that in its entirety. But the subject basically says it all: randomly, one or more users will be subjected to *terrible* NFS write performance that persists until reboot. There is a root cause shown, but that is cryptic to non-kernel devs; it doesn't explain from a user perspective what triggers this state. (That's why it appears to be random to me.) There is no solution or workaround given. This appears to be on a per-user + per-server basis, so a crude workaround is to migrate the user to a different server. And we do regular reboots, which somewhat hides the problem. My question to the list: has anyone else dealt with this? The link says "Solution in Progress", but that was last updated nearly a year ago. We don't have any support contracts with upstream, just the website access subscription, so I doubt RH will offer any help. Appreciate any suggestions! Thanks, Matt 
[CentOS] Centos 7 and Software Raid Minimal Install
>> I am trying to install Centos 7 on a couple 4TB drives with software >> raid. In the Supermicro bios I set UEFI/BIOS boot mode to legacy. I >> am using the Centos 7 minimal install ISO flashed to a USB thumb >> drive. >> >> So I do custom drive layout something like this using sda and sdb. >> >> Create /boot as 512 MB XFS raid1 array. >> >> Create SWAP as 32 GB SWAP raid1 array. >> >> Create / on 3.xxx TB XFS raid1 array. >> >> I then get this error. >> >> "Your BIOS-based system needs a special partition to boot from a GPT >> disk label. To continue, please create a 1MiB 'biosboot' type >> partition." >> >> I cannot seem to create the biosboot partition so it is mirrored to >> both drives for redundancy. Any ideas how to do this? I want >> everything mirrored to both drives. > > Yeah, you need to create the 1M partition, preferably the first, and its > type, like ext4 or xfs or swap, is, in the dropdown, biosboot. Is there a way to get this partition mirrored on both drives? 
[CentOS] Centos 7 and Software Raid Minimal Install
I am trying to install Centos 7 on a couple 4TB drives with software raid. In the Supermicro bios I set UEFI/BIOS boot mode to legacy. I am using the Centos 7 minimal install ISO flashed to a USB thumb drive. So I do custom drive layout something like this using sda and sdb. Create /boot as 512 MB XFS raid1 array. Create SWAP as 32 GB SWAP raid1 array. Create / on 3.xxx TB XFS raid1 array. I then get this error. "Your BIOS-based system needs a special partition to boot from a GPT disk label. To continue, please create a 1MiB 'biosboot' type partition." I cannot seem to create the biosboot partition so it is mirrored to both drives for redundancy. Any ideas how to do this? I want everything mirrored to both drives. 
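For reference, the biosboot partition can't be mirrored by md; the usual workaround is one biosboot partition per GPT disk plus a grub install on each drive, so that either disk can boot. A hedged sketch with assumed device names and partition numbering (commands printed for review, not run):

```shell
# One 1 MiB bios_grub partition per disk, then grub on both disks.
# Partition number 1 is an assumption about the final layout.
for disk in /dev/sda /dev/sdb; do
  echo "parted -s $disk mkpart biosboot 1MiB 2MiB"
  echo "parted -s $disk set 1 bios_grub on"
  echo "grub2-install $disk"
done
```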
Re: [CentOS] Suggestions for Config Management Tool
As others have said, in the end, it's a matter of personal preference (e.g. vim or emacs). You could spend a week reading articles and forum discussions comparing all the different tools; but until you've really used them, it will mostly be an academic exercise. Of course, the particulars of your environment might naturally lend themselves to one tool or the other, so it's certainly worth spending some time getting an overview of the "idiom" of each tool. That said, we are working on moving away from dozens of little homegrown management scripts to Ansible. It just feels "right" to me, like how I would have designed such a system. I like that it's built on top of ssh. Any sysadmin should be fairly intimate with ssh, so why not build your CMS on top of a familiar tool? (But, of course, Ansible is flexible enough that you don't have to use ssh.) I might even go so far as to call it a "platform" rather than a tool. Out of the box, you can quickly get going having it do useful work by reading the docs/tutorials on the website. And just going through those exercises, you'll start to see that there's a ton of flexibility available, which is yours to exercise or not. And that perhaps is one of the drawbacks. We're actually somewhat in "analysis paralysis" mode with Ansible right now. Because there is so much flexibility, we are constantly second-guessing ourselves about the best way to implement our fairly complex and diverse environments. In particular, how to group configuration "profiles". E.g., this server needs to be a DNS master, this server needs to be a DNS slave, this server needs MySQL + DNS slave, this server needs these packages installed, this server needs those packages but not these, etc etc. But I always prefer a tool with too much flexibility over something that forces you into a specific way of doing things: that makes it our problem, not the tool's. The only other one I have any experience with is CFEngine. 
I tried---and I mean really tried---to get something going with CFEngine3. I just couldn't get my head around it. The wacky DSL it uses for expressing configs just wasn't intuitive to me; the whole bootstrapping process seemed to be overly complex; I found the documentation managed to be lengthy yet still lack real substance. By contrast: everything I've wanted to do in Ansible I was able to do quickly (and usually in several ways); on the client side, the only thing needed for an Ansible bootstrap is ssh; and the docs for Ansible have met or exceeded all expectations. My colleague and I were even able to quickly hack on some of the Ansible Python code to add some functionality we wanted. At least the pieces we looked at appeared to be quite straightforward. I have 15 years of C/C++ programming experience and wouldn't even consider messing with the CFEngine code. Maybe it's fine, but the complexity of the rest of the system is enough to scare me away from looking at the source. To be fair, it was *many* years ago that I looked at CFE3; maybe many of my issues have since been addressed. But, at this point, Ansible checks all my boxes, so that's where we're staying. Again, that's just my taste/experience. If you have the time, I'd spin up some VMs and play with the different tools. Try to implement some of your key items, see how hard/easy they are. On Thu, May 12, 2016 at 8:27 AM, Fabian Arrotin wrote: > On 12/05/16 10:21, James Hogarth wrote: >> On 12 May 2016 at 08:22, Götz Reinicke - IT Koordinator < >> goetz.reini...@filmakademie.de> wrote: >> >>> Hi, >>> >>> we see a growing need for a better Configuration management for our >>> servers. >>> >>> Are there any known good resources for a comparison of e.g. Puppet, >>> Chef, Ansible etc? >>> >>> What would you suggest and why? 
:) >>> >>> >>> >> >> Puppet is great for central control with automatic runs making systems >> right and keeping them in line, it's not an orchestration tool though - >> however it's commonly supplemented with something like rundeck and/or >> mcollective to assist here. >> >> Chef is great for a ruby house - you'll need to brush up on your ruby as >> writing cookbooks is heavily tied to the language. Historically it was very >> debian focused with issues like selinux problems. I believe these have been >> generally resolved though. >> >> Ansible is a great orchestration tool and excellent for going from base to >> a configured system. It is less of a tool to keep things inline with a base >> however with no central automated runs (ignoring Tower which is not FOSS >> yet). >> >> Ansible is also much simpler to get into given the tasks are just like >> following through a script for defining how to make a system, as opposed to >> learning an actual DSL like required for understanding puppet modules. >> >> There's a growing pattern of using ansible for orchestration alongside >> puppet for definitions as well (there's a specific ansible module to carry >> out a puppet
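To make the "built on ssh" point above concrete, here is about the smallest possible Ansible workflow: an inventory file plus two ad-hoc commands. Hostnames are placeholders, and the ansible commands are printed rather than run, since they need reachable hosts:

```shell
# Minimal inventory: one group, two (placeholder) hosts.
inv=$(mktemp)
printf '[web]\nweb1.example.com\nweb2.example.com\n' > "$inv"
# Check reachability of every host in the [web] group over ssh:
echo "ansible -i $inv web -m ping"
# Idempotently ensure a package is present, escalating with sudo:
echo "ansible -i $inv web -m yum -a 'name=httpd state=present' --become"
```

No agent install, no bootstrap step on the clients: working ssh keys are the whole prerequisite, which is much of the appeal described above.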
[CentOS] tune2fs: Filesystem has unsupported feature(s) while trying to open
I have an ext4 filesystem for which I'm trying to use "tune2fs -l". Here is the listing of the filesystem from the "mount" command: # mount | grep share /dev/mapper/VolGroup_Share-LogVol_Share on /share type ext4 (rw,noatime,nodiratime,usrjquota=aquota.user,jqfmt=vfsv0,data=writeback,nobh,barrier=0) When I try to run "tune2fs" on it, I get the following error: # tune2fs -l /dev/mapper/VolGroup_Share-LogVol_Share tune2fs 1.41.12 (17-May-2010) tune2fs: Filesystem has unsupported feature(s) while trying to open /dev/mapper/VolGroup_Share-LogVol_Share Couldn't find valid filesystem superblock. This filesystem was created on this system (i.e. not imported from another system). I have other ext4 filesystems on this server, and they all work with "tune2fs -l". Basic system info: # rpm -qf `which tune2fs` e2fsprogs-1.41.12-18.el6.x86_64 # cat /etc/redhat-release CentOS release 6.5 (Final) # uname -a Linux lnxutil8 2.6.32-504.12.2.el6.x86_64 #1 SMP Wed Mar 11 22:03:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux I did a little web searching on this; most of the hits were for much older systems, where (for example) the e2fsprogs only supported up to ext3, but the user had an ext4 filesystem. Obviously that's not the case here. In other words, the filesystem was created with the mkfs.ext4 binary from the same e2fsprogs package as the tune2fs binary I'm trying to use. Anyone ever seen anything like this? Thanks! 
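One diagnostic worth trying (a guess, not a known fix for this error): dump the superblock's feature list and compare it against one of the working ext4 filesystems, falling back to a backup superblock if the primary looks damaged. The device path is from the post; the backup-superblock location assumes 4k blocks, and the commands are printed rather than run:

```shell
dev=/dev/mapper/VolGroup_Share-LogVol_Share
# Show the feature flags the primary superblock claims:
echo "dumpe2fs -h $dev | grep -i features"
# Try a backup superblock (32768 is the usual first backup on a
# 4k-block ext4 filesystem):
echo "dumpe2fs -o superblock=32768 -h $dev"
```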
Re: [CentOS] Just need to vent
I haven't used gnome3, or any Linux desktop in earnest, for a long time... But I used to be semi-obsessed with tweaking and configuring various Linux desktops. And back when I was doing that, there were dozens of desktop programs available, from super lightweight bare-bones window managers, to full-blown desktop environments that do everything under the sun (and of course, everything in between). So my question is: while gnome3 might not float your boat, why not try one of the countless other desktops? It's all open source... FWIW, I was never a fan of full-blown desktop environments like kde/gnome simply because I had a preference for lightweight, standalone window managers. My favorites were fluxbox and icewm. Besides those, off the top of my head, I know of: blackbox, openbox, Joe's wm, window maker, and enlightenment 16 in the simple/lightweight window manager category. Xfce has already been mentioned, and there's also LXDE and the latest enlightenment in the full-on desktop environment category. A little elbow grease may be required, but I'm certain there's *a* Linux GUI out there for everyone. On Jan 24, 2016 12:20, "Joacim Melin" wrote: > > > On 24 Jan 2016, at 17:45, Peter Duffy wrote: > > > > On Sat, 2016-01-23 at 20:27 -0600, Frank Cox wrote: > >> On Sat, 23 Jan 2016 20:05:02 -0500 > >> Mark LaPierre wrote: > >> > >>> The main reason I'm still using, nearly obsolete, CentOS 6 is because I > >>> don't want to have to deal with Gnome 3. > >> > >> Install Mate on Centos 7 and you never have to touch Gnome 3. I did, > >> and my desktops don't look or work any different today than they did > >> under Centos 6. > >> > > > > Trouble is that when you go from 6 to 7, you also have the delights of > > systemd and grub 2 to contend with. > > > > I'm also still using CentOS 6, and currently have no desire to > > "upgrade". 
I'm still in shock after trying to upgrade to Red Hat 7 at > > work, and after the upgrade (apart from being faced with the gnome3 > > craziness) finding that many of the admin commands either didn't work, > > or only worked partially via a wrapper. (And the added insult that when > > I shut down the box, it gave a message something like: "shutdown status > > asserted" and then hung, so that it had to be power-cycled. Then when it > > came back up, it went through all the fs checks as though it had shut > > down ungracefully.) I allowed some of the senior developers to try the > > box themselves for a while, and based on their findings, it was decided > > to switch to Ubuntu (which (at least then) didn't use systemd,) together > > with Mate and XFCE. > > > > Similarly with others who have commented, I simply cannot understand why > > the maintainers of crucial components in linux have this thing about > > making vast changes which impact (usually adversely) on users and > > admins, without (apparently) any general discussion or review of the > > proposed changes. What happened to RFCs? Maybe it's a power thing - we > > can do it, so we're gonna do it, and if ya don't like it, tough! > > > > It would be very interesting to know how many other users are still on > > CentOS/Red Hat 6 as a result of reluctance to enjoy all the - erm - > > improvements in 7. Maybe it's time to fork CentOS 6 and make it look and > > behave like 7 without systemd (or even better, with some way of > > selecting the init methodology at install-time and afterwards), and with > > gnome2 (or a clear choice between 2 and 3). Call it DeCentOS. > > > > > > I'm still on 6.7 and have no plans to upgrade my 20+ servers running it. > KVM runs fine, all my services runs fine. > Everything is stable, fast enough and I can find my way around a CentOS > 6.x system like the palm of my hand. > > I tried installing CentOS 7 when it was released without knowing about all > the changes. 
I spent about an hour trying to understand what had happened > and where things were located. And with "trying" I mean searching, > googling and just feeling really frustrated. > > I then realised that it was simply not for me - lots of (IMHO unnecessary) > changes had been made and I guess when the time comes to really upgrade my > servers I will go with Ubuntu, FreeBSD or whatever seems to be the best > option. > > I'm sure there are technical reasons to upgrade to CentOS 7, I'm yet to be > bothered to find out though since it's damn near impossible to actually get > work done with it installed. > > A fork of CentOS 6 would be very, very, very interesting to run from my > point of view. > > Joacim 
Re: [CentOS] HDD badblocks
That's strange, I expected the SMART test to show some issues. Personally, I'm still not confident in that drive. Can you check cabling? Another possibility is that a cable has vibrated into a marginal state. Probably a long shot, but if it's easy to get physical access to the machine, and you can afford the downtime to shut it down, open up the chassis and re-seat the drive and cables. Every now and then I have PCIe cards that work fine for years, then suddenly disappear after a reboot. I re-seat them and they go back to being fine for years. So I believe vibration does sometimes play a role in mysterious problems that creep up from time to time. On Mon, Jan 18, 2016 at 5:39 AM, Alessandro Baggi wrote: > Il 18/01/2016 12:09, Chris Murphy ha scritto: >> >> What is the result for each drive? >> >> smartctl -l scterc >> >> >> Chris Murphy >> > SCT Error Recovery Control command not supported 
Re: [CentOS] HDD badblocks
Have you run a "long" SMART test on the drive? smartctl -t long <device> I'm not sure what's going on with your drive. But if it were mine, I'd want to replace it. If there are issues, that long smart check ought to turn up something, and in my experience, that's enough for a manufacturer to do a warranty replacement. On Jan 17, 2016 11:00, "Alessandro Baggi" wrote: > Hi list, > I've a notebook with C7 (1511). This notebook has 2 disks (640 GB) and I've > configured them with MD at level 1. Some days ago I've noticed some > critical slowdown while opening applications. > > First of all I've disabled acpi on disks. > > > I've checked the disks for badblocks 4 consecutive times for disk sda and sdb > and I've noticed a strange behaviour. > > On sdb there are no problems but with sda: > > 1) First run badblocks reports 28 badblocks on disk > 2) Second run badblocks reports 32 badblocks > 3) Third reports 102 badblocks > 4) Last run reports 92 badblocks. > > > Running smartctl after the last badblocks check I've noticed that > Current_Pending_Sector was 32 (not 92 as badblocks found). > > To force sector reallocation I've filled the disk up to 100%, ran badblocks > again and 0 badblocks found. > Running smartctl again, Current_Pending_Sector 0 but Reallocated_Event > Count = 0. > > Why does each consecutive run of badblocks report different results? > Why does smartctl not update Reallocated_Event_Count? > Badblocks found on sda increase/decrease without a clear reason. Can this > behaviour be related to RAID (if a disk has badblocks, can the badblocks > be replicated on the second disk)? > > What other tests can I perform to verify disk problems? > > Thanks in advance. 
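The long-test workflow as concrete commands (the device name is an assumption; the commands are printed rather than executed, since they need the real drive and root):

```shell
dev=/dev/sda
echo "smartctl -t long $dev"       # start the long self-test (takes hours)
# After the test finishes:
echo "smartctl -l selftest $dev"   # per-test results, incl. first bad LBA
echo "smartctl -A $dev | grep -E 'Realloc|Pending|Uncorrect'"
```

`smartctl -t long` returns immediately with an estimated completion time; the drive runs the test internally, so check `-l selftest` only after that window has passed.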
Re: [CentOS] Intel SSD
I always tell vendors I'm using RHEL, even though we're using CentOS. If you say CentOS, some vendors immediately throw up their hands, say "unsupported", and then won't even give you the time of day.

A couple of tricks for fooling tools into thinking they are on an actual RHEL system:

1. Modify /etc/redhat-release to say Red Hat Enterprise Linux, or whatever the actual RHEL systems have
2. Similarly modify /etc/issue

Another tip that has proven successful: run the vendor tool under strace. Sometimes you can get an idea of what it's trying to do and why it's failing. This is exactly what we did to determine why a vendor tool wouldn't work on CentOS. We had modified /etc/redhat-release (as in (1) above), but forgot about /etc/issue. Strace showed the program exiting immediately after an open() call to /etc/issue.

Good luck!

On Wed, Nov 18, 2015 at 9:24 AM, Michael Hennebry wrote:
> On Wed, 18 Nov 2015, Birta Levente wrote:
>
>> I have a Supermicro server; the motherboard has the C612 chipset and,
>> beside that, an integrated LSI3108 RAID controller.
>> Two Intel SSD DC S3710 200GB.
>> OS: CentOS 7.1, up to date.
>>
>> My problem is that the Intel SSD Data Center Tool (ISDCT) does not
>> recognize the SSD drives when they are connected to the standard S-ATA
>> ports on the motherboard, but through the LSI RAID controller it works.
>>
>> Does somebody know what could be the problem?
>>
>> I talked to Intel support and they said the problem is that CentOS is
>> not a supported OS ... only RHEL 7.
>> But if it's not supported, it should not work through the LSI controller
>> either.
>
> Perhaps the tool looks for the string RHEL.
> My recollection is that when IBM PC's were fairly new,
> IBM used that trick with some of its software.
> To work around that, some open source developers used the string "not IBM".
> I think this was pre-internet, so google might not work.
>
> If it's worth the effort, you might make another "CentOS" distribution,
> but call it "not RHEL". 
> > -- > Michael henne...@web.cs.ndsu.nodak.edu > "Sorry but your password must contain an uppercase letter, a number, > a haiku, a gang sign, a heiroglyph, and the blood of a virgin." > -- someeecards > ___ > CentOS mailing list > CentOS@centos.org > https://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
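The strace trick described above boils down to a couple of commands; "vendor-tool" is a placeholder name, and the trace line below is a constructed example of what a release-detection probe looks like in strace output.

```shell
# Run the uncooperative binary under strace, logging just the open calls
# ("vendor-tool" is a placeholder for the real program):
#   strace -f -e trace=open,openat -o tool.trace ./vendor-tool
# Release-detection probes then stand out in the trace, e.g.:
trace_line='open("/etc/issue", O_RDONLY) = 3'
printf '%s\n' "$trace_line" | grep -E 'open.*"(/etc/redhat-release|/etc/issue)"'
```

A syscall that immediately precedes the program's exit is usually the check that failed.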
Re: [CentOS] Running Fedora under CentOS via systemd-nspawn?
I actually built HandBrake 0.10.2 (the latest) under C7 (using a CentOS 7 nspawn container so as not to pollute the main system with the dozens of deps I installed). Full details here if you're interested: http://raw-sewage.net/articles/fedora-under-centos/

The problem with the newer version of HandBrake is that it requires (a very recent version of) gtk3, which in turn has several other deps that need to be upgraded on C7. But I worked through all that, and can provide all the spec files if anyone wants.

Anyway, the HandBrake problem is solved for me (in possibly multiple ways). But I'm just fascinated by the possibilities of nspawn, and wondering how far one can take it before instabilities are introduced. Consider how many people out there have problems similar to mine: they want to run CentOS for stability/reliability/vendor support, but also want some bleeding-edge software that's only available on Fedora (or Ubuntu or Arch). If it's "safe" to run these foreign distributions under CentOS via nspawn, then I think that's a simple solution.

Virtual machines are of course a possible solution, but they seem overkill for this class of problem. And, not to mention, possibly inefficient: something like HandBrake should benefit from running on bare metal, rather than under a virtualized CPU.

On Wed, Nov 18, 2015 at 1:11 PM, Lamar Owen <lo...@pari.edu> wrote:
> On 11/17/2015 12:39 PM, Matt Garman wrote:
>>
>> Now I have a need for a particular piece of software: HandBrake. I
>> found this site[1] that packages it for both Fedora and CentOS. But
>> the CentOS version is a little older, as the latest HandBrake requires
>> gtk3. The latest version is available for Fedora however.
>>
> Hmm, Nux Dextop (li.nux.ro) has HandBrake 0.9.9 for C7, but not yet 0.10.2.
> Nux! is around this list and might be able to shed light on what is needed
> for 0.10.2. 
> > ___ > CentOS mailing list > CentOS@centos.org > https://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
[CentOS] Running Fedora under CentOS via systemd-nspawn?
tl;dr - Is anybody "running" a Fedora system via systemd-nspawn under CentOS?

Long version: Before CentOS 7, I used chroot to create "lightweight containers" where I could cleanly add extra repos and/or software without the risk of "polluting" my main system (and potentially ending up in dependency hell). The primary driver for this was MythTV, which has dozens of deps that span multiple repos. Without "containing" the MythTV installation within a chroot environment, it would inevitably lead to conflicts when doing a yum update.

When I upgraded to CentOS 7, I found out that systemd-nspawn is "chroot on steroids". After figuring it all out, I replicated my MythTV "container", and things were great.

Now I have a need for a particular piece of software: HandBrake. I found this site[1] that packages it for both Fedora and CentOS. But the CentOS version is a little older, as the latest HandBrake requires gtk3. The latest version is available for Fedora, however.

So I thought: what if I could "run" Fedora under systemd-nspawn? Well, I definitely *can* do it. I copied the base Fedora filesystem layout off the Live CD, then booted into it via systemd-nspawn. I was able to add repos (including the one for HandBrake), and actually install and then run the HandBrake GUI.

So while this does work, I'm wondering if it's safe. I'm thinking that at least some of the Fedora tools assume that they are running under a proper Fedora kernel, whereas in my scheme they are running under a CentOS kernel. I'm sure there have been changes to the kernel API between the CentOS kernel and the Fedora kernel. Am I risking system stability by doing this?

Anyone have any thoughts or experience doing something like this, i.e. running "foreign" Linux distros under CentOS via systemd-nspawn? What if I tried to do this with Debian or Arch or Gentoo?

[1] http://negativo17.org/handbrake/

___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
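For anyone wanting to reproduce this, the rough shape of the setup is sketched below. The machine name, Fedora release number, and package list are illustrative assumptions, and the yum step presumes a Fedora repo definition is available on the CentOS host (the original poster instead copied the tree off the Live CD, which also works).

```shell
# Hypothetical walkthrough: bootstrap a Fedora tree, then boot it as a
# container with systemd-nspawn (run the commented commands as root).
machine=fedora-test
root=/var/lib/machines/$machine
#   yum -y --releasever=22 --installroot="$root" --nogpgcheck \
#       install systemd passwd yum fedora-release vim-minimal
#   systemd-nspawn -D "$root" passwd   # set a root password inside the tree
#   systemd-nspawn -bD "$root"         # boot it; log in on the virtual console
echo "container root: $root"
```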
Re: [CentOS] Screen
If you're just getting started with a screen multiplexer, I'd suggest starting with tmux. My understanding is that GNU screen has effectively been abandoned.

I used GNU screen for at least 10 years, and recently switched to tmux. As someone else said, in GNU screen, if you want to send ctrl-a to your application (e.g. shell or emacs), you can do ctrl-a followed by a "naked" a. I found this becomes so second nature that, for the rare time I'm not in screen/tmux, I habitually do the Ctrl-a a sequence!

tmux's default "action" sequence is Ctrl-b. Even without my history of Ctrl-a muscle memory, I think I'd find Ctrl-b awkward. I briefly tried to get used to it so I could live without a custom tmux config file, but just couldn't do it. So, here's my small ~/.tmux.conf file:

# remap Ctrl-b to Ctrl-a (to emulate behavior of GNU screen)
unbind C-b
set -g prefix C-a
bind C-a send-prefix

# use vi-like keybindings
set-window-option -g mode-keys vi

# emulate GNU screen's Ctrl-a a sequence to jump to beginning of line
bind a send-prefix

On Fri, Oct 30, 2015 at 6:39 AM, xaos wrote:
> Andrew,
>
> Don't do it man. Don't remap screen key sequences.
>
> I had the same issue. This is how I ultimately solved it.
> I mentally trained myself to think of screen
> as a room that I need to do a Ctrl-A in order to get in there.
>
> So, for bash, it is NOT a big deal anyway. Train your fingers to do a
> Ctrl-A then an a.
>
> It is just one extra keystroke.
>
> I got used to it within a week.
>
> -George
> On 10/30/15 7:13 AM, Scott Robbins wrote:
>>
>> On Fri, Oct 30, 2015 at 10:53:29AM +0100, Andrew Holway wrote:
>>>
>>> Hey
>>>
>>> I like to use Ctrl+A and Ctrl+E a lot to navigate my insanely big bash
>>> one-liners, but this is incompatible with screen, which has a binding to
>>> Ctrl-A. Is it possible to move the screen binding so I can have the best
>>> of both worlds?
>>
>> If you only make simple use of screen, then there's always tmux. 
>> It uses Ctrl+b by default, and one of the reasons is the issue you mention.
>>
>> (If you have a lot of complex uses of screen, then it becomes a bigger deal
>> to learn the new keyboard shortcuts, but many people just use its attach
>> and detach feature, and relearning those in tmux takes a few minutes.)
>>
>> If you are interested in trying it, I have my own very simple page with
>> links to a better page at http://srobb.net/screentmux.html

___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
[CentOS] Software RAID1 Drives
I have 3 4TB WD drives I want to put in a RAID1 array: two WD4000FYYZ and one WD4000F9YZ. All are enterprise class, but two are WD Re and one is WD Se. I ordered the first two thinking 2 drives in the RAID array would be sufficient, but later decided it's a long drive to the server, so I would rather have 3 drives. I ordered a third and accidentally did not get the EXACT same thing. Would there be ANY issues mixing these? ___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
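For reference, a three-way md RAID1 along these lines would be created as sketched below; device names are illustrative. mdadm sizes the array to the smallest member, so mixing Re and Se models is not a structural problem, though the array inherits the performance and error-recovery characteristics of its slowest/weakest member.

```shell
# Illustrative three-way mirror across the three 4TB drives:
#   mdadm --create /dev/md0 --level=1 --raid-devices=3 \
#         /dev/sda1 /dev/sdb1 /dev/sdc1
#   cat /proc/mdstat          # watch the initial resync
```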
Re: [CentOS] CentOS 6 gcc is a bit old
Take a look at Devtoolset, I think this will give you what you want: https://www.softwarecollections.org/en/scls/rhscl/devtoolset-3/ On Mon, Jun 29, 2015 at 1:56 PM, Michael Hennebry henne...@web.cs.ndsu.nodak.edu wrote: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11) is a bit old. There have been major changes since then. I'd like a newer version. If I have to, I expect that I can install from source. I'd rather not. Is there a CentOS 6-compatible repository from which I can get a newer version? Does a standard CentOS 7 repository have a newer version? Does a CentOS 7-compatible repository have a newer version? It's my understanding that to compile from source, I will need to keep the gcc I have. Otherwise I would have nothing to compile the source. I expect that providing the right options will let old and new co-exist. Is ensuring that I get the right gcc when I type gcc just a matter of having the right search path for gcc? Will I need to do anything interesting to ensure that the resulting executables run using the right libraries? I've installed from source before, but never to replace an existing compiler. My concern is that if I louse things up, the mess could be very hard to fix. -- Michael henne...@web.cs.ndsu.nodak.edu SCSI is NOT magic. There are *fundamental technical reasons* why it is necessary to sacrifice a young goat to your SCSI chain now and then. -- John Woods ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
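Typical devtoolset usage, assuming the SCL repo from the link above has been set up; the "-3" in the package and collection names tracks the devtoolset version.

```shell
# Install and enter the newer toolchain without touching the system gcc:
#   yum install devtoolset-3-gcc devtoolset-3-gcc-c++
#   scl enable devtoolset-3 bash   # subshell with the newer gcc first in PATH
#   gcc --version                  # now reports the devtoolset compiler
```

The system gcc in /usr/bin is left alone; the newer one lives under /opt/rh/devtoolset-3/, which sidesteps the replace-the-compiler worries in the original post.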
[CentOS] managing logins for different classes of servers
Our environment has several classes of servers, such as development, production, qa, utility, etc. Then we have all our users. There's no obvious mapping between users and server class. Some users may have access to only one class, some may span multiple classes, etc. And for maximum complexity, some classes of machines use local (i.e. /etc/passwd, /etc/shadow) authentication, others use Kerberos. With enough users and enough classes, it gets to be more than one can easily manage with a simple spreadsheet or other crude mechanism. Plus the ever-growing risk of giving a user access to a class he shouldn't have. Is there a simple centralized solution that can simplify the management of this? One caveat though is that our production class machines should not have any external dependencies. These are business-critical, so we try to minimize any single point of failure (e.g. a central server). Plus the production class machines are distributed in multiple remote locations. Any thoughts? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Redistributing pre-installed CentOS
Hello List, I've got a project that will involve distributing pre-configured CentOS servers to customer sites. No packages have been modified, and it's a pretty straightforward content caching server. The packages are all from CentOS repositories, nginx from EPEL, and Puppet packages from Puppet Labs. The only shell on these boxes is pretty locked down however, and I know some licenses, like Apache for example, require displaying the license. Any tips on the least cumbersome way to avoid violating any of the involved licenses? I remember many years ago, I think Debian had a command to display various licenses in a consolidated way. Is there a similar facility available, or a page we could link to? -Matt ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
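rpm itself can enumerate the licenses in play, which may be the least cumbersome starting point, since the License tag is part of every package's metadata. The printf below is a constructed stand-in for real query output.

```shell
# Query every installed package's License tag (run on the built image):
#   rpm -qa --qf '%{NAME}: %{LICENSE}\n' | sort -u
# Sample of the kind of output that produces, deduped the same way:
licenses=$(printf 'nginx: BSD\nbash: GPLv3+\nbash: GPLv3+\n' | sort -u)
printf '%s\n' "$licenses"
```

Piping that into a file shipped on the box (or linked from its web UI) would at least make every license text's name discoverable.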
Re: [CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
On Thu, Apr 30, 2015 at 7:31 AM, Peter van Hooft ho...@natlab.research.philips.com wrote:

> You may want to try reducing sunrpc.tcp_max_slot_table_entries. In CentOS 5
> the number of slots is fixed:
>
>     sunrpc.tcp_slot_table_entries = 16
>
> In CentOS 6, this number is dynamic with a maximum of
> sunrpc.tcp_max_slot_table_entries, which by default has a value of 65536.
> We put that in /etc/sysconfig/modprobe.d/sunrpc.conf:
>
>     options sunrpc tcp_max_slot_table_entries=128
>
> Make that /etc/modprobe.d/sunrpc.conf, of course.

This appears to be the smoking gun we were looking for, or at least a significant piece of the puzzle. We actually tried this early on in our investigation, but we were changing it via sysctl, which apparently has no effect. Your email convinced me to try again, but this time configuring the parameter via modprobe.

In our case, 128 was still too high, so we dropped it all the way down to 16. Our understanding is that 16 is the CentOS 5 value. What we're seeing now is that our apps are starved for data, so it looks like we might have to nudge it up. In other words, either there's something else at play that we're not aware of, or the meaning of that parameter is different between CentOS 5 and CentOS 6.

Anyway, thank you very much for the suggestion. You turned on the light at the end of the tunnel! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
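A minimal sketch of persisting the value the thread settled on (16, matching the fixed CentOS 5 slot count). The file is written to /tmp here purely for illustration; on a real system it would be installed as /etc/modprobe.d/sunrpc.conf.

```shell
# Write the module option (illustrative path; install as
# /etc/modprobe.d/sunrpc.conf on the real system):
printf 'options sunrpc tcp_max_slot_table_entries=%d\n' 16 > /tmp/sunrpc.conf
cat /tmp/sunrpc.conf
# After a reboot (or a sunrpc module reload), the live value shows up in:
#   cat /sys/module/sunrpc/parameters/tcp_max_slot_table_entries
```

Checking the sysfs parameter afterwards confirms the modprobe option took effect, which a sysctl change alone would not.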
Re: [CentOS] Cron Issues
> Check selinux context for directory?

This is CentOS 7 minimal running in an OpenVZ container. As far as I can tell, selinux is not present; sestatus returns "command not found".

I have noanacron installed on a fresh CentOS 7 install. I added these settings (nano /etc/cron.d/0hourly):

*/5 * * * * root run-parts /etc/cron.fiveminutes
*/1 * * * * root run-parts /etc/cron.minute
0,30 * * * * root run-parts /etc/cron.halfhour

and then created the directories for them. Now I keep getting these errors in the secure log:

pam_systemd(crond:session): Failed to create session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Can anyone tell me how to resolve it? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
On Wed, Apr 29, 2015 at 10:51 AM, m.r...@5-cent.us wrote: The server in this case isn't a Linux box with an ext4 file system - so that won't help ... What kind of filesystem is it? I note that xfs also has barrier as a mount option. The server is a NetApp FAS6280. It's using NetApp's filesystem. I'm almost certain it's none of the common Linux ones. (I think they call it WAFL IIRC.) Either way, we do the NFS mount read-only, so write barriers don't even come into play. E.g., with your original example, if we unzipped something, we'd have to write to the local disk. Furthermore, in low load situations, the NetApp read latency stays low, and the 5/6 performance is fairly similar. It's only when the workload gets high, and it turn this aggressive demand is placed on the NetApp, that we in turn see overall decreased performance. Thanks for the thoughts! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Cron Issues
I have noanacron installed on a fresh CentOS 7 install. I added these settings (nano /etc/cron.d/0hourly):

*/5 * * * * root run-parts /etc/cron.fiveminutes
*/1 * * * * root run-parts /etc/cron.minute
0,30 * * * * root run-parts /etc/cron.halfhour

and then created the directories for them. Now I keep getting these errors in the secure log:

pam_systemd(crond:session): Failed to create session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Can anyone tell me how to resolve it? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
We have a compute cluster of about 100 machines that do a read-only NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on these boxes are analysis/simulation jobs that constantly read data off the NAS. We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5. We did a piecemeal upgrade, usually upgrading five or so machines at a time, every few days. We noticed improved performance on the CentOS 6 boxes. But as the number of CentOS 6 boxes increased, we actually saw performance on the CentOS 5 boxes decrease. By the time we had only a few CentOS 5 boxes left, they were performing so badly as to be effectively worthless. What we observed in parallel to this upgrade process was that the read latency on our NetApp device skyrocketed. This in turn caused all compute jobs to actually run slower, as it seemed to move the bottleneck from the client servers' OS to the NetApp. This is somewhat counter-intuitive: CentOS 6 performs faster, but actually results in net performance loss because it creates a bottleneck on our centralized storage. All indications are that CentOS 6 seems to be much more aggressive in how it does NFS reads. And likewise, CentOS 5 was very polite, to the point that it basically got starved out by the introduction of the 6.5 boxes. What I'm looking for is a deep dive list of changes to the NFS implementation between CentOS 5 and CentOS 6. Or maybe this is due to a change in the TCP stack? Or maybe the scheduler? We've tried a lot of sysctl tcp tunings, various nfs mount options, anything that's obviously different between 5 and 6... But so far we've been unable to find the smoking gun that causes the obvious behavior change between the two OS versions. Just hoping that maybe someone else out there has seen something like this, or can point me to some detailed documentation that might clue me in on what to look for next. Thanks! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] nfs (or tcp or scheduler) changes between centos 5 and 6?
On Wed, Apr 29, 2015 at 10:36 AM, Devin Reade g...@gno.org wrote: Have you looked at the client-side NFS cache? Perhaps the C6 cache is either disabled, has fewer resources, or is invalidating faster? (I don't think that would explain the C5 starvation, though, unless it's a secondary effect from retransmits, etc.) Do you know where the NFS cache settings are specified? I've looked at the various nfs mount options. Anything cache-related appears to be the same between the two OSes, assuming I didn't miss anything. We did experiment with the noac mount option, though that had no effect in our tests. FWIW, we've done a tcpdump on both OSes, performing the same tasks, and it appears that 5 actually has more chatter. Just looking at packet counts, 5 has about 17% more packets than 6, for the same workload. I haven't dug too deep into the tcpdump files, since we need a pretty big workload to trigger the measurable performance discrepancy. So the resulting pcap files are on the order of 5 GB. Regarding the cache, do you have multiple mount points on a client that resolve to the same server filesystem? If so, do they have different mount options? If so, that can result in multiple caches instead of a single disk cache. The client cache can also be bypassed if your application is doing direct I/O on the files. Perhaps there is a difference in the application between C5 and C6, including whether or not it was just recompiled? (If so, can you try a C5 version on the C6 machines?) No multiple mount points to the same server. No application differences. We're still compiling on 5, regardless of target platform. If you determine that C6 is doing aggressive caching, does this match the needs of your application? That is, do you have the situation where the client NFS layer does an aggressive read-ahead that is never used by the application? That was one of our early theories. 
On 6, you can adjust this via /sys/class/bdi/X:Y/read_ahead_kb (use stat on the mountpoint to determine X and Y). This file doesn't exist on 5. But we tried increasing and decreasing it from the default (960), and didn't see any changes. Are C5 and C6 using the same NFS protocol version? How about TCP vs UDP? If UDP is in play, have a look at fragmentation stats under load. Yup, both are using tcp, protocol version 3. Are both using the same authentication method (ie: maybe just UID-based)? Yup, sec=sys. And, like always, is DNS sane for all your clients and servers? Everything (including clients) has proper PTR records, consistent with A records, et al? DNS is so fundamental to everything that if it is out of whack you can get far-reaching symptoms that don't seem to have anything to do with DNS. I believe so. I wouldn't bet my life on it. But there were certainly no changes to our DNS before, during or since the OS upgrade. You may want to look at NFSometer and see if it can help. Haven't seen that, will definitely give it a try! Thanks for your thoughts and suggestions! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
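Since finding the right X:Y under /sys/class/bdi is non-obvious, one way to derive it from stat's hex device number is sketched below. This is an assumption-laden sketch: the minor-number math only covers the common small-device case, and "/" is used so the example is self-contained (substitute the NFS mountpoint on a real system).

```shell
# Derive major:minor for the filesystem a path lives on:
dev=$(stat -c '%D' /)            # device number in hex, e.g. "fd00"
devnum=$(printf '%d' "0x$dev")
major=$((devnum >> 8))           # simple split; fine for minors < 256
minor=$((devnum & 255))
echo "$major:$minor"
# The read-ahead knob then lives at:
#   /sys/class/bdi/$major:$minor/read_ahead_kb
```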
Re: [CentOS] CentOS 7 NFS client problems
What does your /etc/idmapd.conf look like on the server side?

I fought with this quite a bit a while ago, but my use case was a bit different, and I was working with CentOS 5 and 6. Still, the kicker for me was updating the [Translation] section of /etc/idmapd.conf. Mine looks like this:

[Translation]
Method = nsswitch
GSS-Methods = nsswitch,static

You said you're not using Kerberos or LDAP, so I'm guessing you can leave out the GSS-Methods line entirely, and make your Method line nsswitch,static.

Furthermore, in my /etc/idmapd.conf file, I have a [Static] section which, according to my comments, maps GSS-authenticated names to local user names. So mine looks kind of like this:

[Static]
someuser@REALM = localuser

Again, since you're not using GSS, I'm not sure if you can get away with something like

[Static]
joe = joe

But it's probably worth trying/experimenting. I hope that can be of some help!

On Thu, Apr 23, 2015 at 3:11 PM, Devin Reade g...@gno.org wrote:

#define TL;DR
Despite idmapd running, usernames/IDs don't get mapped properly. Looking for a workaround.
#undef TL;DR

I'm trying to get a new CentOS 7.1 workstation running, and having some problems with NFS filesystems. The server is a fully patched CentOS 6 server. On the NFS filesystem, there are two subdirectories owned by a regular user (joe). (There are actually more, and by multiple users, but I'll just show the two.) That user exists on both the NFS server and this CentOS 7 NFS client. However, the user on the client machine is unable to perform various operations. (The operations work when logged into the server.)

$ whoami
joe
$ cd /nfs
$ ls -l
drwx------. 6 joe joe 4096 Apr 23 11:20 one
drwxr-xr-x. 4 joe joe 4096 Dec 14  2011 two
$ cd one
one: Permission denied.
$ cd two
$ ls
subdir1 subdir2
$ touch testfile
touch: cannot touch testfile: Permission denied

mount(1) shows that the filesystem is mounted rw. The server has it exported rw to the entire subnet. 
Other machines (CentOS 5) mount the same filesystems without a problem. Looks a lot like an idmapd issue, right? On the server:

# id joe
uid=501(joe) gid=501(joe) groups=501(joe)

Back on the client:

$ ps auxww | grep idmap | grep -v grep
$ id joe
uid=1000(joe) gid=1000(joe) groups=1000(joe)
$ cd /nfs
$ ls -n
drwx------. 6 1000 1000 4096 Apr 23 11:20 one
drwxr-xr-x. 4 1000 1000 4096 Dec 14  2011 two

So it looks like the name/UID mapping is correct even though the idmapd daemon isn't running on the client. (It looks like CentOS 7 only starts idmapd when it's running an NFS *server*.)

# systemctl list-units | grep nfs
nfs.mount                     loaded active mounted /nfs
proc-fs-nfsd.mount            loaded active mounted NFSD configuration filesystem
var-lib-nfs-rpc_pipefs.mount  loaded active mounted RPC Pipe File System
nfs-config.service            loaded active exited  Preprocess NFS configuration
nfs-client.target             loaded active active  NFS client services

The behavior was tested again with SELinux in permissive mode; no change. Spelunking a bit more shows some similar behavior for other distros:

https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/966734
https://bugzilla.linux-nfs.org/show_bug.cgi?id=226

Yep, this is a situation where LDAP and Kerberos aren't in play. And the CentOS 5, CentOS 6, and other UNIXen boxes are using consistent UID/GID mappings. However, CentOS 7 (well, RHEL 7) changed the minimum UID/GID for regular accounts, so when the account was created on the latter, the UID is out of sync. So much for idmapd (without the fixes involved in the above URLs).

Has anyone else run into this and have a solution other than forcing UIDs to match?

Devin ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] centos kernel changelog?
I'm probably overlooking something simple, but I can't seem to find a concise changelog for the rhel/centos kernel. I'm on an oldish 6.5 kernel (2.6.32-431), and I want to look at the changes and fixes for every kernel that has been released since, all the way up to the current 6.6 kernel. Anyone have a link to this? Thanks! Matt ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] centos kernel changelog?
On Thu, Apr 9, 2015 at 8:49 AM, Johnny Hughes joh...@centos.org wrote: rpm -qp --changelog rpm-name | less NOTE: This works for any kernel RPM in any version of CentOS ... you can download the latest 6 RPM from here: http://mirror.centos.org/centos/6/updates/x86_64/Packages/ (currently kernel-2.6.32-504.12.2.el6.x86_64.rpm) Thank you Johnny, that was exactly what I needed, and immensely helpful! One more quick question: what does the number in brackets at the end of most lines represent? For example: - [fs] nfs: Close another NFSv4 recovery race (Steve Dickson) [1093922] What does the 1093922 mean? Thanks again! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Cronjob and sudo
I need to remove empty files out of a directory that are over 6 hours old, so I created this script and put it in cron.hourly:

#!/bin/sh
cd /var/list
sudo -u matt find /var/list -mmin +360 -empty -user matt -exec rm {} \;

I want to run it as matt rather than root for just an added bit of safety. Problem is, I get this:

sudo: sorry, you must have a tty to run sudo

Is there another way to do this? As I understand it, the reason for this is "requiretty" in the sudo config. If that improves security, I would rather not change that setting. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Cronjob and sudo
> I need to remove empty files out of a directory that are over 6 hours old
> so I created this script and put it in cron.hourly.

For what it's worth, we no longer have requiretty in the package in Fedora, so eventually that change will probably make it down to CentOS. Overall, the security benefit is vanishingly small and the inconvenience high.

https://bugzilla.redhat.com/show_bug.cgi?id=1020147#c9

Surprised it's still in CentOS 7, actually. I do think that the suggestion of using /etc/cron.d and cron's own user feature is better in this case, though. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
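The /etc/cron.d approach works because cron.d entries (unlike user crontabs) carry an extra user field between the schedule and the command, so the job runs as that user with no sudo and no tty involved. A sketch reusing the paths from the original script; the filename is arbitrary:

```shell
# /etc/cron.d/clean-var-list -- the sixth field ("matt") is the run-as user
0 * * * * matt find /var/list -mmin +360 -empty -user matt -exec rm {} \;
```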
Re: [CentOS] Making custom USB install media
Almost. I've read that, and I can get it to work. I guess I could phrase my question as: "Given that the default images for 6.6 and 7.0 do this, how do I make custom media that does it too?" When I make custom media, it only works off an actual DVD. I need help making custom media that has that USB/EFI magic in it. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Making custom USB install media
Well I feel like a fool, but I owe you one anyways. As noted in my message, isohybrid was one of the first things I tried, and it seemed to work, but the installer would bail saying can't find file, seemingly referring to the squashfs file. Now I'm pretty sure it was saying it couldn't find my kickstart file. I'd modified the menu to say ks=cdrom:. I restored and tried the default install option, works fine. Setting my ks line to ks=hd:LABEL . . . also works fine. My custom ks and packages and everything work straight from the initial menu. If anyone could fill me in on exactly what that partition structure on the official images is doing though, I'd be curious to know. -Matt On Tue, Jan 13, 2015 at 10:29 PM, Alfred von Campe alf...@von-campe.com wrote: On Jan 13, 2015, at 23:16, Matt m...@mattlantis.com wrote: I guess I could phrase my question as Given that the default image for 6.6 and 7.0 do this, how do I make custom media that does it to? When I make custom media, it only works off an actual DVD. Have you tried /usr/bin/isohybrid on the ISO file? Alfred ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
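For the archives, the isohybrid step that resolved this can be sketched as below; filenames are illustrative. The --uefi flag is what adds the EFI boot entry to the hybrid MBR layout, which is consistent with the nested EFI partition visible in fdisk on the official images.

```shell
# Make the ISO bootable from USB on both BIOS and EFI machines:
#   isohybrid --uefi custom-centos7.iso
# Then write it raw to the stick (sdX = the USB device itself, not a partition):
#   dd if=custom-centos7.iso of=/dev/sdX bs=4M
```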
[CentOS] Making custom USB install media
Hey List, I apologize for how broad this may be, but hopefully someone here can help me out. I need to make custom kickstart installers to be used in the field to install microservers (Intel NUCs, EFI boot) that don't have optical drives and may not have internet connections. I can make a custom ISO just fine using the pretty common steps, specifically following what's here: http://smorgasbork.com/component/content/article/35-linux/151-building-a-custom-centos-7-kickstart-disc-part-1 My custom kickstart is on the disk, and along with all my packages, and everything works great on a VM as a CD image. The problem is I can't get this to install from USB. I can get it to boot from USB using the isohybrid tool, but the installer fails, can't find the squashfs image. I don't think this is the right way to do it. I know the CentOS 7 (and I believe 6.5) images have a special EFI partition that I believe is why USB works with the base image. The problem is I can't really find any information on how this works or how to recreate it. When I run fdisk -l on the image (as a loopback device) I see that the main image is on a partition marked as none, and then there's an EFI partition with start and end points actually inside the first partition? I'm afraid I don't really know what this is doing, or how to go about recreating. Can anyone explain this to me or point me at some documentation? Am I missing something obvious, like say the old Revisor tool that I should be using to make my life much easier? Any help would be much appreciated, -Matt ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS-virt] virsh list hangs / guests not starting automatically
I followed the wiki[1] to create a KVM virtual machine using bridged network on CentOS 6.5. It seemed to work fine on initial setup. (FWIW I'm trying to run a MythBuntu guest.) However, after a reboot, it doesn't auto-start the VMs. Shortly after boot, if I go into virsh, then do a list, it just hangs. Likewise, if I go into virt-manager, it just hangs with the message "connecting". Kernel version is: 2.6.32-431.29.2.el6.x86_64 Relevant package versions: libvirt.x86_64 0.10.2-29.el6_5.12 libvirt-client.x86_64 0.10.2-29.el6_5.12 libvirt-python.x86_64 0.10.2-29.el6_5.12 python-virtinst.noarch 0.600.0-18.el6 virt-manager.x86_64 0.9.0-19.el6 virt-top.x86_64 1.0.4-3.15.el6 virt-viewer.x86_64 0.5.6-8.el6_5.3 qemu-img.x86_64 2:0.12.1.2-2.415.el6_5.14 qemu-kvm.x86_64 2:0.12.1.2-2.415.el6_5.14 CPU is a Xeon E3-1230v3. I have the virtualization setting enabled in the BIOS. I googled on this, and saw a bunch of talk about two years ago regarding issues with the libvirt packages having a deadlock bug. But I think the versions of the relevant packages that I have installed are new enough to have fixes for that. I also happened across an earlier post to this list[2], where it seemed someone was having a similar problem. I was previously attempting to use balance-rr and 802.3ad bonding modes on my host. However, I just changed to using active-backup and the problem remains. I have in /etc/libvirt/libvirtd.conf the following three lines (the rest is stock, i.e. all comments): log_level = 2 log_filters= log_outputs=1:file:/var/log/libvirt/libvirt.log Below I posted the contents of the libvirt log file after doing a service libvirt start. Anyone ever fought this before? Thanks!
[1] http://wiki.centos.org/HowTos/KVM [2] http://lists.centos.org/pipermail/centos-virt/2014-March/003722.html /var/log/libvirt/libvirt.log output: 2014-10-14 16:47:11.150+0000: 4657: info : libvirt version: 0.10.2, package: 29.el6_5.12 (CentOS BuildSystem http://bugs.centos.org, 2014-09-01-13:44:02, c6b8.bsys.dev.centos.org) 2014-10-14 16:47:11.150+0000: 4657: info : virNetlinkEventServiceStart:517 : starting netlink event service with protocol 0 2014-10-14 16:47:11.151+0000: 4657: info : virNetlinkEventServiceStart:517 : starting netlink event service with protocol 15 2014-10-14 16:47:11.154+0000: 4668: info : dnsmasqCapsSetFromBuffer:667 : dnsmasq version is 2.48, --bind-dynamic is NOT present, SO_BINDTODEVICE is NOT in use 2014-10-14 16:47:11.157+0000: 4668: info : networkReloadIptablesRules:1925 : Reloading iptables rules 2014-10-14 16:47:11.157+0000: 4668: info : networkRefreshDaemons:1287 : Refreshing network daemons 2014-10-14 16:47:11.278+0000: 4668: info : networkStartNetwork:2422 : Starting up network 'default' 2014-10-14 16:47:11.290+0000: 4668: info : virStorageBackendVolOpenCheckMode:1085 : Skipping special dir '.' 2014-10-14 16:47:11.290+0000: 4668: info : virStorageBackendVolOpenCheckMode:1085 : Skipping special dir '..'
2014-10-14 16:47:11.352+0000: 4668: info : qemudStartup:754 : Unable to create cgroup for driver: No such device or address 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'cpu' 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'cpuacct' 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'cpuset' 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'memory' 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'devices' 2014-10-14 16:47:11.353+0000: 4668: info : qemudLoadDriverConfig:411 : Configured cgroup controller 'blkio' 2014-10-14 16:47:11.509+0000: 4668: info : virDomainLoadAllConfigs:14696 : Scanning for configs in /var/run/libvirt/qemu 2014-10-14 16:47:11.527+0000: 4668: info : virDomainLoadAllConfigs:14696 : Scanning for configs in /etc/libvirt/qemu 2014-10-14 16:47:11.527+0000: 4668: info : virDomainLoadAllConfigs:14718 : Loading config file 'mythbuntu.xml' 2014-10-14 16:47:11.529+0000: 4668: info : qemuDomainSnapshotLoad:484 : Scanning for snapshots for domain mythbuntu in /var/lib/libvirt/qemu/snapshot/mythbuntu ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] virsh list hangs / guests not starting automatically
I just wanted to follow up to add that eventually, the virtual machine did start, and now virsh list works as expected. But it took nearly 30 minutes. The updated libvirt.log is shown below. Notice the huge jump in time, from 16:47 to 17:14. (Side question: it appears the timestamps are UTC, rather than my local time; any way to address that?) 2014-10-14 16:47:11.527+0000: 4668: info : virDomainLoadAllConfigs:14718 : Loading config file 'mythbuntu.xml' 2014-10-14 16:47:11.529+0000: 4668: info : qemuDomainSnapshotLoad:484 : Scanning for snapshots for domain mythbuntu in /var/lib/libvirt/qemu/snapshot/mythbuntu 2014-10-14 17:14:41.751+0000: 4668: info : virNetDevProbeVnetHdr:94 : Enabling IFF_VNET_HDR 2014-10-14 17:14:41.805+0000: 4668: info : virSecurityDACSetOwnership:296 : Setting DAC user and group on '/home/kvm/mythbuntu.img' to '107:107' 2014-10-14 17:14:41.806+0000: 4668: info : virSecurityDACSetOwnership:296 : Setting DAC user and group on '/mnt/mythtv1/mythbackend_recordings' to '107:107' 2014-10-14 17:14:42.084+0000: 4668: info : lxcSecurityInit:1380 : lxcSecurityInit (null) 2014-10-14 17:14:42.084+0000: 4668: info : virDomainLoadAllConfigs:14696 : Scanning for configs in /var/run/libvirt/lxc 2014-10-14 17:14:42.084+0000: 4668: info : virDomainLoadAllConfigs:14696 : Scanning for configs in /etc/libvirt/lxc 2014-10-14 17:14:42.089+0000: 4659: error : virFileReadAll:462 : Failed to open file '/proc/4836/stat': No such file or directory 2014-10-14 17:14:42.090+0000: 4660: error : virFileReadAll:462 : Failed to open file '/proc/8017/stat': No such file or directory 2014-10-14 17:26:34.679+0000: 4661: info : remoteDispatchAuthList:2398 : Bypass polkit auth for privileged client pid:11343,uid:0
Re: [CentOS] centos 6.5 input lag
Update on this problem: From another system, I initiated a constant ping on my laggy server. I noticed that every 10--20 seconds, one or more ICMP packets would drop. These drops were consistent with the input lag I was experiencing. I did a web search for "linux periodically hangs" and found this Serverfault post that had a lot in common with my symptoms: http://serverfault.com/questions/371666/linux-bonded-interfaces-hanging-periodically I do in fact have bonded interfaces on the laggy server. When I checked the bonding config, I realized a while ago I had changed from balance-rr / mode 0 to 802.3ad / mode 4. (I did this because I kept getting "bond0: received packet with own address as source address" when using balance-rr with a bridge interface. The bridge interface was for using KVM.) For now, I simply disabled one of the slave interfaces, and the lag / dropped ICMP packets problem has gone away. Like the Serverfault poster, I have an HP ProCurve 1800-24g switch. The switch is supposed to support 802.3ad link aggregation. It's not a managed switch, so I (perhaps incorrectly) assumed that 802.3ad would magically just work. Either there is more required to make it work, or its implementation is broken. Curiously, however, running my bond0 in 802.3ad mode did work without any issue for over a month. Anyway, hopefully this might help someone else struggling with a similar problem. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
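If the goal is to keep the bond rather than run on a single NIC, active-backup needs no cooperation from the switch, whereas 802.3ad requires LACP to actually be configured on the switch side — an unmanaged switch cannot negotiate it, which fits the symptoms above. A sketch of the relevant config; device names and option values here are examples, not taken from the thread:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch, example values)
DEVICE=bond0
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"   # instead of mode=802.3ad
BRIDGE=br0        # bond enslaved to the KVM bridge, as in the thread
ONBOOT=yes
```

With active-backup only one slave carries traffic at a time, so the "received packet with own address" bridge problem seen with balance-rr should not occur either.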
Re: [CentOS] centos 6.5 input lag
On Thu, Oct 9, 2014 at 11:20 PM, Joseph L. Brunner j...@affirmedsystems.com wrote: Is it under some type of ddos attack? What's running on this machine? In front of it? A DDOS attack seems unlikely, though I suppose it's possible. Sitting between the lagging machine and the Internet is a pfSense box. All the other machines in the house have no issues, and they all route through the pfSense system. Right now, the only stuff running on it: - CrashPlan (java backup application) - Munin - Apache (only for Munin, no external access [i.e. no port forwarding from pfSense]) - mpd (music player daemon) Thanks, Matt ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] centos 6.5 input lag
On Fri, Oct 10, 2014 at 4:11 PM, Joseph L. Brunner j...@affirmedsystems.com wrote: If this is a server - is it possible your raid card battery died? It is a server, but a home file server. The raid card has no battery backup, and in fact has been flashed to pure HBA mode. Actual RAID'ing is done at the software level. The only other thing on the hardware side that comes to mind is actual bad sectors if this is not a raided virtual drive. The system has eight total drives: two SSDs in raid-1 for the OS, five 3.5" spinning drives in RAID-6, and a single 3.5" drive normally used for mythtv recordings (though mythtv has been stopped for a long time now to try to debug the issue). From the OS side can you keep the box up long enough to do a yum update? Yes, I updated everything except packages beginning with l (el / lowercase 'L') due to that generating a number of conflicts that I haven't had time to resolve. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] centos 6.5 input lag
I have a CentOS 6.5 x86_64 system that's been running problem-free for quite a while. Recently, it's locked up hard several times. It's a headless server, but I do have IP KVM. However, when it's locked up, all I can see are a few lines of kernel stack trace. No hints to the problem in the system logs. I even enabled remote logging of syslog, hoping to catch the errors that way. No luck. I ran memtest86+ for about 36 hours, no problems. I've tried to strip away just about all running services. It's just a home file server. I haven't had a crash in a while, but I also haven't had it running very long. But even while it's up, I have severe input lag in the shell. I'll type a few characters, and two to 10 or so seconds pass before anything echoes to the screen. I've checked top, practically zero CPU load. It's not swapping - 16 GB of RAM, 0 swap used. The most memory-heavy process is java (for CrashPlan backups). iostat shows 0% disk utilization. Anyone seen anything like this? Where else can I check to try to determine the source of this lag (which I suspect might be related to the recent crashes)? Thanks, Matt ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
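One cheap way to quantify stalls like this is to ping the box from another host and flag gaps in the icmp_seq numbers, since a gap means replies were lost during a hang. A sketch — the address and the sample output lines below are invented for the demo; in practice you would pipe `ping <host>` (or a saved log of it) into the awk filter:

```shell
# Flag gaps in icmp_seq from ping output (a gap = lost replies).
# The three printf lines stand in for real `ping 192.168.1.10` output.
printf '%s\n' \
  '64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=0.2 ms' \
  '64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=0.2 ms' \
  '64 bytes from 192.168.1.10: icmp_seq=5 ttl=64 time=0.3 ms' |
awk '/icmp_seq=/ {
        seq = $0
        sub(/.*icmp_seq=/, "", seq); sub(/ .*/, "", seq); seq += 0
        if (last && seq != last + 1)
            print "dropped " (seq - last - 1) " packet(s) before seq " seq
        last = seq
     }'
# prints: dropped 2 packet(s) before seq 5
```

If the drops line up with the input lag (as the follow-up in this thread found), the problem is in the network path, not the disks.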
[CentOS] virsh list hangs
I followed the wiki[1] to create a KVM virtual machine using bridged network on CentOS 6.5. It seemed to work fine on initial setup. However, after a boot, it doesn't auto-start the VMs, or at least, something has to timeout (a *very* long time, on the order of 15--30 minutes) before they can be started. Shortly after boot, if I go into virsh, then do a list, it just hangs. Likewise, if I go into virt-manager, it just hangs connecting. Kernel version is: 2.6.32-431.29.2.el6.x86_64 Relevant package versions: libvirt.x86_64 0.10.2-29.el6_5.12 libvirt-client.x86_64 0.10.2-29.el6_5.12 libvirt-python.x86_64 0.10.2-29.el6_5.12 python-virtinst.noarch 0.600.0-18.el6 virt-manager.x86_64 0.9.0-19.el6 virt-top.x86_64 1.0.4-3.15.el6 virt-viewer.x86_64 0.5.6-8.el6_5.3 qemu-img.x86_64 2:0.12.1.2-2.415.el6_5.14 qemu-kvm.x86_64 2:0.12.1.2-2.415.el6_5.14 CPU is a Xeon E3-1230v3. I have the virtualization setting enabled in the BIOS. I googled on this, and saw a bunch of talk about two years ago regarding issues with the libvirt packages having a deadlock bug. But I think the versions of the relevant packages that I have installed are new enough to have fixes for that. Anyone ever fought this before? Thanks! [1] http://wiki.centos.org/HowTos/KVM ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
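Separately from the hang itself, a guest only starts at boot if it has been marked for autostart. A sketch of checking that and of watching the daemon while `virsh list` blocks — the domain name is a placeholder, and the log path assumes a `log_outputs=1:file:/var/log/libvirt/libvirt.log` setting in libvirtd.conf:

```shell
# Mark a guest to start when libvirtd starts, then verify the flag:
virsh autostart guestname
virsh dominfo guestname | grep -i autostart
# While `virsh list` hangs, watch what libvirtd is blocking on:
tail -f /var/log/libvirt/libvirt.log
```

In this thread the 30-minute gap in the log turned out to coincide with the bonded-network problems described in the "input lag" thread, so the network stack is worth suspecting before libvirt itself.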
[CentOS] Centos 6 Software RAID 10 Setup
I am setting up a Centos 6.5 box to host some OpenVZ containers. I have a 120GB SSD I am going to use for boot, / and swap. Should allow for fast boots. I have a 4TB drive I am going to mount as /backup and use to move container backups to, etc. The remaining four 3TB drives I am putting in a software RAID 10 array mounted as /vz, and all the containers will go there. It will have by far the most activity. Does this layout look ok?
RAID Devices:
  md0 (/dev/md0)  5722917  /vz  ext4
Hard Drives:
  sda (/dev/sda): sda1 500 /boot ext4; sda2 64000 swap; sda3 49972 / ext4
  sdb (/dev/sdb): sdb1 3815446 /backup ext4
  sdc (/dev/sdc): sdc1 2861587 md0 software RAID
  sdd (/dev/sdd): sdd1 2861587 md0 software RAID
  sde (/dev/sde): sde1 2861587 md0 software RAID
  sdf (/dev/sdf): sdf1 2861587 md0 software RAID
___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
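A sketch of how that four-disk RAID 10 could be created by hand with mdadm, using the device names from the layout above — this is destructive, so double-check the names before running anything like it:

```shell
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext4 /dev/md0                         # filesystem for /vz
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition
cat /proc/mdstat                           # watch the initial sync
```

RAID 10 is a reasonable choice for container hosting: random writes cost two disk writes instead of the read-modify-write penalty of RAID 5/6.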
[CentOS] 2.5 to 3.5 Conversion Tray
Anyone know of a 2.5 to 3.5 converter so I can put a 2.5 SSD drive in a Supermicro 3.5 SATA hot swap bay? The one I purchased does not seem to work. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] lost packets - Bond
On Wed, Sep 17, 2014 at 11:28 AM, Eduardo Augusto Pinto edua...@eapx.com.br wrote: I'm using my bond interfaces in active-backup mode; in theory, one interface should take over only when the other interface is down. But I'm losing packets on the interface that is not being used, and it is generating packet loss on the bond. My suspicion is that the bonding may be irrelevant here. You can drop packets with or without bonding. There are many reasons why packets can be dropped, but one common one is a too-slow consumer of those packets. For example, say you are trying to watch a streaming ultra-high-definition video on a system with low memory and a slow CPU: the kernel can only buffer so many packets before it has to start dropping them. It's hard to suggest a solution without knowing the exact cause. But one thing to try (as much for debugging as an actual solution) is to increase your buffer sizes. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
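The buffer sizes mentioned live in two places: the kernel socket buffers and the NIC ring buffers. A sketch of the former with example values — these are illustrative numbers, not recommendations from the thread:

```ini
# /etc/sysctl.conf fragment (example values; apply with `sysctl -p`).
# Larger receive buffers give a slow consumer more headroom before the
# kernel has to start dropping packets.
net.core.rmem_max = 16777216
net.core.rmem_default = 262144
```

NIC ring buffers are separate: `ethtool -g eth0` shows the current and maximum sizes, and `ethtool -G eth0 rx 4096` raises them (if the hardware allows). `ip -s link` and `netstat -s` expose the drop counters that tell you whether any of this actually helped.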
[CentOS] KVM Remote
Have a few Supermicro based CentOS boxes at remote date center. Is there anyway to do a remote KVM over TCP to them for the case when they do not seem to come back after a reboot? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] cron.weekly
If I have multiple files in cron.weekly and one script takes hours to finish. Will it block other scripts in cron.weekly? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
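Yes — cron.weekly is executed via run-parts, which runs the scripts in the directory one at a time, in sorted filename order, so a script that takes hours delays everything after it in that same directory (other cron jobs are unaffected). A minimal emulation of that serial behaviour; the directory and script names are invented for the demo:

```shell
dir=$(mktemp -d)                                  # stand-in for cron.weekly
printf '#!/bin/sh\necho first\n'  > "$dir/10-slowjob"   # imagine hours of work here
printf '#!/bin/sh\necho second\n' > "$dir/20-logrotate"
chmod +x "$dir"/*
# run-parts executes in sorted order, one at a time; emulate that:
order=$(for f in "$dir"/*; do "$f"; done)
echo "$order"    # "second" cannot appear until 10-slowjob has finished
rm -rf "$dir"
```

So a long-running script blocks its siblings in cron.weekly; if that matters, give the slow job its own crontab entry instead.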
Re: [CentOS] USB 3.0 Driver
http://www.inateck.com/inateck-kt4005-4-port-usb-3-0-pci-express-card-no-additional-power-connection-needed/ Will these work under Centos 6.x? Can I just boot my home system with a CentOS 6.x live CD to test? The above USB 3.0 card is based on the NEC d720201 701 chip. I used this card in my home PC under Win7 64-bit. I had to run the install disk, but it works fine. I then tried it on a server at work running the latest Centos 6 64-bit. After a reboot I plugged a USB drive into it. Did an ls -la /dev/s* and it did not show up. Plugged the USB drive into an old USB 2.0 port and it did show up. I assumed the drivers for it were not in the Centos 6 kernel. I then tried a Centos 6 live DVD on my home PC. When I plugged a USB drive into the USB 3.0 port, it shows up. So apparently the drivers are there for it. Will it show somewhere else besides ls -la /dev/s*? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
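A plugged-in drive only appears under /dev/sd* once the whole chain works — PCIe controller driver, USB enumeration, then the storage binding — so when it does not show up it helps to check the lower layers first. A sketch:

```shell
lspci | grep -i usb    # is the USB 3.0 controller seen at all (xhci)?
lsusb                  # is the drive enumerated on the bus?
dmesg | tail -20       # kernel messages from the hot-plug event
```

If lspci shows the controller but dmesg records nothing on hot-plug, the problem is between the xhci driver and the card rather than missing storage drivers.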
Re: [CentOS] USB 3.0 Driver
On Fri, Sep 5, 2014 at 4:36 PM, Matt matt.mailingli...@gmail.com wrote: Tried it and CentOS 6 did not seem to find it. Anyone know of a USB 3.0 card that does work with Centos 6.x? I've used a variety of no-name cards with the NEC (now Renesas) uPD72020x series host adapter chips, and they've all worked fine. The unit I tried had the NEC d720201 701 chip and CentOS 6 did not seem to find it. Any chance CentOS 7 will? I'd steer clear of the no additional power connection needed cards; in my experience they can't supply the maximum power the ports may require (5V@900mA per port, i.e., 4.5W per port, 18W total for a four-port card). Proper USB 3.0 cards have a disk drive power connector for the power required. In principle it's possible for a PCIe USB host adapter to have a switching regulator to provide sufficient USB power from the +12V supply rail, but I have yet to see one that does. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] USB 3.0 Driver
http://www.inateck.com/inateck-kt4005-4-port-usb-3-0-pci-express-card-no-additional-power-connection-needed/ Will these work under Centos 6.x? Can I just boot my home system with a CentOS 6.x live CD to test? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] USB 3.0 Driver
Tried it and CentOS 6 did not seem to find it. Anyone know of a USB 3.0 card that does work with Centos 6.x? On Fri, Sep 5, 2014 at 3:12 PM, Matt matt.mailingli...@gmail.com wrote: http://www.inateck.com/inateck-kt4005-4-port-usb-3-0-pci-express-card-no-additional-power-connection-needed/ Will these work under Centos 6.x? Can I just boot my home system with a CentOS 6.x live CD to test? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Random Disk I/O Tests
I have two openvz servers running Centos 6.x, both with 32GB of RAM. One is an Intel Xeon E3-1230 quad core with two 4TB 7200 RPM SATA drives in software RAID1. The other is an old HP DL380 dual quad core with eight 750GB 2.5" SATA drives in hardware RAID6. I want to figure out which one has better random I/O performance to host a busy container. The DL380 currently has one failed drive in the RAID6 array until I get down to replace it; will that degrade performance? Is there an easy way to test disk I/O? On a plain gigabyte file copy the software RAID1 box seems to be twice as fast. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
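For repeatable random I/O numbers (rather than a sequential file copy, which mostly measures streaming throughput), fio is a common choice; it is packaged in EPEL. A sketch with arbitrary example parameters:

```shell
# Random 4k reads against the container filesystem; size, runtime and
# queue depth are example values — adjust to taste.
fio --name=randread --directory=/vz --size=1g --runtime=60 \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --group_reporting
```

As for the degraded array: yes, expect reduced read performance on the RAID6 box until the drive is replaced, since data on the missing disk has to be reconstructed from parity for every read that touches it.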
[CentOS] USB Boot
I noticed that the Supermicro X9SCL has a USB type-a port right on the motherboard. http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCL.cfm Has anyone used a port like this to boot the core OS and used the physical drives for OpenVZ and KVM containers? I figure a 64GB thumb drive would work. Anyone done this, or will a USB thumb drive not stand up to the load? Seems much easier than using a SATA SSD drive, but I imagine you still have to find a more durable USB drive. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] USB Boot
I noticed that the Supermicro X9SCL has a USB type-a port right on the motherboard. http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCL.cfm Has anyone used a port like this to boot the core OS and used the physical drives for OpenVZ and KVM containers? I figure a 64GB thumb drive would work. Anyone done this, or will a USB thumb drive not stand up to the load? Seems much easier than using a SATA SSD drive, but I imagine you still have to find a more durable USB drive. works great with VMware ESXI, or FreeNAS... neither of those treats the boot device as a read/write file system. FreeNAS does have one master configuration file it updates when you make configuration changes, but no operational data is written to it. Hmm, my CentOS install still does some log file writing. Most of the traffic is on /vz though. Wonder how much a USB thumb drive can take? Know of any better ones? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] HP ProLiant DL380 G5
I have CentOS 6.x installed on an HP ProLiant DL380 G5 server. It has eight 750GB drives in a hardware RAID6 array. It's acting as a host for a number of OpenVZ containers. It seems like every time I reboot this server, which is not very often, it sits for hours running a disk check or something on boot. The server is located 200+ miles away so it's not very convenient to look at. Is there any way to tell if it plans to run this, or to tell it not to? Right now it's reporting one of the drives in the array is bad, and last time it did this a reboot resolved it. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
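If the long pause is ext3/ext4's periodic fsck rather than the RAID firmware, tune2fs will show the triggers. The device name here is an example — Smart Array logical drives on a G5 typically appear under /dev/cciss/:

```shell
# See the mount-count and time-interval triggers for the boot-time fsck:
tune2fs -l /dev/cciss/c0d0p2 | grep -iE 'mount count|check'
# Disable the automatic periodic checks if you would rather schedule
# fsck yourself during a planned maintenance window:
tune2fs -c 0 -i 0 /dev/cciss/c0d0p2
```

On a remote box hours away, disabling the automatic check and running fsck deliberately is usually safer than being surprised at reboot.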
Re: [CentOS] HP ProLiant DL380 G5
Hate to change the conversation here but that's why I hate hardware RAID. If it was software RAID, Linux would always tell you what's going on. Besides, Linux knows much more about what is going on on the disk and what is about to happen (like a megabyte DMA transfer). BTW, check if something is creating: /forcefsck These exist: -rw-r--r-- 1 root root 0 Jul 7 10:03 .autofsck -rw-r--r-- 1 root root 0 Jul 7 10:03 .autorelabel What does that mean? That would make the fsck run every time. GKH Matt wrote: I have CentOS 6.x installed on a HP ProLiant DL380 G5 server. It has eight 750GB drives in a hardware RAID6 array. Its acting as a host for a number of OpenVZ containers. Seems like every time I reboot this server which is not very often it sits for hours running a disk check or something on boot. The server is located 200+ miles away so its not very convenient to look at. Is there anyway to tell if it plans to run this or tell it not too? Right now its reporting one of the drives in array is bad and last time it did this a reboot resolved it. You need to know what it's running. If it's doing an fsck, that will take a lot of time. If it's firmware in the RAID controller, that's different. You can run tune2fs /dev/whatever and see how often it wants to run fsck. For that matter, what's the entry in /etc/fstab? mark ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] HP ProLiant DL380 G5
Hate to change the conversation here but that's why I hate hardware RAID. If it was software RAID, Linux would always tell you what's going on. Besides, Linux knows much more about what is going on on the disk and what is about to happen (like a megabyte DMA transfer). BTW, check if something is creating: /forcefsck These exist: -rw-r--r--1 root root 0 Jul 7 10:03 .autofsck -rw-r--r--1 root root 0 Jul 7 10:03 .autorelabel What does that mean? ARRRGGGHGHGHGHGHGHHGHG!!! First, delete /.autofsck. That will stop it from fsckin'g *everything* every reboot. Second, is selinux in enforcing mode? In any case, have you recently done major changes? If not, delete /.autorelabel, since an selinux relabel takes a *while*, esp. if you have *lots* of files. mark The directions for installing OpenVZ on Centos 6 stated to disable selinux, on this box I missed that step, whoops. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] HP ProLiant DL380 G5
Hate to change the conversation here but that's why I hate hardware RAID. If it was software RAID, Linux would always tell you what's going on. Besides, Linux knows much more about what is going on on the disk and what is about to happen (like a megabyte DMA transfer). BTW, check if something is creating: /forcefsck These exist: -rw-r--r-- 1 root root 0 Jul 7 10:03 .autofsck -rw-r--r-- 1 root root 0 Jul 7 10:03 .autorelabel What does that mean? ARRRGGGHGHGHGHGHGHHGHG!!! First, delete /.autofsck. That will stop it from fsckin'g *everything* every reboot. Second, is selinux in enforcing mode? In any case, have you recently done major changes? If not, delete /.autorelabel, since an selinux relabel takes a *while*, esp. if you have *lots* of files. mark No, /.autofsck is not harmful and will cause nothing unless /etc/sysconfig/autofsck exists and has something specific defined. The /.autofsck is automatically created at each boot by the system. /.autorelabel is as well only a control file and does not cause a full SELinux relabeling at each boot. If you don't believe me, please see /etc/rc.sysinit. Alexander So I just need to set SELINUX=disabled and ignore .autofsck and .autorelabel? Was the targeted selinux causing the slow reboots? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] Instaling LAMP on CentOS 7.x
Have a Centos 7 minimal openvz container I need to install a LAMP setup on. Does anyone recommend anything and have a link to it? I am guessing MariaDB is the new standard? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
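For reference, the install itself is short — and yes, MariaDB replaces MySQL in the CentOS 7 base repos. A minimal sketch using the standard package names:

```shell
yum install -y httpd mariadb-server mariadb php php-mysql
systemctl enable httpd mariadb
systemctl start httpd mariadb
mysql_secure_installation   # set a root password, drop the test database
```

phpMyAdmin and the like come from EPEL, as the replies below note.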
Re: [CentOS] Instaling LAMP on CentOS 7.x
Just did one myself. Don't forget phpMyAdmin. I would also enable the EPEL repo, for things like Filezilla, fail2ban and phpMyAdmin. They are worthwhile add-ins. Your link is http://www.cyberciti.biz/faq/howto-install-linux-apache-mariadb-php-lamp-stack-on-centos7-rhel7/ john Was looking at that link. Also found this: https://www.liberiangeek.net/2014/07/install-apache2-mariadb-php5-support-centos-7/ Looks like phpMyAdmin is not in the stock repositories, so if I want it I need to use EPEL or RPMforge? On 8/14/2014 10:04 AM, Matt wrote: Have a Centos 7 minimal openvz container I need to install a LAMP setup on. Does anyone recommend anything and have a link to it? I am guessing MariaDB is the new standard? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Instaling LAMP on CentOS 7.x
So EPEL is preferred over RPMforge nowadays? In the past, to get clamav and some other packages, it seemed like I had to use RPMforge. On Thu, Aug 14, 2014 at 10:38 AM, Always Learning cen...@u62.u22.net wrote: On Thu, 2014-08-14 at 11:26 -0400, John Plemons wrote: Here is a link to enable the epel repo http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ (It was updated to include CentOS 7) This is probably a better link as it provides more choices, for example * EPEL 7 beta: x86_64, ppc64, sources * EPEL 6: i386, x86_64, ppc64, sources * EPEL 5: i386, x86_64, ppc, sources https://fedoraproject.org/wiki/EPEL -- Regards, Paul. England, EU. Centos, Exim, Apache, Libre Office. Linux is the future. Micro$oft is the past. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] CentOS SSH Session Logging
Have an OpenVZ Centos 7 Minimal instance running. Normally SSH sessions are logged to /var/log/secure. There is no such file. Where are they put then? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Instaling LAMP on CentOS 7.x
Have a Centos 7 minimal openvz container I need to install a LAMP setup on. Does anyone recommend anything and have a link to it? I am guessing Mariadb is the new standard? For mysql in the past I always added bind-address=127.0.0.1 to my.cnf for a bit of additional security. This server is dual-stacked with IPv4 and IPv6; should I put something else in there? How do I tell it ::1 or 127.0.0.1? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
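One constraint worth knowing: bind-address takes a single address, so you cannot list 127.0.0.1 and ::1 together. A sketch of the options as a my.cnf fragment (the comments spell out the trade-offs; values are examples):

```ini
[mysqld]
bind-address = ::1           # IPv6 loopback only
# bind-address = 127.0.0.1   # IPv4 loopback only (the old setting)
# skip-networking            # tightest: UNIX socket only, no TCP at all
```

If all your clients are local anyway, skip-networking plus the UNIX socket sidesteps the IPv4-vs-IPv6 question entirely.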
Re: [CentOS] CentOS SSH Session Logging
Fixed it. yum install rsyslog Thanks. On Thu, Aug 14, 2014 at 3:29 PM, Alexander Dalloz ad+li...@uni-x.org wrote: Am 14.08.2014 um 21:06 schrieb Matt: Have a OpenVZ Centos 7 Minimal instance running. Normally SSH sessions are logged to /var/log/secure. There is no such file. Where are they put then? Check /etc/rsyslog.conf Alexander ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] CentOS SSH Session Logging
That's not a fix. A fix is finding out where the logs are being written, not installing another package. Though, having said that, I realise that I am assuming that the minimal install contains *some* logging package, and that may possibly be incorrect. There appeared to be no logging package. This was a minimal openvz template for Centos 7 though. On Fri, Aug 15, 2014 at 12:08 PM, Matt matt.mailingli...@gmail.com wrote: Fixed it. yum install rsyslog Thanks. On Thu, Aug 14, 2014 at 3:29 PM, Alexander Dalloz ad+li...@uni-x.org wrote: Am 14.08.2014 um 21:06 schrieb Matt: Have a OpenVZ Centos 7 Minimal instance running. Normally SSH sessions are logged too /var/log/secure. There is no such file. Where are they put then? Check /etc/rsyslog.conf Alexander ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
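For anyone hitting this later: on CentOS 7, sshd logs through the authpriv syslog facility, and it is rsyslog that routes that to /var/log/secure; with journald present the messages may still be queryable even when no syslog daemon is installed. A sketch of the checks (safe to run; assumes nothing about the host):

```shell
# Sketch: find where sshd auth messages go on CentOS 7.
# sshd logs to the authpriv facility; rsyslog (if installed) routes
# that to /var/log/secure per /etc/rsyslog.conf.
check_sshd_logging() {
    rpm -q rsyslog 2>/dev/null || echo "rsyslog not installed"
    grep -h 'authpriv' /etc/rsyslog.conf /etc/rsyslog.d/*.conf 2>/dev/null
    journalctl -u sshd --since today 2>/dev/null | tail   # journald, if present
}
# check_sshd_logging
```

So "yum install rsyslog" does fix the symptom on a stripped template, and the function above answers the "where were they going" question.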
[CentOS] Cron
Do you need cron installed for the files in /etc/cron.daily/ to execute? Did a Centos 6.x minimal openvz install and noticed cron is not installed by default. After installing mlocate, I can't help but wonder whether its database will be updated without it. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
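Short answer: yes. On CentOS 6, /etc/cron.daily/ is only executed by the cron machinery (the cronie package provides crond, and cronie-anacron drives the daily/weekly/monthly runs); without crond running, mlocate's updatedb job never fires. A sketch of the checks (run on the container):

```shell
# Sketch: verify the pieces that drive /etc/cron.daily on CentOS 6.
# cronie provides crond; cronie-anacron runs the daily/weekly/monthly
# jobs. Without a running crond, cron.daily scripts never execute.
check_cron_daily() {
    rpm -q cronie cronie-anacron       # both needed for cron.daily to fire
    service crond status               # crond must actually be running
    ls /etc/cron.daily/                # mlocate drops its updatedb job here
}
# check_cron_daily
```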
[CentOS] grubby fatal error: unable to find a suitable template
I did a bulk yum update -y of several servers. As a sanity check after the upgrade, I ran a grep of /etc/grub.conf across all updated servers looking to ensure the kernel I expected was installed. Two servers came up saying /etc/grub.conf did not exist! I logged into the servers, and /etc/grub.conf was a broken link. (It points to /boot/grub/grub.conf). My systems are all set up with a dedicated /boot partition. Sure enough, /boot was not mounted. Furthermore, I saw no /boot entry in /etc/fstab (which all my other servers contain). So I mounted /boot, and the grub.conf file was stale: it did not have a stanza for the kernel I wanted installed. So I did a yum remove kernel ; yum install -y kernel. Both the remove and the install resulted in this message getting printed: grubby fatal error: unable to find a suitable template Just for kicks, I renamed both the /etc/grub.conf symlink as well as the actual /boot/grub/grub.conf file, and repeated the kernel remove/install. This did NOT produce the above error; however, no symlink or actual grub.conf file was created. I did a little web searching on the above error, and one common cause is that there is no valid title... stanza in the grub.conf file for grubby to use as a template. But my file does in fact contain a valid stanza. I even copied a valid grub.conf file from another server, and re-ran the kernel remove/install: same error. Clearly, something is broken, but I'm not sure what. Anyone seen anything like this? By the way, these machines were all 5.something, being upgraded to 5.7. Thanks! ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
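For the archives, the usual diagnostic sequence for this error looks like the sketch below (it covers the common causes, though it would not explain why a copied-in, valid grub.conf still failed for the poster):

```shell
# Sketch of the usual checks behind "grubby fatal error: unable to
# find a suitable template". grubby clones an existing grub.conf
# stanza, so it needs /boot mounted and at least one valid "title"
# entry to copy from.
check_grub() {
    grep -q ' /boot ' /proc/mounts || mount /boot   # stale grub.conf otherwise
    ls -l /etc/grub.conf                  # should link to /boot/grub/grub.conf
    grep -c '^title' /boot/grub/grub.conf # want at least 1 stanza
    grep '^default' /boot/grub/grub.conf  # index must point at a stanza
}
# check_grub   # run as root on the affected box
```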
Re: [CentOS] grubby fatal error: unable to find a suitable template
On Wed, Jun 25, 2014 at 4:47 PM, m.r...@5-cent.us wrote: ? Why not to 5.10, the current release of CentOS 5.x? Off topic for the question, but, briefly, changing *anything* in our environment involves extensive testing and validation due to very precise performance requirements (HFT, microsecond changes make or break us). For our particular application, we've seen significant performance changes with minor kernel revisions. We've been putting this testing and validation effort into CentOS 6.5, and will hopefully be moving off 5.x completely before too long. But in the short-term, 5.7 it is for us. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] copying user accounts...
I've used usermod -p 'encryptedpassword' username successfully many times. Just be careful with the escaping of the '$' field separators that appear in the encrypted password string from /etc/shadow. On Tue, Jun 10, 2014 at 4:28 PM, John R Pierce pie...@hogranch.com wrote: I want to copy a few user accounts to a new system... is there a more elegant way to copy /etc/shadow passwords other than editing the file? for instance, is there some way I can give the password hash to /usr/bin/passwd ? -- john r pierce 37N 122W somewhere on the middle of the left coast ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
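The escaping point is the part that bites people, so to spell it out (the hash value below is hypothetical, not a real shadow entry):

```shell
# Sketch: copy a password hash from /etc/shadow with usermod -p.
# Single-quote the hash so the shell leaves the '$' field separators
# alone; double quotes would try to expand $6, $saltsalt, etc.
hash='$6$saltsalt$hypotheticalhashvalue'              # hypothetical hash
test "$hash" = '$6$saltsalt$hypotheticalhashvalue'    # the '$'s survived intact
# usermod -p "$hash" username     # run as root on the new system
```

With double quotes around the literal, the shell would expand $6 and $saltsalt to empty strings and the account would silently get a garbage hash.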
Re: [CentOS] Where to change login screen options
On Fri, Jun 6, 2014 at 10:34 AM, Wes James compte...@icloud.com wrote: I've looked around in the menus and googled this, but I can't find a way to make the login require a username instead of just showing the available users to select from. Where do I change this? I'm using CentOS 6.5. Thanks, -wes ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos We've added the following line to our postinstall scripts. /usr/bin/gconftool-2 --direct --config-source \ xml:readwrite:/etc/gconf/gconf.xml.mandatory \ --type bool --set \ /apps/gdm/simple-greeter/disable_user_list true -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Mother board recommendation
On Fri, May 16, 2014 at 7:21 AM, Joseph Hesse joehe...@gmail.com wrote: I want to build a lightweight server and install centos. Does anyone have a recommendation for a suitable motherboard? What will the role of the server be? How lightweight? How many users, what kinds of services, what (if any) performance requirements, etc? Room for future growth/expansion? Budget? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] chromium-33.0.1750.152-3.el6
Is this legit? Anyone try this build? On Fri, Apr 4, 2014 at 6:03 PM, ngeorgop ngeor...@gmail.com wrote: New version of chromium (33.0.1750.152). Built by me :-) i686: chromium-33.0.1750.152-3.el6.i686.rpm https://docs.google.com/file/d/0B9RlkKQB1POSWmFiT0REeG1QS28/ SRPM: chromium-33.0.1750.152-3.el6.src.rpm https://docs.google.com/file/d/0B9RlkKQB1POSQlZ1OUljT3p6U1U/ -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] about new Centos SIG distro releases
Initial reaction: Crap! One of the best things about CentOS, in my opinion, was not having to deal with all the different RHEL builds/releases/whatever they called them, and just having ONE distribution. So much for that. It didn't take long for Red Hat to get their mitts all over CentOS, huh? On Mon, Mar 31, 2014 at 5:56 AM, Edward M edwardu...@live.com wrote: Hello, Kinda confused, will CentOS new SIGs: CentOS Storage, CentOS Cloud, and CentOS Virtualization, CentOS Core,etc be a developmental path to future RHEL releases, or will they continue be an exact clone of RHEL, like Centos currently is? http://www.zdnet.com/red-hat-reveals-centos-plans-727812/ ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] about new Centos SIG distro releases
On Mon, Mar 31, 2014 at 8:58 AM, Jim Perrin jper...@centos.org wrote: On 03/31/2014 07:28 AM, Phelps, Matt wrote: Initial reaction: Crap! One of the best things about CentOS, in my opinion, was not having to deal with all the different RHEL builds/releases/whatever they called them, and just having ONE distribution. This doesn't change. It's the core sig. But the current core distribution has KVM/libvirt, and all the desktop stuff, and apache, etc. etc, each of which sounds like it will be broken out into a separate SIG. Please, please don't do this. Let us do our jobs and pick what we need from the same install depending on what kind of machine we're installing. I don't want to have to change our whole installation environment, which took years of work to get the way we want it, based on someone else's arbitrary rearranging of what's needed for Storage or Virtualization, etc. So much for that. It didn't take long for Red Hat to get their mitts all over CentOS, huh? We were already doing this sort of thing with the Xen4CentOS build, and the plus repo before the RH agreement. We're simply able to expand on this type of effort now. Both of those are additions to CentOS. Please don't break up CentOS arbitrarily into separate products like Red Hat needlessly did. Let us pick and choose what we need easily. The whole point of an Enterprise environment is to minimize the change so we don't have to re-tool everything. (Sorry for all the sarcastic quotes, but I'm upset. This is exactly the sort of meddling I was afraid of when Red Hat took over.) -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] about new Centos SIG distro releases
On Mon, Mar 31, 2014 at 9:36 AM, Jim Perrin jper...@centos.org wrote: On 03/31/2014 08:16 AM, Phelps, Matt wrote: On Mon, Mar 31, 2014 at 8:58 AM, Jim Perrin jper...@centos.org wrote: On 03/31/2014 07:28 AM, Phelps, Matt wrote: Initial reaction: Crap! One of the best things about CentOS, in my opinion, was not having to deal with all the different RHEL builds/releases/whatever they called them, and just having ONE distribution. This doesn't change. It's the core sig. But the current core distribution has KVM/libvirt, and all the desktop stuff, and apache, etc. etc, each of which sounds like it will be broken out into a separate SIG. No. The SIGs are community efforts where a newer or different version is needed. Core stays core. Please, please don't do this. Let us do our jobs and pick what we need from the same install depending on what kind of machine we're installing. This is exactly the intent. Right now there are a load of admins who want or need newer versions of things, be it php, python, libvirt, ruby, whatever. We're not changing up the core. We're trying to provide a better way to get updated features if they're needed. I don't want to have to change our whole installation environment, which we're take years of work to get the way we want it, based on someone else's arbitrary rearranging of what's needed for Storage or Virtualization, etc. You won't have to. Stick with core, and you'll be fine. So much for that. It didn't take long for Red Hat to get their mitts all over CentOS, huh? We were already doing this sort of thing with the Xen4CentOS build, and the plus repo before the RH agreement. We're simply able to expand on this type of effort now. Both of those are additions to CentOS. Please don't break up CentOS arbitrarily into separate products like Red Hat needlessly did. Let us pick and choose what we need easily. The whole point of an Enterprise environment is to minimize the change so we don't have to re-tool everything. 
Yep, and the C has been for 'Community', which has been a driving force in this. Xen was in el5, and when it was dropped in el6 we had a large hosting user-base who suddenly had no upgrade path to the new version. By adding Xen support back in, we've provided a method for them to update without re-tooling. We're trying to keep the need to change minimal, exactly as you want. (Sorry for all the sarcastic quotes, but I'm upset. This is exactly the sort of meddling I was afraid of when Red Hat took over.) This seems a bit the sky is falling to me. We're not changing what we've done in the past. We're adding (entirely optional) functionality to meet the demands of the community. You don't have to change a thing if you don't want to. OK, I'll calm down. Perhaps what you've said could have been communicated by the article. This line is what troubled me: So what the newly united Red Hat and CentOS (http://www.zdnet.com/red-hat-incorporates-free-red-hat-clone-centos-724907) is planning on are multiple CentOS releases. How will these SIGs be handled? Just additional yum repos? What if I want to use parts of the Storage SIG and Virtualization SIG together on the same installation? Thanks. Oh, does this mean we'll get a working chrome/chromium past version 31? :) -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] about new Centos SIG distro releases
On Mon, Mar 31, 2014 at 10:02 AM, Matthew Miller mat...@mattdm.org wrote: Do keep in mind that the article in question was written by a tech journalist and includes independent analysis and opinion. It isn't direct communication from Red Hat or CentOS. Yes, I see that now. Is the talk by Karsten Wade online somewhere so we can check the source material? -- Matt Phelps System Administrator, Computation Facility Harvard - Smithsonian Center for Astrophysics mphe...@cfa.harvard.edu, http://www.cfa.harvard.edu ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] High load average, low CPU utilization
On Fri, Mar 28, 2014 at 9:01 AM, Mr Queue li...@mrqueue.com wrote: On Thu, 27 Mar 2014 17:20:22 -0500 Matt Garman matthew.gar...@gmail.com wrote: Anyone seen anything like this? Any thoughts or ideas? Post some data.. This public facing? Are you getting sprayed down by packets? Array? Soft/hard? Someone have screens laying around? Write a trap to catch a process list when the loads spike? Look at crontab(s)? User accounts? Malicious shells? Any guest containers around? Possibilities are sort of endless here. Not public facing (no Internet access at all). Linux software RAID-1. No screen or tmux data. No guest access of any kind. In fact, only three logged in users. I've reviewed crontabs (there are only a couple), and I don't see anything out of the ordinary. Malicious shells or programs: possibly, but I think that is highly unlikely... if someone were going to do something malicious, *this* particular server is not the one to target. What kind of data would help? I have sar running at a five second interval. I also did a 24-hour run of dstat at a one second interval collecting all information it could. I have tons of data, but not sure how to distill it down to a mailing-list friendly format. But a colleague and I reviewed the data, and don't see any correlation with other system data before, during, or after these load spike events. I did a little research on the loadavg number, and my understanding is that it's simply a function of the number of tasks on the system. (There's some fancy stuff thrown in for exponential decay and curve smoothing and all that, but it's still based on the number of system tasks.) I did a simple run of top -b > top_output.txt for a 24-hour period, which captured another one of these events. I haven't had a chance to study it in detail, but I expected the number of tasks to shoot up dramatically around the time of these load spikes. The number of tasks remained fairly constant: about 200 +/- 5. 
How can the loadavg shoot up (from ~1 to ~20) without a corresponding uptick in number of tasks? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
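One refinement to the "function of the number of tasks" model: on Linux, the load average counts only tasks in state R (runnable) and state D (uninterruptible sleep, typically disk or NFS waits), not the total task count. So the same ~200 tasks flipping into D state can push the load from ~1 to ~20 with idle CPUs and a flat task count. A quick way to catch it in the act (safe to run anywhere):

```shell
# Sketch: list the tasks that actually contribute to the Linux load
# average, i.e. those in state R (runnable) or D (uninterruptible
# sleep). A load spike with idle CPUs usually means a pile-up in D.
ps -eo state=,pid=,comm= | awk '$1 == "R" || $1 == "D"'
```

Running that from a tight loop (or a trap triggered when /proc/loadavg crosses a threshold) during a spike should show which processes are stuck, and in D state the wait channel (`ps -o wchan`) often points at the offending I/O path.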