Re: Who Uses Scientific Linux, and How/Why?
Didn't plan on chiming in but Larry's post tugged. I started with Slackware in '93, kernel 1.3, looking for a cheap X11 workstation alternative to the then $15k-a-pop SunOS workstations, of which we could only afford 2. I proposed to my division director to let me buy 12 Pentium 90's at $2k a pop to deploy this new thing called a linux workstation. I recall a committee of two, one from our scientific computing division, who advised against it, saying there was no vendor backing. Lucky for me, my division director took a chance. Well, the rest, they say, is history. Like Larry I switched to RedHat when they came in boxes. I stumbled on SL when I started working on ROOT. Version 0.6 then. There was this thing called FermiLinux and when Redhat stopped selling boxes and wanted a subscription for RHEL, I switched us over to SL. Remember Connie's photos when SL started installing? The SL mailing list is a fantastic resource. I suspect like all good things it will also come to an end. I hope it lasts a little longer, at least till I retire, so we can all bitch about CentOS 8 and commiserate together over the loss of SL. Lol.

On 2/25/20 1:56 PM, P. Larry Nelson wrote:

Brett Viren wrote on 2/25/20 8:15 AM: "Peter Willis" writes: Perhaps, if it's not too much trouble, people on the list might give a short blurb about how they use it and why.

Not quite a short blurb, but not too long either. I am retired now (nearly 4 years) after nearly 50 years in the IT biz - 44 of those at UIUC and 20 of those as an IT Admin for our local HEP group, and I can tell you that there are two people who made my life immeasurably better. So I just want to toot their horn: Troy Dawson and Connie Sieh of FermiLab. Here's a great interview with Troy that will answer a lot of questions as well as elucidate why we went with SL.
(I suspect the following will get transmogrified by Fermi's Proof Point URL secret encoder ring) http://old.montanalinux.org/interview-troy-dawson-scientific-linux-june2011.html

Alas, much to my initial dismay, Troy announced in 2011 he was going to work for RedHat, but Pat Riehecky jumped in to those big shoes (Thanks Pat!). I would be remiss if I didn't also mention Urs Beyerle and his work on the SL Live CDs/DVDs. And, of course, the (then) smallish but amazingly helpful SL user community on this list. After infuriatingly frustrating and hapless encounters with RHEL support on even the simplest of issues, being able to have one-on-one interactions with Troy, Connie, and Urs (and other users on the list) was like stepping out of a cold dark cave onto a warm sun-drenched beach. [not hyperbole]

Our journey (in case you're interested and still reading) went something like: Late 90's and early 2000's - SunOS (expensive hardware, expensive maintenance contracts, expensive licensing). Start playing with this new toy Redhat 2.0 (spare desktop hardware, almost free software, no licensing). Then Redhat 3, then 4 - now seeing that we can replicate all services from SunOS to RH. No longer a toy. Then RH 5, 6, 7, 8, 9 and End-of-Life. LHC was ramping up and about to spew petabytes of ATLAS experiment data. Time to start building racks of storage farms and compute clusters. Switch to RHEL. But with that came confusing and frustrating licensing plus the aforementioned support snafus. Then an epiphany - one of our engineers was collaborating with another institution on loading linux onto embedded processors as part of the Dark Energy Survey telescope and came to me for linux advice.
They were using a free linux installation from CERN called Scientific Linux CERN (SLC). "Really!" He said FermiLab had a similar version (SLF) but that they chose SLC for whatever reason. He said it's the same as RHEL. "Really!" (again) I found FermiLab's website for SLF and the rest, they say, is history! We started with RHEL3, moved to SL4, then SL5 (my favorite) and wound up at SL6. SL7 was out and the HEP community was transitioning to it when I retired so I didn't have to deal with it. :-) Anyways, now back to retirement. - Larry
Re: Is Scientific Linux Still Active as a Distribution?
+1

On 2/22/20 5:41 PM, Keith Lofstrom wrote:

I'm an independent electronics inventor, heavily dependent on both competent software and competent laboratory science, both for the knowledge I depend on and the tools I use to transform that knowledge into products and services for my customers. SL has been a very good tool for that. Thanks to all who have contributed. I depend on "benign neglect" for a stable computing platform - just enough funding and staffing to fix urgent problems, but not so much that the platform continuously mutates to conform to ephemeral fashion or management whim. I moved /from/ Windows to gain that stability, even if that limits the choice of new widgets I can attach to my (older) computers. I have plenty of replacement-spare old widgets, and I don't need the distraction of a rapidly mutating platform optimized for market churn and planned-obsolescence sales. I'm actually glad that Microsoft, Apple, and IBM are busily churning those markets, because it keeps their customers distracted and out of my way while I think and work. The hardware cast off by the fashion-chasers is still abundant on eBay, and I have enough of it to last me for life (except for the batteries and backlights for my old Thinkpads). I presume there are enough like me, some of whom are on this list, that we can continue to carve out a community space on top of CentOS, focused on inquiry and reliability. If CentOS 9 or 10 or 11 goes off the rails, there are enough of us here to tweak CentOS 7 or 8 into something we can continue to use, just like Linux was "in the good old days". While "security by obscurity" is not optimum, I presume a smaller community of impoverished science geeks is a less tempting target for professional software criminals than million-dollar IT departments for billion-dollar corporations and governments, or billions of hapless consumers. We are part of the global target, but we are unlikely to attract specific attention from the bad guys.
And while we still benefit from the use of servers at Fermilab for our "static" distro and our active mailing list, perhaps we should have a backup plan for migration in case some bureaucrat decides to pull the plug on us. That has /always/ been a risk for what we do here; we are one presidential tweet away from Saint Louis USDA exile. As a community of scientific, like-minded Linux users, let's begin to prepare a rudimentary plan B, and hope that we never need to implement it. Keith
comps, primary, other, filelists
I've been using the comps file to construct the packages list for my kickstart installs, ignoring the other files (primary, other, and filelists) in the repodata folder. Can someone explain what the other files are and whether I need to review them for "other" packages? I notice that for centos 8, the comps file is called comps-baseos, suggesting that there are more (?) package groups listed elsewhere. Thanks.
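For what it's worth, the other three files are per-package metadata rather than group data: primary carries package names, versions, and dependencies; filelists carries the files each package owns; and other carries the changelogs. yum/dnf uses them for dependency resolution, so comps is the only one that matters for a kickstart packages list. A quick sketch for peeking at the groups a comps file defines (the sample file below is made up purely for illustration):

```shell
# Write a toy comps fragment, then pull the group ids out with grep.
cat > /tmp/comps-sample.xml <<'EOF'
<comps>
  <group><id>base</id></group>
  <group><id>core</id></group>
</comps>
EOF
grep -oP '(?<=<id>)[^<]+' /tmp/comps-sample.xml
```

On CentOS 8, BaseOS and AppStream each ship their own comps file, which is why you see comps-baseos; `dnf group list` shows the merged view from all enabled repos.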
Re: EL 8
I've switched to using Fedora for myself and my users. If you are prepared for its short lifecycle, it is actually very usable. I've found upgrading to be quite painless. I don't use Fedora on servers for obvious reasons. I used ubuntu briefly a decade ago on a laptop and struggled with it. Not a put down on ubuntu but more a statement about myself. I was used to redhat's conventions. I wrote a lot of code then and knew how to find development rpms and where the files are located after installation. I struggled with dpkg/apt/synaptic. Rpm/dnf is a lot easier for me, probably because I am used to it. Dnf is pretty much yum so there is no problem there. And I knew a bit of how to build rpms myself. I was loath to learn dpkg. We've gone with Fedora on the desktop and CentOS on servers and desktops. Hope this helps.

On 1/30/20 7:57 PM, Yasha Karant wrote:

At this point, in terms of application support for EL 7 (including SL 7) from external entities (such as Calibre -- there are others), I am soon going to be forced to go to another Linux. The options appear to be: drop EL entirely and go to Ubuntu LTS ("stable") current, or stay with EL and use Springdale (Princeton) EL8 when (if?) it is available, or Oracle EL 8. Thus far, everyone I have contacted who did a clean install of Oracle 8 (and then copied back files, directory trees, etc., from the non-systems areas of an EL 7 working system) has had no issues. However, I am very concerned about support for Oracle 8 other than purchasing support from Oracle. Do the various professional repositories for SL 7 (and EL 7 in general) such as EPEL have an EL 8 version that works seamlessly with Oracle 8 (or Springdale for that matter)? In the best of all possible worlds, I or my students would have time to build applications from source -- but there are too many and not enough time, forcing the use of repositories with pre-built RPMs (or DEBs if we switch to Ubuntu).
Note that we run the same base OS on servers (including HPC compute servers with Nvidia CUDA GPUs) as well as desktop and laptop machines, all presently X86-64 based (this may change for at least some of the servers). Any advice would be appreciated. Yasha Karant
Re: Red Hat on the Desktop - was Re: Calibre current
Haha! I like this one.

On 1/30/20 7:48 PM, Konstantin Olchanski wrote: - removal of support for NIS (saying LDAP is "light weight" is like saying the Titanic is a row boat).
Re: systemd tftp xinetd
I finally figured out my problem.

(1) You don't need xinetd. The tftp-server package is enough. In other words, systemd supersedes xinetd.

(2) Although the tftp-server rpm installs /etc/xinetd.d/tftp, there is no need to change `disable = yes` in this file.

(3) The command `systemctl enable tftp` will enable tftp.socket. On reboot, the socket will be "listening". The tftp.service will still be dead.

(4) If the tftp client has a firewall, it needs to do:

# firewall-cmd --zone=public --add-port=7130-7140/udp
$ tftp -R 7130:7140 mytftpserver.org
tftp> ...

Then, all works. My problem was actually step 4, which I did to test the server. In my application this is never necessary as I'm using tftp for pxebooting.

On 9/11/18 9:30 AM, Ken Teh wrote: I need help with how to enable tftp service. I am trying to get something done and I have no patience for systemd's convoluted logic. The tftp-server installs (1) /etc/xinetd.d/tftp (2) tftp.socket (what's this?) (3) tftp.service Manually, I can start the service and everything works. But enabling the service stays disabled or indirect. Enabling the socket does not start the service on reboot. Do I need xinetd or does systemd deprecate xinetd? Geez! I miss the old days when Unix was simple.
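The findings above, collected as commands (a sketch, not gospel: the zone name, port range, and hostname are the ones from the test in the post; run the server-side commands as root):

```shell
# Server side: tftp-server alone is enough; enable the socket unit so
# systemd listens on udp/69 and spawns tftp.service on demand.
systemctl enable tftp             # per the post, this enables tftp.socket
systemctl start tftp.socket
firewall-cmd --add-service=tftp   # open udp/69, if the server is firewalled

# Client-side test through a firewall: open a return-port range on the
# client and tell the tftp client to use it.
firewall-cmd --zone=public --add-port=7130-7140/udp
tftp -R 7130:7140 mytftpserver.org
```

For pxebooting clients (no firewall in the initramfs stage), only the server-side half applies.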
Re: systemd tftp xinetd
I've done all that. But after I reboot the system, I cannot tftp a file from the server. But if I start tftp.service manually, I can get the file. If a service is never available on reboot after you've enabled it, what does 'systemctl enable' mean? Is there some magic sequence of steps I need to take to "really" enable the tftp service? Thanks for the tip on retiring. I think you've got something there. ;)

On 9/11/18 10:03 AM, R P Herrold wrote:

On Tue, 11 Sep 2018, Ken Teh wrote: I need help with how to enable tftp service. I am trying to get something done and I have no patience for systemd's convoluted logic.

Time then, to retire from modern Unix, perhaps. Change and the tide of systemd will not be reversing.

The tftp-server installs (1) /etc/xinetd.d/tftp

Old way: Please examine this file, and as needed, edit to enable the service (normally services are / were shipped disabled, pre-systemd, as part of a hardening push back at RHL 7.2, back at the turn of the century). Particularly the line: disable = yes

Alternatively, the old and LSB-specified way was to try, as root: chkconfig tftp on

The systemd way is: systemctl enable tftp

View what is enabled, or not, thus. 'grep' will work with this form: systemctl list-unit-files --no-pager

viz:
[herrold@centos-7 ~]$ systemctl list-unit-files --no-pager | grep tftp
tftp.service indirect
tftp.socket enabled

-- Russ herrold
Re: systemd tftp xinetd
What you described works manually. Basically, the service is not started on reboot even though I've enabled it. So I don't know what 'enabling' a service means. Since tftp-server installs /etc/xinetd.d/tftp, is it hinting that it should be started via xinetd? Do I need to install xinetd or is systemd so do-it-all, know-it-all that it's taken over xinetd's functions? I've tried the obvious steps. I'm working my way through all the permutations (however illogical) to see what works. Obviously, enabling a systemd service does not necessarily start the service on reboot. When does enabling a service not enable it? I wanted to test an application I wrote and I've spent 3 hours trying to configure the system so it will let me. Systemd is really too much. On 9/11/18 9:35 AM, Hinz, David (GE Healthcare) wrote: If you're asking what I think you're asking: systemctl enable tftp # This adds a symlink for tftp into the (target? Milestone? One of those), equivalent to saying "/etc/rc2.d is done, now let's go to rc3.d". systemctl start tftp # This tries to start it systemctl status tftp # This gives you success, or debug information if it didn't work. If I missed your question entirely, then can you word it differently? Dave Hinz On 9/11/18, 9:32 AM, "owner-scientific-linux-us...@listserv.fnal.gov on behalf of Ken Teh" wrote: I need help with how to enable tftp service. I am trying to get something done and I have no patience for systemd's convoluted logic. The tftp-server installs (1) /etc/xinetd.d/tftp (2) tftp.socket (what's this?) (3) tftp.service Manually, I can start the service and everything works. But enabling the service stays disabled or indirect. Enabling the socket does not start the service on reboot. Do I need xinetd or does systemd deprecate xinetd? Geez! I miss the old days when Unix was simple.
systemd tftp xinetd
I need help with how to enable tftp service. I am trying to get something done and I have no patience for systemd's convoluted logic. The tftp-server installs (1) /etc/xinetd.d/tftp (2) tftp.socket (what's this?) (3) tftp.service Manually, I can start the service and everything works. But enabling the service stays disabled or indirect. Enabling the socket does not start the service on reboot. Do I need xinetd or does systemd deprecate xinetd? Geez! I miss the old days when Unix was simple.
Re: [SCIENTIFIC-LINUX-USERS] apply automatic updates
Thank you. I think I will set it to yes on centos as well for the desktops.

On 08/10/2018 11:23 AM, Pat Riehecky wrote:

That is a bit of a complex question. From the SL side I can point you towards: http://ftp.scientificlinux.org/linux/scientific/7x/x86_64/release-notes/#_sl_provides_automatic_updates

On 08/10/2018 11:11 AM, Ken Teh wrote:

I noticed that apply_updates in /etc/yum/yum-cron.conf is set to No on a centos 7 system while it is set to Yes on SL7x. Is there any reason not to set it to yes for centos? I have a cron job that emails the user assigned to the desktop to reboot their machine when updates have been installed. With apply_updates set to No, the job fails to detect the install with 'yum history'. I can fix this but I was wondering if there is some reason why I shouldn't configure the daily yum cron the same way, ie, apply_updates=yes. Thanks.
apply automatic updates
I noticed that apply_updates in /etc/yum/yum-cron.conf is set to No on a centos 7 system while it is set to Yes on SL7x. Is there any reason not to set it to yes for centos? I have a cron job that emails the user assigned to the desktop to reboot their machine when updates have been installed. With apply_updates set to No, the job fails to detect the install with 'yum history'. I can fix this but I was wondering if there is some reason why I shouldn't configure the daily yum cron the same way, ie, apply_updates=yes. Thanks.
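The setting itself is a one-line change. Here is a sketch of flipping it the way SL ships it, practiced on a copy so nothing needs root (the real file is /etc/yum/yum-cron.conf, as in the post):

```shell
# Work on a copy; fall back to a minimal sample if the real file is absent.
cp /etc/yum/yum-cron.conf /tmp/yum-cron.conf 2>/dev/null || \
  printf 'apply_updates = no\n' > /tmp/yum-cron.conf
# Flip the switch, then confirm the change took.
sed -i 's/^apply_updates = no$/apply_updates = yes/' /tmp/yum-cron.conf
grep '^apply_updates' /tmp/yum-cron.conf
```

With apply_updates = yes, completed transactions show up in `yum history`, which is what the reboot-notification cron job keys on.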
Re: 389-ds fastbugs or epel
Never mind. I was confused by the yum list output. I thought I was looking at only the epel listing as I thought I had disabled all other repos except epel. On 11/28/2017 11:48 AM, Ken Teh wrote: Epel has many more 389-ds rpms as opposed to the main SL repos (specifically, fastbugs) which only has 3 rpms. Can someone advise which ones to install? From epel or SL?
389-ds fastbugs or epel
Epel has many more 389-ds rpms as opposed to the main SL repos (specifically, fastbugs) which only has 3 rpms. Can someone advise which ones to install? From epel or SL?
SL signing keys
On the first update of a newly installed system, there are SL signing keys that have to be installed. Yum prompts for confirmation. Is there a way to install the keys before the first yum update? Are they in an rpm somewhere? Thanks.
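One approach, assuming the keys land in /etc/pki/rpm-gpg on the installed system (on SL they ship with the sl-release package; check the exact filenames on your release): import them yourself before the first update, e.g. in a kickstart %post.

```shell
# Import every distribution key up front so the first 'yum update'
# never has to prompt for confirmation.
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-*

# Alternatively, 'yum -y update' answers the key-import prompt for you.
```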
Re: tip: Secondary Selection clipboard
Time to hang it up? I use the clipboard all the time especially when I'm coding. Multiple terminals each running a copy of vim. I notice that many young programmers also use terminals and vim (or neovim) on Macs. At least on videos of programming topics I'm interested in. Do they not use the clipboard?

On a more philosophical note: I recall reading that X11 was all about capability and not policy. People who design software nowadays seem to be all about policy and not capability. This is how you should do things. F**k you if you don't get it. Very un-unix if you ask me. The one feature I love about unix is the countless ways one can combine its command line utilities to solve problems. Feeds the creative side of me, methinks. That's why I never got much into GUIs. Oh well, the future belongs to the young. Maybe it is time to hang it up.

On 06/27/2017 08:05 AM, Tom H wrote:

On Tue, Jun 27, 2017 at 8:19 AM, Andrew C Aitchison wrote: On Tue, 27 Jun 2017, Tom H wrote: On Tue, Jun 20, 2017 at 4:38 PM, ToddAndMargo wrote: I have been using UNIX and Linux for over 25 years and did not realize X11 has four clipboards. I recently discovered the Secondary Selection clipboard. It really saves a bunch of time when I am programming as I don't lose my cursor's hot spot. Here is a great 8 minute video demonstrating all four clipboards. It is a must-learn for anyone using Linux. http://www.cs.man.ac.uk/~chl/Secondary-Selection.mp4 To support this clipboard, your program has to use the GTK Toolkit.

Thanks. I didn't know about this secondary clipboard. I've just tried it on my laptop running Ubuntu 17.10 but it didn't work. I suspect that it's been deep-sixed in Gnome Shell and Unity.

I was interested in the secondary clipboard too, and looked at http://www.cs.man.ac.uk/~chl/secondary-selection.html which makes clear that this is not a standard gtk feature; there are experimental modified gtk3 libraries which support secondary selection (no source yet).
gtk3 means it doesn't run on SL6, so I haven't been able to explore further. The author of "Secondary-Selection.mp4" asked about it on the gtk development list https://mail.gnome.org/archives/gtk-devel-list/2016-August/msg00036.html and the answer was https://mail.gnome.org/archives/gtk-devel-list/2016-August/msg00037.html Part of the response: We still (optionally) support the PRIMARY selection on the X11 backend, and some compatibility layer for it on Wayland, but we have no plans on adding support for the SECONDARY selection, as it's both barely specified and, like the PRIMARY, highly confusing for anybody who is not well-versed in 20+ years of use of textual interfaces on the X Windows System. Personally, I would have jettisoned the PRIMARY selection a long time ago as well, but apparently a very vocal minority is still holding tight to that particular Easter egg. Adding support for the even more esoteric SECONDARY selection on the X11 backend when we're trying to move the Linux world towards the more modern and less legacy-ridden Wayland display system would be problematic to say the least, and an ill fit for the majority of graphical user experiences in use these days.
Re: tip: Secondary Selection clipboard
Haha. I've been on fedora for almost a year and learning to unlearn everything I've learnt about Unix and X11 over 25 years.

On 06/27/2017 06:23 AM, Tom H wrote:

On Tue, Jun 20, 2017 at 4:38 PM, ToddAndMargo wrote: I have been using UNIX and Linux for over 25 years and did not realize X11 has four clipboards. I recently discovered the Secondary Selection clipboard. It really saves a bunch of time when I am programming as I don't lose my cursor's hot spot. Here is a great 8 minute video demonstrating all four clipboards. It is a must-learn for anyone using Linux. http://www.cs.man.ac.uk/~chl/Secondary-Selection.mp4 To support this clipboard, your program has to use the GTK Toolkit.

Thanks. I didn't know about this secondary clipboard. I've just tried it on my laptop running Ubuntu 17.10 but it didn't work. I suspect that it's been deep-sixed in Gnome Shell and Unity.
Re: nmcli question
On 04/10/2017 10:59 AM, Tom H wrote:

The lead NM developer has replied to you on fedora-devel@ or fedora-user@ in the past that he and his fellow NM developers have worked hard to add to NM configuration options for complex server setups as well as a cli tool for managing settings. Sadly, NM seems to be a project that can do nothing right in the eyes of its users even though it's left the flakiness of its early years behind.

I'm not sure why I'm jumping into the fray but this paragraph struck me as exactly why network manager is anathema to so many of us. Even if it is not as flaky as it used to be. I'm living with it but I can't tell you the number of times nmcli and firewall-cmd have made my blood pressure go up. The latter is even worse with its option-style subcommands whose choices are near impossible to remember. Is it list-all, zone-get-info, zone-list-all? Wtf?
Re: yum update initial setup hang up
I solved my problem by doing the install "interactively" and updating initial-setup separately. I must have missed something thinking I could do an install, reboot, remote login, and update. On 02/27/2017 11:02 AM, Ken Teh wrote: I encountered this error twice now. Initial install of 7x via netinst.iso works. Then, I log in remotely to run yum update. Update proceeds but during cleanup, yum hangs. Apparently cleaning up initial-setup. I need some instructions on how to debug this problem. Thanks.
Re: Connie Sieh, founder of Scientific Linux, retires from Fermilab
Congratulations, Connie. I recall your gallery of pictures in earlier versions of the SL installer. I wish you the very best light for the many shots to come in your retirement. Ken On 02/24/2017 03:52 PM, Bonnie King wrote: Friends, The Scientific Linux team is at once happy and sad to announce Connie Sieh's retirement after 23 years. Today is her last full-time day at Fermilab. Connie Sieh founded the Fermi Linux and Scientific Linux projects and has worked on them continuously. She has sometimes preferred to toil behind the scenes and leave public announcements to others, but has always been a driving force behind the projects. The Scientific Linux story started in the late 1990s when Connie's group explored using commodity PC hardware and Linux as an alternative to commercial servers with proprietary UNIX operating systems. From the distributions available at the time, Red Hat Linux was chosen. In 1998, Connie announced Fermi Linux at HEPiX, a semi-annual meeting of High Energy Physics IT staff. Fermi Linux was a customized and re-branded version of Red Hat Linux with some tweaks for integration with the Fermilab environment. It also introduced an installer modification called Workgroups, a framework to customize package sets for use at different sites and for different purposes. The Workgroups concept lives on today in the form of Contexts for SL7. In October 2003 TUV changed their product model and introduced Red Hat Enterprise Linux. Enterprise Linux was no longer freely distributed in binary form, but sources remained available. Connie and her colleagues started building from these sources, creating one of the first Enterprise Linux rebuilds. A preview, dubbed HEPL, was presented at spring HEPiX 2004. In May 2004, the rebuild was released as Scientific Linux. The name was chosen to reflect the goals and user base of the product. Our colleagues at CERN collaborated, customizing and using Scientific Linux as Scientific Linux CERN (SLC). 
SL became a standard OS for Scientific Computing in High Energy Physics at Fermilab, CERN and beyond. SL is freely available to the general public, and is a popular Enterprise Linux rebuild. As a result, it has built a community outside of Fermilab and HEP. With gratitude, the Scientific Linux team would like to recognize Connie's many years of service and her immense contribution to the project she founded. Connie's outstanding technical and non-technical judgement is the foundation of Scientific Linux. Her legacy will continue to inform the way we run SL and we hope she'll remain a collaborator. All the best to Connie in her well-earned retirement. She will be dearly missed!
Re: systemd/journald teething pains
Never mind. A bare syslog(3) works. Problem is elsewhere. On 01/24/2017 08:32 AM, Ken Teh wrote: I'm debugging some code that logs messages via syslog. I was under the assumption that syslog messages would display with journalctl. But I'm not seeing them. I tried using logger and it also does not display unless I explicitly say 'logger --journald' which suggests that I still need syslogd. Is this true? What gives?
systemd/journald teething pains
I'm debugging some code that logs messages via syslog. I was under the assumption that syslog messages would display with journalctl. But I'm not seeing them. I tried using logger and it also does not display unless I explicitly say 'logger --journald' which suggests that I still need syslogd. Is this true? What gives?
Re: srpms?
Thanks Connie, I looked on our mirror. Apparently, they've not bothered to host the SRPMS folder. Another instance of "use the source, Ken!". Thanks again!

On 01/12/2017 02:59 PM, Connie Sieh wrote:

There was a discussion on the issue of srpms, or the lack thereof, in SL7. I don't build packages much so did not pay attention to it. I am now in need of an srpm that shows how a particular rpm was built. Specifically, the stuff in the spec file. I need the package for a Fedora 25 system where it is not available except as a tarball. Is there a quick write-up on how one handles this nowadays, setting aside the pros and cons of this issue? thanks

There is no lack of srpms for SL7 because we create srpms out of the "src" code that RedHat provides. RedHat does not provide publicly downloadable srpms for RHEL 7 like they used to. RHEL 7 srpms are available if one has a RHEL 7 subscription. SL 7 srpms are available at ftp://ftp.scientificlinux.org/linux/scientific/7x/SRPMS/
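A sketch of the inspect-and-rebuild workflow using that archive ("foo" and its exact path under SRPMS are placeholders, not a real package):

```shell
# Fetch a source rpm from the SL SRPMS tree and unpack it without
# installing anything; the spec file comes out alongside the sources.
curl -O ftp://ftp.scientificlinux.org/linux/scientific/7x/SRPMS/foo-1.0-1.el7.src.rpm
rpm2cpio foo-1.0-1.el7.src.rpm | cpio -idmv
less foo.spec

# To rebuild it on a Fedora box instead of just reading the spec:
rpmbuild --rebuild foo-1.0-1.el7.src.rpm
```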
Re: firewalld help
Ok. I see the problem now. Default routes have always been a bit of a mystery to me. Based on your reply, I manually deleted the default route for enp3s0 to confirm it works. Then, I edited the connection with nmcli to remove the default permanently across reboots. For everyone's benefit, the property setting is ipv4.never-default in nmcli.

On 11/10/2016 09:02 AM, Stephan Wiesand wrote:

On 10 Nov 2016, at 15:41, Ken Teh <t...@anl.gov> wrote:

Default routes on the failing system.

[root@saudade ~]# ip --details route
unicast default via 192.168.203.1 dev enp3s0 proto static scope global metric 100
unicast default via 146.139.198.1 dev enp4s0 proto static scope global metric 101
unicast 146.139.198.0/23 dev enp4s0 proto kernel scope link src 146.139.198.23 metric 100
unicast 192.168.203.0/24 dev enp3s0 proto kernel scope link src 192.168.203.39 metric 100

This suggests that saudade will send the response packets through enp3s0, unless the request originates from "the same subnet" (146.139.198.0/23). Is that expected to work? You could check this with tcpdump.

On 11/10/2016 08:27 AM, Stephan Wiesand wrote:

On 10 Nov 2016, at 15:09, Ken Teh <t...@anl.gov> wrote:

I'm trying to isolate a network problem and I need some debugging help. Frustrating when I am not fluent in the new sys admin tools. Symptom is as follows: I have a machine running Fedora 24 with its firewall zone set to work. I cannot ping the machine except from the same subnet. I don't have this problem with a second machine running the same OS/rev with the same firewall setup. I'm not sure where to look. I've dumped out both machines' iptables. See attachment. I did a diff -y and they look almost identical. The machine that does not work has 2 nics, one of which is connected to a 192.168 network. It has additional rules in the various chains but they are all "from anywhere to anywhere". I'm assuming the additional rules come from the second interface.
I've put a query to my networking folks to see if the problem is further upstream. But I thought I'd ask if I have missed something obvious. What's the default route on the "failing" system? I know it's not SL7 but they use the same tools: nmcli and firewall-cmd.
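For the record, the permanent fix boils down to two nmcli commands (the connection name is assumed to match the interface name here; yours may differ, see `nmcli connection show`):

```shell
# Stop NetworkManager from installing a default route on the private NIC.
nmcli connection modify enp3s0 ipv4.never-default yes
nmcli connection up enp3s0     # re-activate so the routing table updates
ip route show default          # only the enp4s0 default should remain
```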
Re: firewalld help
Default routes on the failing system.

[root@saudade ~]# ip --details route
unicast default via 192.168.203.1 dev enp3s0 proto static scope global metric 100
unicast default via 146.139.198.1 dev enp4s0 proto static scope global metric 101
unicast 146.139.198.0/23 dev enp4s0 proto kernel scope link src 146.139.198.23 metric 100
unicast 192.168.203.0/24 dev enp3s0 proto kernel scope link src 192.168.203.39 metric 100

On 11/10/2016 08:27 AM, Stephan Wiesand wrote:

On 10 Nov 2016, at 15:09, Ken Teh <t...@anl.gov> wrote:

I'm trying to isolate a network problem and I need some debugging help. Frustrating when I am not fluent in the new sys admin tools. Symptom is as follows: I have a machine running Fedora 24 with its firewall zone set to work. I cannot ping the machine except from the same subnet. I don't have this problem with a second machine running the same OS/rev with the same firewall setup. I'm not sure where to look. I've dumped out both machines' iptables. See attachment. I did a diff -y and they look almost identical. The machine that does not work has 2 nics, one of which is connected to a 192.168 network. It has additional rules in the various chains but they are all "from anywhere to anywhere". I'm assuming the additional rules come from the second interface. I've put a query to my networking folks to see if the problem is further upstream. But I thought I'd ask if I have missed something obvious.

What's the default route on the "failing" system?

I know it's not SL7 but they use the same tools: nmcli and firewall-cmd.
firewalld help
I'm trying to isolate a network problem and I need some debugging help. It's frustrating when I am not fluent in the new sysadmin tools. The symptom is as follows: I have a machine running Fedora 24 with its firewall zone set to 'work'. I cannot ping the machine except from the same subnet. I don't have this problem with a second machine running the same OS/rev with the same firewall setup, and I'm not sure where to look. I've dumped out both machines' iptables (see attachment). I did a diff -y and they look almost identical. The machine that does not work has 2 NICs, one of which is connected to a 192.168 network. It has additional rules in the various chains, but they are all "from anywhere to anywhere"; I'm assuming the additional rules come from the second interface. I've put a query in to my networking folks to see if the problem is further upstream, but I thought I'd ask if I have missed something obvious. I know it's not SL7, but they use the same tools: nmcli and firewall-cmd.

Chain INPUT (policy ACCEPT)
target                    prot opt source    destination
ACCEPT                    all  --  anywhere  anywhere     ctstate RELATED,ESTABLISHED
ACCEPT                    all  --  anywhere  anywhere
INPUT_direct              all  --  anywhere  anywhere
INPUT_ZONES_SOURCE        all  --  anywhere  anywhere
INPUT_ZONES               all  --  anywhere  anywhere
DROP                      all  --  anywhere  anywhere     ctstate INVALID
REJECT                    all  --  anywhere  anywhere     reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target                    prot opt source    destination
ACCEPT                    all  --  anywhere  anywhere     ctstate RELATED,ESTABLISHED
ACCEPT                    all  --  anywhere  anywhere
FORWARD_direct            all  --  anywhere  anywhere
FORWARD_IN_ZONES_SOURCE   all  --  anywhere  anywhere
FORWARD_IN_ZONES          all  --  anywhere  anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere  anywhere
FORWARD_OUT_ZONES         all  --  anywhere  anywhere
DROP                      all  --  anywhere  anywhere     ctstate INVALID
REJECT                    all  --  anywhere  anywhere     reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target                    prot opt source    destination
OUTPUT_direct             all  --  anywhere  anywhere

Chain FORWARD_IN_ZONES (1 references)
target     prot opt source    destination
FWDI_work  all  --  anywhere  anywhere   [goto]
FWDI_work  all  --  anywhere  anywhere   [goto]
FWDI_work  all  --  anywhere  anywhere   [goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target     prot opt source    destination

Chain FORWARD_OUT_ZONES (1 references)
target     prot opt source    destination
FWDO_work  all  --  anywhere  anywhere   [goto]
FWDO_work  all  --  anywhere  anywhere   [goto]
FWDO_work  all  --  anywhere  anywhere   [goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target     prot opt source    destination

Chain FORWARD_direct (1 references)
target     prot opt source    destination

Chain FWDI_work (3 references)
target           prot opt source    destination
FWDI_work_log    all  --  anywhere  anywhere
FWDI_work_deny   all  --  anywhere  anywhere
FWDI_work_allow  all  --  anywhere  anywhere
ACCEPT           icmp --  anywhere  anywhere

Chain FWDI_work_allow (1 references)
target     prot opt source    destination

Chain FWDI_work_deny (1 references)
target     prot opt source    destination

Chain FWDI_work_log (1 references)
target     prot opt source    destination

Chain FWDO_work (3 references)
target           prot opt source    destination
FWDO_work_log    all  --  anywhere  anywhere
FWDO_work_deny   all  --  anywhere  anywhere
FWDO_work_allow  all  --  anywhere  anywhere

Chain FWDO_work_allow (1 references)
target     prot opt source    destination

Chain FWDO_work_deny (1 references)
target     prot opt source    destination

Chain FWDO_work_log (1 references)
target     prot opt source    destination

Chain INPUT_ZONES (1 references)
target     prot opt source    destination
IN_work    all  --  anywhere  anywhere   [goto]
IN_work    all  --  anywhere  anywhere   [goto]
IN_work    all  --  anywhere  [attachment truncated]
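A sketch of commands for narrowing this down. The zone name 'work' comes from the post above; everything else here is an assumption and the commands need root and a running firewalld:

```shell
# On each of the two machines, compare how firewalld sees things:
firewall-cmd --get-active-zones               # which zone each NIC landed in
firewall-cmd --zone=work --list-all           # services/ports/icmp settings
firewall-cmd --zone=work --query-icmp-block=echo-request
# And look at the live counters on the zone's INPUT chain:
iptables -L IN_work -n -v --line-numbers
```

If the second NIC ended up in a different zone than 'work', that alone could explain why pings from other subnets behave differently on the two machines.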
nfsroot post install configure checklist
I've made a minimal install with the --installroot option into a folder that I want to export as an NFS root. I'm wondering if someone has a checklist of what I should do post-install to configure the files in the install. I can think of some obvious ones, but it looks like painstaking work to go through all the files in /etc. One thought I had was kickstart: since a kickstart install does configuration as well, it should be possible to trace a kickstart install to see what it does, which files it updates, etc. I use kickstart all the time but have never bothered to poke around in how it does things. I'd appreciate any pointers, suggestions, alternatives, etc. Thanks.
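For what it's worth, a hedged sketch of the first post-install edits I'd expect; the export path and server name are made-up examples, not a tested checklist:

```shell
ROOT=/export/nfsroot          # the --installroot target (example path)
# fstab: root comes over NFS, so the disk entries must be replaced:
cat > $ROOT/etc/fstab <<'EOF'
nfsserver:/export/nfsroot  /     nfs    defaults  0 0
tmpfs                      /tmp  tmpfs  defaults  0 0
EOF
# per-client identity (hostname, network config) usually has to come from
# DHCP or the kernel command line when the root is shared; check what the
# install left behind:
grep -r HOSTNAME $ROOT/etc/sysconfig/network* 2>/dev/null
```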
Re: SL 7.2 on a HP Zbook
If you grep the rc init files for hwclock, you will find it in halt. You can't grep systemd; all you can do is read the man pages, and there are a lot of man pages to read. :( On 10/07/2016 01:28 PM, stod...@pelletron.com wrote: - Original Message - From: "Bill Askew" To: scientific-linux-us...@listserv.fnal.gov Sent: Friday, October 7, 2016 1:14:04 PM Subject: SL 7.2 on a HP Zbook Hi everyone, I am using SL 7.2 on an HP Zbook. So far the only issue I have is that setting the date and time does not set the Zbook's hardware clock. It does change the time for the duration of the session, but when the Zbook is rebooted the time goes back to what it was before, plus the amount of time I spent during the session. Does anyone have a fix for this? Thanks There is probably a more "systemd" type method, but this as root has always worked: hwclock --systohc
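For reference, the classic command plus what I believe are the systemd-era equivalents; treat the timedatectl lines as a sketch and check the man page on your system:

```shell
hwclock --systohc             # push the current system time into the RTC (as root)
timedatectl status            # compare "RTC time" against system time
timedatectl set-local-rtc 0   # keep the RTC in UTC
timedatectl set-ntp true      # let chronyd/timesyncd keep both in sync
```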
Re: Python 2.7 OS requirements
I suggest trying Anaconda from Continuum Analytics. It installs into /opt and provides its own ecosystem, i.e., all the support libraries it needs. Because of this, it will run on an SL6 machine. The install script gives you the option of installing it under a different root. It provides numpy, scipy, matplotlib, pandas, jupyter, etc., for data analysis, and pretty much whatever else you need. It comes in a Python 2.7 and a Python 3.5 version. I used the 3.5 version and can vouch for it. If it doesn't have a package, you can simply install it by running its version of pip, which installs into its ecosystem. But I've actually never had to, because its included package set is quite complete. On 07/30/2016 09:01 PM, P. Larry Nelson wrote: To the two Stevens, Thanks for the possible solutions to this! However, I did hear back from the grad student and his response was: "I'm installing some python packages and need a higher version of numpy, which asks for python 2.7. I'll try on CERN system. Thanks!" Hopefully that's the last I'll hear of it :-) I have 4 weeks left with the U of I, I'm totally consumed working on another project involving Docker and Shifter, and don't really have the time nor the wherewithal to deal with it. - Larry Steven J. Yellin wrote on 7/30/16 8:20 PM: Another way is to get Python-2.7.12.tar.xz from https://www.python.org/downloads/, extract into directory Python-2.7.12 with 'tar -xJf Python-2.7.12.tar.xz', and see its README file for what to do next to get it into /usr/local. Steven Yellin On Sun, 31 Jul 2016, Steven Haigh wrote: You can look at virtualenv from EPEL. You can install a separate python environment in a user's home directory. On 31/07/16 09:36, P. Larry Nelson wrote: Hi all, Please don't shoot the questioner (me), as I have no experience with Python, other than knowing "what" it is and that my SL6.8 systems have version 2.6.6 installed.
I have been asked by one of our Professors to install Python 2.7.x on our cluster for one of his grad students (optimally in /usr/local, which is an NFS-mounted dir everywhere). In my brief Googling, I have not found OS requirements for 2.7.x, but have inferred that it probably needs SL7.x. Can anyone confirm that? Or has anyone installed Python 2.7.x (and which .x?) on an SL6.8 system without replacing 2.6.x? I'm guessing this can be quite a morass to delve into: when I do an 'rpm -qa|grep -i python|wc', it returns 67 rpms with python in the rpm name! If the solution is indeed simple, I might proceed; otherwise, I'm inclined to reply to the Professor and student, "No way - won't work." I think the student probably has access to CERN systems that probably have what he's looking for. I've followed up with that inquiry to the student and am waiting to hear back. Thanks! - Larry -- Steven Haigh Email: net...@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897
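If SL7 is not an option, the source-build route Steven Yellin mentions goes roughly like this; it is a sketch only (version and prefix are examples), and 'make altinstall' is the important part, since it will not overwrite the system python 2.6:

```shell
tar -xJf Python-2.7.12.tar.xz
cd Python-2.7.12
./configure --prefix=/usr/local
make
make altinstall     # installs /usr/local/bin/python2.7, leaves 'python' alone
/usr/local/bin/python2.7 -V
```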
Re: printing a man page
Thanks for the tip. Grog outputs 'groff -t -man', which is what 'man -t' does, so the error is still there. The error message explicitly says how to fix the problem: error: page 11: table will not fit on one page; use .TS H/.TH with a supporting macro package. If I read this right, it says to add troff markup to the man page source and rerun groff with one or more additional macro packages. Chasing this down will take me too far afield. The firewalld site has the man pages in HTML and they print nicely via the browser. On 06/24/2016 11:23 PM, James Cloos wrote: "KT" == Ken Teh <t...@anl.gov> writes: KT> Does anyone know enough groff to help me print this man page? KT> # man -t firewall-cmd > /tmp/firewall-cmd.ps Copy the source man page to someplace like /tmp, run grog on it to see what options are required (for things like tables, equations and the like) and then add -Txhtml or -Thtml to the arguments. grohtml specifies an extremely long page length (infinite is unavailable), so the table should work. And then use a browser to print the x?html. -JimC
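For the archives, James's HTML route spelled out in commands; the man page path is the usual one on SL7, but verify it locally:

```shell
zcat /usr/share/man/man1/firewall-cmd.1.gz > /tmp/firewall-cmd.1
grog /tmp/firewall-cmd.1             # shows the options needed, e.g. 'groff -t -man'
groff -t -man -Thtml /tmp/firewall-cmd.1 > /tmp/firewall-cmd.html
# then open the HTML in a browser and print from there
```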
Re: printing a man page
I *am* working on a headless server. But I'll keep yelp in mind. I decided to try the firewalld project home page. They have the manual pages on the web. Prints very nicely via the browser. On 06/24/2016 04:33 PM, Jim Campbell wrote: What about trying this: yelp man:firewall-cmd . . . and then using yelp to find and print the appropriate page? I am pretty sure that yelp can be used to print (and I know that you can use it to at least view man pages). Of course, this is all moot if you are working from a server or don't have yelp installed. Jim On Fri, Jun 24, 2016, at 04:28 PM, Mark Stodola wrote: On 06/24/2016 03:30 PM, Ken Teh wrote: Does anyone know enough groff to help me print this man page? # man -t firewall-cmd > /tmp/firewall-cmd.ps :397: warning [p 4, 4.4i]: can't break line :434: warning [p 4, 6.8i]: can't break line :446: warning [p 4, 7.9i]: can't break line error: page 11: table will not fit on one page; use .TS H/.TH with a supporting macro package I haven't tried generating a postscript in a while, but there is a man2html that does a pretty decent job. Unfortunately it doesn't like gzipped man pages it seems, so it might be easiest to copy the firewall-cmd.1.gz (or whatever section it is) to /tmp, gunzip it, then run man2html on it. You could also get fancy with 'man' options and piping things together if you want. Digging in an old conversion script I have, I have done: groff -mandoc source_file > dest_file.ps The default is ps output, so it looks like the same command as your 'man -t'. Experience from compiling docbook/xml contents, the warnings/errors you are seeing are familiar. Rewriting an entire table structure just to get a postscript seems a waste of time.
Re: what runs libvirt?
libvirt's website has instructions on how to run a system dnsmasq alongside libvirt's own instance. The trick is to add 'bind-interfaces' to dnsmasq.conf and to explicitly specify the listening address or interface. On 06/24/2016 10:12 AM, Mark Stodola wrote: On 06/24/2016 09:48 AM, Ken Teh wrote: I was trying to set up dnsmasq and discovered it's already running. Apparently as part of libvirt. Why is libvirt started? What starts it? I tried looking through systemd output but the only thing about systemd that I can understand are its services. Everything else is so far gobbledy-gook. I ran into this recently on my Fedora laptop. It was quite annoying/frustrating to find out about this default configuration. I issued a 'systemctl stop libvirtd' and 'systemctl disable libvirtd' to disable it. It is used for the virtualization system, which relies on dnsmasq for the virtual lan these days... It uses a different configuration file from the normal /etc/dnsmasq.d/ files, or wherever they live these days. After that, I was able to configure it as I normally do and start it using 'systemctl start dnsmasq'. If you rely on it for virtualization, you probably have to go fiddle with libvirtd's alternate dnsmasq config files to add the options you need for other purposes. This wasn't the case for me.
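A minimal sketch of the dnsmasq side of this; eth0 and 192.168.1.1 are placeholders for the real LAN interface and address:

```
# /etc/dnsmasq.conf
bind-interfaces              # bind only the named interfaces instead of the
                             # wildcard address, so libvirt's dnsmasq can keep
                             # its own sockets on the virtual bridge
interface=eth0               # LAN-facing NIC (placeholder)
listen-address=192.168.1.1   # or pin an explicit address (placeholder)
```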
Re: [SCIENTIFIC-LINUX-USERS] what runs libvirt?
Thanks for the tip. Very useful, especially since a list-units dumps out a huge list, with many more entries than the list of files in /etc/init.d. On 06/24/2016 10:34 AM, Pat Riehecky wrote: On 06/24/2016 09:48 AM, Ken Teh wrote: I was trying to set up dnsmasq and discovered it's already running. Apparently as part of libvirt. Why is libvirt started? What starts it? I tried looking through systemd output but the only thing about systemd that I can understand are its services. Everything else is so far gobbledy-gook. Perhaps: systemctl status will help track it down. Pat
what runs libvirt?
I was trying to set up dnsmasq and discovered it's already running. Apparently as part of libvirt. Why is libvirt started? What starts it? I tried looking through systemd output but the only thing about systemd that I can understand are its services. Everything else is so far gobbledy-gook.
sl7 iptables firewalld
I'm trying to set up NAT on an SL7.x machine. I know how to do it via iptables but am a little hesitant because of firewalld. It's obvious from the lack of /etc/sysconfig/iptables that the iptables configuration is stored elsewhere, probably in several XML files. I'm going to try to do it via 'firewall-cmd --direct' in the hope that my reconfiguration persists across reboots. I dumped out the nat table. There are several chains that did not exist in SL6.x; they appear to be stubs. Does anyone know what their intended purpose is? For example, my default zone is 'work' and I see, among others, POST_work, POST_work_log, POST_work_deny, POST_work_allow, etc. The POSTROUTING chain also contains several targets with explicit rules on 192.168.122.0/24. Googling says they are libvirt related. I suppose I could retain them. Does anyone know if things will break if I delete them? It's a NAT gateway, not a virtualization server.
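A hedged sketch of the firewalld route; the zone name 'work' comes from the post, the NIC names eth0/eth1 are assumptions, and everything needs root:

```shell
# Simplest: let the zone do the NAT.
firewall-cmd --permanent --zone=work --add-masquerade
# Or, closer to a classic iptables setup, via the direct interface
# (these are stored permanently and survive reboots):
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 \
    -o eth0 -j MASQUERADE
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 \
    -i eth1 -o eth0 -j ACCEPT
firewall-cmd --reload
# Don't forget forwarding itself (net.ipv4.ip_forward=1 in sysctl.conf):
echo 1 > /proc/sys/net/ipv4/ip_forward
```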
Re: how to troubleshoot a ups
It's not clear to me that the UPS is at fault. There is no indication on the front panel that anything is wrong with it, except for the multiple on-battery/on-line messages a day. I have other UPSes from APC and they don't do this except very occasionally. If it were a self-test, I'd expect some regularity, say, once or twice a day when the UPS switches to battery, then back to line power during the self-test. It seems to me either the circuit it is plugged into is really bad, i.e., large swings in voltage, or the UPS is overly sensitive to it, which suggests the UPS is bad, or at least cannot be relied upon to function properly in a real event. On 11/09/2015 08:31 AM, ONeal, Miles wrote: Ken, Why do you think the UPS is at fault? -Miles On Nov 9, 2015, at 07:13, Ken Teh <t...@anl.gov> wrote: I need some advice on how to troubleshoot an APC smart ups. I am getting pairs of onbatt/online messages from nut 3-4 times a day, with no particular regularity. My first attempt at a fix was to set the power quality to fair to see if that would help. Nope. There is an advanced config option called the transfer setting, which I gather is a specific value for triggering an on-battery transition. I've not tried this. I'm thinking of trying an alternative circuit to see if that helps. But all this is basically stabbing in the dark. I've tried to find a write-up on how one goes about diagnosing UPS problems like this, but no luck. I would appreciate any advice you have. Thanks.
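Since nut is already in the picture, its tools can show what the UPS itself reports; 'myups' is a placeholder for the name in ups.conf, and whether the transfer points are writable depends on the model:

```shell
# What does the UPS report about input voltage and transfer thresholds?
upsc myups@localhost | grep -Ei 'status|input|transfer'
# List which variables are read-write on this unit:
upsrw myups@localhost
# If input.transfer.low / input.transfer.high are writable, widen them, e.g.:
upsrw -s input.transfer.low=97 -u admin myups@localhost
```

Watching input.voltage around the time of an onbatt event would show whether the line really sags or the UPS is just trigger-happy.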
installing devtoolset-3 on sl6x
I don't seem to be able to install devtoolset-3 on an SL6.x x86_64 system. I've done it successfully with devtoolset-2 on an i686 system. My procedure is: 1. Wget the yum-conf-devtoolset rpm and install it. 2. Do a # yum --enablerepo=devtoolset install devtoolset-3-toolchain It just comes back and says there is no such available package, though the package is listed clear as day on the mirror. I've tried this on 2 different systems and the result is the same. I'm baffled. What am I missing? Thanks.
Re: help debugging a kickstart install
I looked up the documentation on %pre and its example. I see what you are saying. Thanks for the tip. I usually use kickstart via NFS, so I have a copy of the kickstart that installed the machine. On 10/12/2015 09:34 AM, Nico Kadel-Garcia wrote: On Mon, Oct 12, 2015 at 10:14 AM, Ken Teh <t...@anl.gov> wrote: I'm having problems with a 6.7 install. Here are the relevant lines: # partitions #clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297 part /boot --fstype=ext4 --size=1024 --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 part pv.01 --size=1 --grow --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 volgroup sysvg pv.01 logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root Kickstart stops when trying to create the swap logical volume; it claims there is no such sysvg volume. I did an alt-F2 and ran parted on the disk. The 'part' command never created the partitions. This is my first time using the 'disk/by-id/...' syntax. Also, first time with an SSD disk. I checked /dev/disk/by-id and the disk is listed with the correct id. Don't hurt yourself. That "disk/by-id", or using UUID, is not stable. If you need to ensure particular disk layouts, put in a '%pre' statement to partition things the way *you* want in a saveable, scriptable format, and use the resulting LABEL or LVM based volumes to hand off to the rest of the kickstart configuration. The anaconda disk configuration tools are powerful, but awfully confusing and very difficult to get right if you try to do *anything* that is not bog standard. And the "system-config-kickstart" GUI for resetting kickstart files is not much help: it profoundly reformats the kickstart file you start with, and throws out multiple "%pre" or "%post" steps. And ooohh, if you're using kickstart files?
Put in a "%post --nochroot" to copy /tmp/ks.cfg to /mnt/sysimage/root/ks.cfg, so that you have an actual copy of the kickstart file you actually used on that particular system! Has anyone tried the ssh option with kickstart? I understand you can ssh to the machine and monitor it during the installation. The one advantage I can see is the saved lines in a terminal window instead of the 80x24 console. I've not tried that; I'm not sure the SSH binaries are even in the CD boot images: I don't see them in the "boot.iso" images.
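To make the %pre suggestion concrete, a hypothetical fragment along these lines (EL6 kickstart syntax; the device name /dev/sda and the sizes are placeholders, not the thread's actual disk):

```
%pre
# partition the disk ourselves, deterministically, before 'part' runs
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext4 1MiB 1025MiB   # /boot
parted -s /dev/sda mkpart primary 1025MiB 100%        # LVM PV
parted -s /dev/sda set 2 lvm on
%end

%post --nochroot
# keep a copy of the kickstart that actually installed this box
cp /tmp/ks.cfg /mnt/sysimage/root/ks.cfg
%end
```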
help debugging a kickstart install
I'm having problems with a 6.7 install. Here are the relevant lines: # partitions #clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297 part /boot --fstype=ext4 --size=1024 --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 part pv.01 --size=1 --grow --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 volgroup sysvg pv.01 logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root Kickstart stops when trying to create the swap logical volume; it claims there is no such sysvg volume. I did an alt-F2 and ran parted on the disk. The 'part' command never created the partitions. This is my first time using the 'disk/by-id/...' syntax. Also, first time with an SSD disk. I checked /dev/disk/by-id and the disk is listed with the correct id. I'm going to prep the partitions with a rescue CD and try again. I'd appreciate any suggestions you may have for debugging this. Has anyone tried the ssh option with kickstart? I understand you can ssh to the machine and monitor it during the installation. The one advantage I can see is the saved lines in a terminal window instead of the 80x24 console.
Re: help debugging a kickstart install
Good grief! Vielen Dank! On 10/12/2015 09:27 AM, Stephan Wiesand wrote: On 12 Oct 2015, at 16:14, Ken Teh <t...@anl.gov> wrote: I'm having problems with a 6.7 install. Here are the relevant lines: # partitions #clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297 part /boot --fstype=ext4 --size=1024 --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 part pv.01 --size=1 --grow --asprimary --ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297 volgroup sysvg pv.01 logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root Kickstart stops when trying to create the swap logical volume; it claims there is no such sysvg volume. Does it help to fix the typo in "--vgname=svsvg" ? I did an alt-F2 and ran parted on the disk. The 'part' command never created the partitions. This is my first time using the 'disk/by-id/...' syntax. Also, first time with an SSD disk. I checked /dev/disk/by-id and the disk is listed with the correct id. I'm going to prep the partitions with a rescue CD and try again. I'd appreciate any suggestions you may have for debugging this. Has anyone tried the ssh option with kickstart? I understand you can ssh to the machine and monitor it during the installation. The one advantage I can see is the saved lines in a terminal window instead of the 80x24 console.
ld-linux.so.2
I have a user who has installed an executable built on another Linux distro. He claims it was built on a 64-bit Linux (doubtful). He has no problems running it on a 32-bit SL6.x machine but cannot run it on a 64-bit SL6.x machine. It chokes with the following: ...:/lib/ld-linux.so: bad ELF interpreter: No such file or directory. I'm wondering if it is safe to add a symbolic link to ld-linux-x86-64.so.2 to fix this.
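Before symlinking anything, it may be worth checking what the binary actually is; a "bad ELF interpreter" for a /lib/ld-linux.so* path on a 64-bit host usually means the executable is 32-bit and the 32-bit loader/libc (glibc.i686 on EL6) simply isn't installed. A quick check, sketched in shell (/bin/ls stands in for the user's binary):

```shell
# Byte 5 (offset 4) of an ELF file is the class: 01 = 32-bit, 02 = 64-bit.
elf_class() {
    case "$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')" in
        01) echo 32-bit ;;
        02) echo 64-bit ;;
        *)  echo "not an ELF file?" ;;
    esac
}
elf_class /bin/ls
# readelf (binutils) also shows which interpreter the binary was linked against:
command -v readelf >/dev/null && readelf -l /bin/ls | grep -i interpreter || true
```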
Re: about realtime system
I used to play with a realtime Linux system back in the 90s that had a sort of virtualization architecture. It had a realtime executive that could run realtime tasks, and one of its tasks was the Linux kernel itself; this way the realtime tasks could talk with Linux processes and make use of Linux capabilities the executive lacked. When I first worked with it, it ran the 1.3 kernel and it was really fast, with a 6µsec latency. It got progressively worse with the 2.x kernels, but was still much better than the 150µsec quoted earlier. I can't remember what it was called. Somebody from New Mexico developed it, and there was also an offshoot development by a group in Italy. On 08/27/2014 12:07 PM, Michael Duvall wrote: Hello, While I am thoroughly interested in SL topics, I rarely comment on threads. Today is an exception. I work for a real-time linux vendor. I concur with David Sommerseth's summation. Real-time cannot be achieved under virtualization. Regards, -- *Michael Duvall* Systems Analyst, Real-Time michael.duv...@ccur.com (954) 973-5395 direct (954) 531-4538 mobile CONCURRENT | 2881 Gateway Drive | Pompano Beach, FL 33069 | www.real-time.ccur.com -Original Message- *From*: David Sommerseth sl+us...@lists.topphemmelig.net *Reply-to*: scientific-linux-us...@listserv.fnal.gov *To*: John Lauro john.la...@covenanteyes.com, Paul Robert Marino prmari...@gmail.com *Cc*: Brandon Vincent brandon.vinc...@asu.edu, llwa...@gmail.com, SCIENTIFIC-LINUX-USERS@FNAL.GOV, Nico Kadel-Garcia nka...@gmail.com *Subject*: Re: about realtime system *Date*: Wed, 27 Aug 2014 12:27:50 -0400 On 24/08/14 18:57, John Lauro wrote: Why spread FUD about Vmware. Anyways, to hear what they say on the subject: http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf Anyway, KVM will not handle latency any better than Vmware. You currently cannot achieve true realtime characteristics when adding virtualization, no matter the technology. The reason is that realtime tasks must be able to preempt running tasks to be able to keep their deadlines. Consider a virtualized realtime kernel running a realtime task, on a VM host with a realtime kernel. When the task gets CPU time, it preempts all other running tasks on the provided CPU core. But if the VM host is not aware of this happening, it may just as well not give enough runtime in the right time-window to the realtime guest OS, thus increasing the latency quite noticeably. So for this to work, the guest OS kernel must be able to communicate to the host OS kernel that it has a task which needs attention right now. And AFAIK, this mechanism is not implemented anywhere. I know some research was done on this topic some years ago, and there is an interesting paper on it, but I don't know if it has come any further. http://lwn.net/images/conf/rtlws11/papers/proc/p18.pdf
sl7 systemd sysvinit
I read the following article on systemd http://ifwnewsletters.newsletters.infoworld.com/t/9625863/474699771/826094/14/ The comments suggested one could still revert to sysvinit. Is this just wishful thinking on my part?
advice on auto version upgrade with sl6x.repo
I've had 2 successful upgrades from 6.4 to 6.5 with the sl6x.repo enabled. In the past, I've never done upgrades, preferring to re-install. I'd like to know what folks are doing with respect to enabling the sl6x.repo. Is it just "enable it, it's ready for primetime", or are you still disabling it and doing a test drive on a test machine before re-enabling across all machines? thanks
Re: Creating Live CD
I run a data acquisition system I wrote under a minimal live SL system, about 250MB. I studied Urs' scripts, stole a bunch of his work, and wrote my own scripts to create my own live SL CD. My systems are still running SL5.x since I've not had time to update the scripts. They are not as nice as Urs' live CDs, but I was really after an appliance that I can cycle power on without worrying about saving data or corrupting an actual hard drive. I can definitely recommend Urs' www.livecd.ethz.ch site. If you need help, we can discuss this off-line. Ken On 03/15/2014 12:22 AM, Yogi A. Patel wrote: Hi - I develop a real-time electrophysiology platform (rtxi.org) using Scientific Linux with kernel 3.8 and the real-time layer, Xenomai (xenomai.org). I would like to create a LiveCD of my system to make it easier for users to adopt, however I am having trouble. The standard scripts only make LiveCDs of the stock Scientific Linux distribution+kernel. Any suggestions on how to accomplish this? Thank you in advance! Yogi
Re: XRandR + nVidia
I can really identify with the xkcd graphic. X11 graphics on Linux has really come a long way; it was a real struggle back in the early 90's. Florian's response is spot on: having an xorg.conf actually messes things up. I don't know why, but when you get rid of it, something automagical happens. On 03/09/2013 02:59 PM, Joseph Areeda wrote: I need some advice on how to turn on RandR. I have a few systems with nVidia GPU 5xx and 6xx series, latest kmod drivers, multiple monitors with Xinerama enabled. Newer systems work fine, but I have one that has been upgraded since before the two were compatible. I have libXrandr installed but it doesn't seem to be enabled. This reminds me of: [xkcd graphic] Would someone point me to a link that explains what I have to do? Thanks, Joe
sata0 is not sda
During a kickstart install, how are drives mapped? I notice that sata0 is not always sda. This is especially true when there are very large drives in the mix.
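One way to take enumeration order out of the equation is to address drives by their stable names instead of sdX; the by-id value below is the one from the kickstart thread earlier in this archive, so treat the exact spellings as examples:

```shell
# See what stable names the installer environment has for each drive
# (switch to a shell with alt-F2 during the install):
ls -l /dev/disk/by-id/      # persistent, model+serial based
ls -l /dev/disk/by-path/    # persistent, controller/port based
# Then pin the install to one drive in the kickstart, e.g.:
#   ignoredisk --only-use=disk/by-id/ata-SATA_SSD_96D70756062400160297
```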