Re: [Discuss] large log file transfer
From memory, so there might be a typo or two. Often gzip will reduce log files greatly in size. For that, all you need is:

gzip -9 logfile.ext

transfer logfile.ext.gz, and on the receiving end:

gzip -d logfile.ext.gz

Assuming you actually need to transfer in small chunks:

split -b 10m logfile.ext logfilesplit
for f in logfilesplit* ; do gzip -9 $f ; done

transfer logfilesplit*.gz. On the receiving end:

for f in logfilesplit*.gz ; do gzip -d $f ; done
cat logfilesplit* > logfile.ext

- Original Message - From: John Malloy jomal...@gmail.com To: discuss@blu.org Sent: Tuesday, June 23, 2015 8:44:19 AM Subject: [Discuss] large log file transfer I have a log file (325MB) that I need to transfer from a restricted network that I cannot plug a USB drive into. Is there an easy way to split up the log file into smaller chunks and zip it to get it over the net? Thanks! John Malloy jomal...@gmail.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
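The recipe above can be sketched end-to-end with a checksum, so the receiving side can confirm the chunks reassembled correctly. A hedged sketch with scaled-down sizes (the file names, the 100k chunk size, and the 1 MB stand-in for the 325 MB log are all arbitrary examples):

```shell
#!/bin/sh
# Demo of compress -> split -> transfer -> reassemble -> verify.
cd "$(mktemp -d)"
head -c 1000000 /dev/urandom > logfile.ext   # stand-in for the real log
md5sum logfile.ext > logfile.md5             # checksum of the original

gzip -9 logfile.ext                          # produces logfile.ext.gz
split -b 100k logfile.ext.gz logchunk.       # real logs might use -b 25m

# ... transfer logchunk.* and logfile.md5 ... then, on the receiving end:
cat logchunk.* > logfile.ext.gz
gzip -d logfile.ext.gz
md5sum -c logfile.md5                        # "logfile.ext: OK" on success
```

Splitting the compressed file (rather than compressing each split) means fewer commands to run; either way, the md5sum check catches any chunk lost or corrupted in transit.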
Re: [Discuss] memory management
My advice on Firefox is to close it down completely whenever you have finished using it. There seems to be a lot of memory leaking; I've seen Firefox grow to a multi-gigabyte virtual size in a matter of hours. I have not experimented with shutting down Firefox with multiple windows and tabs and then restarting it using the Restore Previous Session option on the History pull-down. It would be interesting to see how much memory was saved, and for how long. Jerry Natowitz ===j.natowitz (at) rcn.com if rcn.com bounces, try gmail.com On 06/19/15 11:02, Steve Litt wrote: On Fri, 19 Jun 2015 10:01:57 -0400 Matthew Gillen m...@mattgillen.net wrote: I'm looking for some advice on tuning my linux box's memory management. I've got an older workstation that has merely 4GB of memory. If I try to run Firefox and a few java apps (e.g., Eclipse), my machine thrashes about and effectively locks up because of out-of-memory issues. For example: the mouse will continue to move, but won't change its icon contextually. If I hit ctrl-alt-F2 and try to log in to a virtual console, mgetty will eventually ask for the username, but after I hit enter it just hangs, not popping up the password prompt, and after 60 seconds the login times out. Trying to ssh into the machine from somewhere else ends up timing out. After going on like this for literally 10 minutes, OOM-killer sometimes kills the right thing (one of the two processes hogging the most memory: firefox or eclipse), and the machine becomes usable again sometime later. I have heftier workstations I can use, but this behavior is really frustrating to me, because I'd like to think linux does good memory management. I've tried using huge swap (2x physical memory). I've tried with virtually no swap (on the theory that without swap there would be no thrashing, and at least oom-killer would have to do its thing without locking up the machine for 10 minutes first).
The problem there was oom-killer making bad decisions about what to kill (e.g., the window manager, and then whatever out-of-control process is sucking up memory just sucks up whatever got freed, and nothing gets better). At least with some swap, oom-killer seems to make better guesses about who to murder. Does anyone have any tips on how to prevent linux from thrashing like that? The behavior when low on memory seems atrociously bad. Thanks, Matt Hi Matt, I haven't seen any stats quoted in your email, from the top program, that indicate it's a RAM problem. Firefox and its pet plugin-container use a heck of a lot of CPU. Until very recently I was using a 4GB machine, and when things got crawly, the top program indicated that both my cores were near 100%, but there was plenty more RAM. Today I have a 16GB RAM box with a dual core CPU (I wanted things to stay cool), and things still get crawly. When they do, I run the top command to see what's taking all the CPU, and kill it if necessary. It's usually one or more instances of Firefox and plugin-container. I typically killall plugin-container, and then start closing no-longer-needed tabs in the various Firefox windows. I'll often drag all the tabs to *one* Firefox window, and kill the others. I like Firefox, but it's no doubt a pig. My recommendation when using Firefox is to close any tabs you're finished with. Often, good housekeeping with Firefox is the key to avoiding the crawlies. When the crawlies rear their ugly head, my first step on the path is the top command, to see who is consuming what resource, and what resource is becoming a choke point. Then I take care of the choke point, and if it involves Firefox at all, I'm ruthless in closing tabs I'm finished with.
SteveT Steve Litt June 2015 featured book: The Key to Everyday Excellence http://www.troubleshooters.com/key ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
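One knob relevant to Matt's original OOM complaint: Linux lets you bias the OOM killer per process via /proc/<pid>/oom_score_adj, so the browser gets killed before the window manager does. A minimal, Linux-specific sketch (the value 500 is an arbitrary example; -1000 exempts a process, 1000 makes it the first victim):

```shell
#!/bin/sh
# Raise this shell's OOM score; children (e.g. a browser launched from
# here) inherit it, making them preferred OOM-killer targets.
# Unprivileged processes may raise, but not lower, their own score.
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```

Launching Firefox or Eclipse from such a shell makes them the processes oom-killer reaches for first, instead of mgetty or the window manager.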
[Discuss] Multiple submissions of resume by recruiters and hired.com
Hi, A friend has suggested that I try hired.com to find a new position. I looked into it and wondered how well I could control the process of submitting my resume to prevent what I have always heard is the kiss of death: two or more recruiters submitting your resume to the same client. My friend assured me that I would be able to control the dissemination of my resume by hired.com; and besides, given the state of the market, multiple submissions are common and handled by giving the commission (if hired) to the first one to submit. Is this really true? And has anyone had experience with hired.com? -- Jerry Natowitz ===j.natowitz (at) rcn.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Multiple submissions of resume by recruiters and hired.com
Because the recruiter who submitted my resume is entitled to an employer paid fee of anywhere from 7% to 30% or more of my starting salary. Employers do not want to be in a position where they can be sued by the recruiter(s) that don't get their fee, but believe they are entitled to it because they also submitted my resume. Or so the story goes ... Jerry Natowitz ===j.natowitz (at) rcn.com On 06/03/15 11:41, Mike Small wrote: Jerry Natowitz j.natow...@rcn.com writes: A friend has suggested that I try hired.com to find a new position. I looked into it and wondered how well I could control the process of submitting my resume to prevent what I have always heard is the kiss of death: two or more recruiters submitting your resume to the same client. I don't understand. Why does this matter (so much)? Someone won't hire you because they saw your résumé come at them from two sources? ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Libre Planet this weekend
Just a double check. Is that really 9:45 am EST or is it 9:45 am EDT? Jerry Natowitz ===j.natowitz (at) rcn.com On 03/20/15 13:42, Greg Rundlett (freephile) wrote: It's Libre Planet time... if you care deeply about technology freedom, but you can't make it to MIT, then watch the free live streaming setup at http://libreplanet.org/2015/live/ Keynote with Richard Stallman starts tomorrow at 9:45 am EST Also, I'm driving in solo from Salisbury, MA. If you're north or south of me and would rather ride-share then let me know. Greg Rundlett http://eQuality-Tech.com http://freephile.org ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] RCN dhcp problem fixed, root cause a mystery
So the problem I was having, where neither of my two previously working wireless routers could get a DHCP response from the RCN cable modem, has been fixed. I needed two things: 1) Time. I really did need to power off everything for 20 minutes. 2) Isolation of the RCN-provided Actiontec MoCa bridge. The latter is of interest. When I was talking with RCN's tech support on Sunday, I made the mistake of not making a map of how things were interconnected (or the far easier step of taking a bunch of photos). One of the things I ended up doing was removing a little gigabit switch, because it only had two active ports: the MoCa bridge and the wireless router. Well, it seems that the MoCa bridge does something when plugged directly into either of my wireless routers that it does not do when a wired switch is placed between them. I told RCN about this, but the tech had no idea why that mattered. Anyone have any ideas? I could try seeing if wireshark can pick up anything on the switch. BTW, the problem with the routers is not just DHCP on the WAN side; they are unable to deal with any traffic on the wired side. Wireless seems to be unaffected. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Very odd RCN behavior, PC can connect, but not routers
Every few years RCN seems to throw me a curve ball. Several years ago it was the mysterious bad provisioning that would keep getting applied to my modems. Now, I have a new puzzler. As of this morning, the Arris cable modem I rent from them refused to give my two month old Netgear R6300V2 a DHCP address on the WAN side. Went through the usual: disconnect, power down, power up in sequence. No luck. Switched the router with my old TP-LINK WR1043ND. Same behavior. Connect the PC directly to their modem and voila! DHCP worked. So they say there is no problem with their equipment. I don't have an easy way to capture traffic between the routers and the modem and compare it to the traffic between the PC and the modem. I've tried using IP unicast rather than multicast (an option on one of the routers), no luck. I tried having a router use the same MAC as the PC (with the PC's interface offline), no luck. Any suggestions? Jerry Natowitz ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Very odd RCN behavior, PC can connect, but not routers
No, because since I posted I have been able to connect. Both routers have the same behavior. They can connect once, but thereafter the modem will not respond to further DHCP requests. I found the only way to get that first response is to change the router's WAN MAC to a new value. Fortunately, I have a bunch of MAC addresses in /etc/ethers going back a number of years. This is clearly (to me, at least) some change in RCN provisioning and/or firmware. But they deny anything changed. - Original Message - From: Edward Ned Harvey (blu) b...@nedharvey.com To: Jerry Natowitz j.natow...@rcn.com, discuss@blu.org Sent: Sunday, November 23, 2014 7:19:22 PM Subject: RE: [Discuss] Very odd RCN behavior, PC can connect, but not routers From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss- bounces+blu=nedharvey@blu.org] On Behalf Of Jerry Natowitz Any suggestions? Maybe the WAN jack of your Netgear went bad? ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Is ext4 still dangerous for vmware client?
I'm thinking of taking the plunge and setting up a Linux client on Windows 7. A while ago I read that ext4 for certain, and possibly ext3, had problems with corruption when used within vmware clients. If that was true, is it still the case? Also, should I be looking at using LVM so that I can more easily migrate to larger disks? ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] When should a new thread be initiated?
Not talking about process threads, just wondering when a discussion thread is so off-topic that it should be renamed and promoted to a new high level thread. Not that there is a current discussion in that category :-) -- Jerry Natowitz ===j.natowitz (at) gmail.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] memtest86+ oddities
I'm actually partial to the old memtest86: http://www.memtest86.com/ The 4.0 version will run most tests multi-threaded on multi-CPU/core systems. I have found that some older systems will not boot 4.0, so 3.5 is also in the iso file. Jerry Natowitz ===j.natowitz (at) gmail.com On 12/27/12 21:50, Tom Metro wrote: While fixing up a Windows installation on some laptop hardware I was attempting to use the Windows installer, which consistently crashed with an error message that the Internet says is indicative of a memory error. So I grabbed an Ubuntu CD I had handy, which happened to be the 12.10 CD from the last BLU meeting, looked up how to get into memtest (you need to hit ESC when the splash screen first appears), and kicked off a test. Around the 130 M mark on test #7 it started spewing out thousands of errors. I swapped the order of the two 1 GB modules. Same result. Tried each module by itself. Ditto. Different slots. Same. And then I tried a 512 MB module I had. Same result. It was looking like the motherboard had a fault, but I happened to try an Ubuntu 12.04 CD I had handy. Magically that fixed it. It ran overnight testing the original 2 modules without error. Both CDs use the same version of memtest86+, 4.20. I wonder what the difference is? Corrupt CD? (Both are official, mass produced Canonical CDs.) (The crashing installer was apparently due to the installer not recognizing the SATA controller, rather than a memory problem.) -Tom ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Help with soldering
Hi, I was given an iPod Nano 4th Gen with a dead battery. I bought a new battery and then learned that it needs to be soldered onto the systemboard. I never got the hang of soldering, especially 1 mm pads. Anyone not too far from South Brookline who would help? -- Jerry Natowitz ===j.natowitz (at) gmail.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Getting OS/HW details?
I'd take a look at the perl script memconf and see how it works. Even though it was written for Solaris, it does a decent job on Linux. It does like to be run as root, however. http://www.4schmidts.com/unix.html There is a package lshw on Fedora (among others); you could look at its source. decode-dimms, a perl script in lm-sensors, is another good source. It also wants to be run as root, and requires the eeprom kernel module to be loaded. Jerry Natowitz ===j.natowitz (at) gmail.com On 11/02/12 13:09, Scott Ehrlich wrote: If I wanted to write a script to obtain distro flavor (Ubuntu, CentOS, RH, Mint, BSD, Solaris, etc), major/minor version (5.3, 10.6, etc), hardware brand/make/model, at least for starters, what would be the best way to attack it? This script may or may not assume being run as root. Environment is completely heterogeneous, so while I may be using an OEM system, my officemate might be using a white box system. I think the only assurance might be that it be run as /bin/sh so we don't have to worry about shells. We cannot assume /etc/motd, /etc/issue, or anything else exists in its out-of-box state (they could have been replaced with other text). I thought about uname -a, but it does not indicate OS distro nor version. Arch can only assist with 32/64 bit. Thanks for leads and ideas. Scott ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
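For the distro-flavor part of Scott's question, a /bin/sh sketch along these lines is one approach. /etc/os-release (the freedesktop standard) won't exist on older systems, hence the fallbacks; treat the file names here as the conventional ones and the whole thing as a starting point rather than a complete survey:

```shell
#!/bin/sh
# Best-effort distro/version detection without assuming bash or root.
if [ -r /etc/os-release ]; then
    . /etc/os-release                # defines NAME, VERSION_ID, etc.
    echo "${NAME} ${VERSION_ID}"
elif [ -r /etc/redhat-release ]; then
    cat /etc/redhat-release          # RHEL/CentOS/old Fedora style
else
    uname -sr                        # last resort: kernel name and release
fi
```

Note that sourcing /etc/os-release pollutes the shell's variables, so a real script would do it in a subshell.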
Re: [Discuss] hosts.equiv
If you are confident in the security of your local net and you don't want to use ssh, you have two choices: rsh, or running a daemon process on one or both systems.

To use rsh (works on Fedora and Slackware, can't vouch for anyone else):
1) enable xinetd (if not enabled) and then enable rsh
2) Create $HOME/.rhosts on all destination systems, adding + username
3) chmod 400 $HOME/.rhosts
4) add a line with rsh to the end of /etc/securetty

To use a daemon process:
1) Create /etc/rsyncd.conf -- do a man on rsyncd.conf. Here is mine:

uid=jerry
gid=jerry
[backup]
path = /usr4
strict modes = false
use chroot = no
max connections = 15
hosts allow = la-machine,opus,bnt,puma,ra
read only = false

2) Enable xinetd and the rsync service
3) On the sending systems, specify the target like valhalla::backup/la-machine

- Original message - Date: Fri, 14 Sep 2012 08:40:44 -0400 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of dan moylan j...@moylan.us) Subject: [Discuss] hosts.equiv To: boston linux and unix (blu) discuss@blu.org i have a script to rsync a number of directories between two computers on my local net and would like to avoid having to enter my password for each one. i thought i could do this using hosts.equiv, but it's not working for me. i solved this once before a number of years ago, but i'm undoubtedly forgetting something now. any help would be appreciated. tia, ole dan j. daniel moylan 84 harvard ave brookline, ma 02446-6202 617-232-2360 (tel) j...@moylan.us www.moylan.us [death to html bloat!] ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] samba3x on Fedora 13?
I have Samba 3.5.8 running from Fedora 13 packages:

# rpm -qa samba\*
samba-winbind-clients-3.5.8-75.fc13.i686
samba-winbind-3.5.8-75.fc13.i686
samba-client-3.5.8-75.fc13.i686
samba-3.5.8-75.fc13.i686
samba-common-3.5.8-75.fc13.i686

I could upgrade to 3.5.11-79.fc14 or even 3.5.15-74.fc15.1. Is it possible that you are looking only at the fedora repo, and not at the updates? - Original message - Date: Wed, 8 Aug 2012 23:06:07 -0400 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of John Abreau abre...@gmail.com) Subject: Re: [Discuss] samba3x on Fedora 13? To: Scott Ehrlich srehrl...@gmail.com Cc: blug discuss@blu.org Installing *anything* recent on a system so many revisions out of date will likely take much more than minimal effort. The current revision, Fedora 17, comes with samba 3.6.1, and in a couple of months Fedora 18 is expected to ship with samba 4. A pre-release version of samba4 is available for Fedora 17. On Wed, Aug 8, 2012 at 8:25 PM, Scott Ehrlich srehrl...@gmail.com wrote: Is it possible to install samba3x on Fedora 13 with minimal effort? It appears to be readily available for RHEL/SL/CentOS, but not Fedora. Thanks. Scott ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss -- John Abreau / Executive Director, Boston Linux & Unix PGP KeyID: 32A492D8 / Email: abre...@gmail.com PGP FP: 7834 AEC2 EFA3 565C A4B6 9BA4 0ACB AD85 32A4 92D8
[Discuss] Using raw host hard disk in virtual client
I share a computer with my wife. She sticks to Windows 7; I generally use Linux. There are a lot of partitions on the system: NTFS, VFAT, and ext4. I would like to have a virtual Linux client running on W7 that I can either ssh to, or ftp mount partitions from a laptop. I've used VirtualBox at work to run Linux clients on Windows XP and vice versa. A Windows client on Linux works well, but a Linux client on Windows is very slow. Both use file based virtual disks. I looked into native disk partition support, but got the impression that it is not stable enough to use in a production/home environment. At best, one could copy a disk or partitions of a disk into a virtual disk to use on the guest. I used VMware a long time ago, but never used Xen. Any advice on how to proceed? -- Jerry Natowitz ===j.natowitz (at) gmail.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Using raw host hard disk in virtual client
I may not have been specific enough in what I want to do. I want the Linux client to be able to directly mount ext4 partitions, not to do raw I/O to partitions. Many decades ago (3) I worked on a project that did raw disk I/O. This was back in the days of washing machine sized drives with 200 or 300 MB removable disks (RM02/3, RP06). One day an operator put the wrong disk in, and all our sources were over-written by that day's data. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] Verizon and their mandatory data plan, redux
Looked into other carriers. Problem is, one of our phones is about 15 months into a 24 month contract. Would cost about $100 to cancel. Then there would be the costs of new phones if we didn't stay on CDMA, the fees for setting up new accounts, and then finally the monthly fees. We might have been able to save a few dollars, but the hassle factor didn't make it worth it. That, and T-Mobile, the logical choice, has really bad service in South Brookline. What really clinched the deal was finding that we can get a 22% discount on the $79.99 portion of our monthly bill because of my employer. That said, I did find something interesting: smartphones released before November 14th, 2008 can be activated without a data plan http://www.plazor.com/howto/how-to-activate-smartphone-on-verizon-without-data-plan/ Unfortunately my daughter didn't want any of them. So I bought a new Verizon Palm Pre 2 on amazon.com for about $90 and, aside from some weirdness like Verizon's Backup not working on it, she is quite happy. -- Jerry Natowitz ===j.natowitz (at) gmail.com ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Issuing the 'sync' command more than once (and a tangent on how not to run a high-tech company)
I suggest using a floppy disk or a slow USB flash drive as a test. You can write to it and then time how long a umount takes. You can then test it again, timing a sync or two (or three) and then the umount. - Original message - Date: Tue, 19 Jun 2012 14:11:39 -0500 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of Derek Martin inva...@pizzashack.org) Subject: Re: [Discuss] Issuing the 'sync' command more than once (and a tangent on how not to run a high-tech company) To: MBR m...@arlsoft.com Cc: L-blu Unix discuss@blu.org On Sat, Jun 16, 2012 at 12:59:44PM -0400, MBR wrote: On 6/12/2012 11:22 PM, Jack Coats wrote: In old SunOS days, we could issue the 'sync' command, twice, to ensure all system buffers had been written to disk. You could experiment to see if issuing it occasionally in your script helps. Or issue it outside the script; even in a cron job it might help. Actually, calling 'sync' multiple times from a script really won't help. To the best of my knowledge, no Unix kernel has ever contained code that counts the number of times sync() (the system call that the 'sync' command issues) has been called. The reason I was taught to do this differs from what you put forth, and regardless, it's certainly true that no modern Unix should ever require a user to run sync manually, except possibly in very rare circumstances. I don't claim to know the veracity of this, but I was taught (by a college professor who taught Unix system administration as a course, for whatever that's worth) that the reason to sync twice (not three times) is that, as you say, the first call to sync schedules the kernel to sync the buffers, but does not necessarily complete before the system call returns; however (as I was told) a SUBSEQUENT call to the sync() system call would block until any previously scheduled sync had completed. Thus, the completion of the SECOND sync command guarantees that the FIRST sync completed flushing the buffers to disk.
Now, I certainly have not spent the time to look at the code of any antiquated Unix kernels to confirm whether this was ever actually true, anywhere. And I don't intend to. But it's at least plausible that it was true at one point in some popular Unix. As you yourself said, for quite a long while now on Linux (since August of 1995), sync() actually does wait until the buffers are flushed. But even that is mostly irrelevant, as the kernel forces the buffers to be flushed periodically and flushes them prior to system shutdown (assuming it can, of course). -- Derek D. Martin http://www.pizzashack.org/ GPG Key ID: 0xDFBEAD02 -=-=-=-=- This message is posted from an invalid address. Replying to it will result in undeliverable mail due to spam prevention. Sorry for the inconvenience. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
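Derek's point that Linux's sync blocks until the flush completes is easy to observe: dirty a chunk of page cache and time the call, along the lines of the floppy/USB test suggested above. A rough sketch (the 64 MB size and /tmp path are arbitrary; on a fast disk both timings will be small):

```shell
#!/bin/sh
# Dirty some page cache, then measure how long sync takes to drain it.
dd if=/dev/zero of=/tmp/syncdemo.bin bs=1M count=64 2>/dev/null
t0=$(date +%s)
sync                               # blocks until the 64 MB is flushed
t1=$(date +%s)
echo "first sync: $((t1 - t0))s"
t0=$(date +%s)
sync                               # nothing left dirty; returns quickly
t1=$(date +%s)
echo "second sync: $((t1 - t0))s"
rm -f /tmp/syncdemo.bin
```

On a slow device the first sync visibly stalls while the second returns at once, which is consistent with sync waiting for the flush rather than merely scheduling it.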
Re: [Discuss] Versioning File Systems
You took my comment out of context: On 05/05/2012 03:13 PM, MBR wrote: In the context of software development, it is much more important to have a snapshot of ALL FILES at any point in time than one particular file, since they depend on each other so heavily. Versioning filesystems won't do that. I was responding to MBR. Versioning filesystems are not designed to allow you to see the state of the filesystem at any arbitrary point in the past. Aside from any other issues, I do not believe they have the ability to remove a file from the current view but leave all the generations of that file available for constructing a view of the past. - Original message - Date: Mon, 07 May 2012 11:13:25 -0400 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of Richard Pieri richard.pi...@gmail.com) Subject: Re: [Discuss] Versioning File Systems To: discuss@blu.org On 5/7/2012 8:22 AM, Jerry Natowitz wrote: So in essence, you want a filesystem that does the equivalent of taking a snapshot every time the filesystem changes. No. Saving or modifying a file on a versioning file system is equivalent to RENAME FILE.TXT FILE.TXT;23, where 23 is the next available version number, and writing out a new FILE.TXT. There is an optional step of removing old versions if there is a set limit to the number of concurrent versions allowed. It's really that simple. -- Rich P. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Versioning File Systems
To quibble a bit: You would only have 11 copies if the versioning file system didn't support generation limits, or the generation limit was 11 or higher. I worked with RSX-11M for most of the first decade of my career, and I found the following to be my friend: PIP *.*/PU:2 - Original message - Date: Thu, 03 May 2012 14:33:44 -0400 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of Richard Pieri richard.pi...@gmail.com) Subject: [Discuss] Versioning File Systems Cc: discuss@blu.org On 5/3/2012 12:13 PM, Gordon Ross wrote: No, but combined with an auto-snapshot service, I'd call it close. You would not get a new version on every file change, but one can make snapshots pretty frequently, i.e. every few minutes. Anyway, probably getting off topic here. Sorry. Not off topic for the list so I'll change the Subject. Snapshots aren't at all close to versioning. A versioning file system keeps (or can keep; one can usually configure how many versions to keep) every version of a file saved. File system snapshots get the file system state when the snapshots are made. For example: create a ZFS snapshot. Create a file. Edit it and save it. Repeat nine more times. Create another snapshot. How many versions of the file do you have? You would have just one on ZFS. You would have all eleven on a versioning file system. -- Rich P. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] streaming webcam
I've done some video acquisition at work. We have been using bt848 based PCI and PCIe cards for many years. They are cheap and fast, but since they are considered consumer grade, they are subject to change without notice. I'm not sure what we use as a camera with those boards, I only care that the pixels get delivered on time and in order :-) I suspect the security cameras may be a good choice. Jerry - Original message - Date: Tue, 24 Apr 2012 07:51:10 -0400 From: discuss-bounces+j.natowitz=rcn@blu.org (on behalf of Mark Woodward ma...@mohawksoft.com) Subject: Re: [Discuss] streaming webcam To: discuss@blu.org Also, I have a bunch of webcams, and the one main problem I see in them is that they are SLOW. With very little motion, they seem OK, but if you move them quickly they blur the image terribly. It is important that the effective shutter speed is quick enough that moderate motion of the camera does not blur excessively. Am I asking too much? Do you guys know of anything, I've been looking. I have more than half a dozen different webcams ranging from the built-in laptop cameras to various USB2 devices, which, when all is said and done, are basically logitech quickcams. On 04/24/2012 06:43 AM, Mark Woodward wrote: I am looking for a very low latency simple webcam program. I want to be able to see the video from my laptop on my android fairly snappy. Most of the things I've seen introduce about 1/2 to a full second latency. So when I move my hand, I see the delay. I want to get rid of that I need it to be mostly instantaneous. (1) is it possible with Linux and USB2 (2) If so what software (3) what camera. Anyone know? ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] Laptop memory card needs a good home
That may be the case for DDR3, given that you can buy 2GB DDR3 SODIMMs on eBay for about $8 including shipping, but the old DDR, which I guess we should call DDR1, topped out at 1GB, with the exception of some 2GB server DIMMs. The going price for a 1GB PC-3200 SODIMM is about $24, 512MB around $10. Of course you can question the wisdom of putting any time and effort into upgrading laptops that are at least 8 years old (like my Thinkpad T40 and T42). Jerry Natowitz ===j.natowitz (at) gmail.com On 04/07/12 09:47, Edward Ned Harvey wrote: From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss- bounces+blu=nedharvey@blu.org] On Behalf Of Nathan Meyers I have a laptop memory card that needs a new home. It's a PC2700 (CL2.5) 512MB DDR SO-DIMM, manufactured by TwinMOS. It worked when I took it out of my laptop awhile ago to upgrade - as far as I know it is still functional. If someone is willing to come pick it up in Woburn, let me know and it's yours. Eek... I regularly throw 1G and 2G modules in the garbage, as people upgrade their laptops to 4G or 8G... Even the users don't want the 1G or 2G modules for home, because buying new ones is so cheap... ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
[Discuss] What is my work title?
My current position is described as Engineer who wears many hats. Not a title I would use on my resume. A co-worker told me that there is a very specific title for the work I do, but he can't remember it. What I do, and have been doing for two companies since 2000, is manage the computer systems and operating systems that are incorporated into large manufactured systems. My first job involved working with Sun Microsystems and SunSoft. I managed overlapping migrations from UltraSPARC II and IIi based systems to UltraSPARC III, Solaris 2.5.1 to Solaris 8, and from OpenBoot 2.X to 3.X. Along with expected changes in software build procedures, software installation, and physical changes due to size differences, I also dealt with a number of systems issues: 1) We were experiencing a much higher rate of UltraSPARC IIi failure than SMCC would believe. After months of working with them, I discovered that they tested the CPUs in 64 bit mode, but we only ran 32 bit mode. 2) There was a PCI card developed in-house that would not work on systems using OpenBoot 3.X. It turns out that I/O space assignment for devices behind a bridge doesn't happen at boot time, but we were taking advantage of being the last card on the bus to help ourselves to memory. OpenBoot 3.X packed the PCI space, preventing our trickery. A redesigned card resulted. 3) Performance of certain floating point routines dropped by an order of magnitude on the UltraSPARC III. SunSoft provided unreleased compilers and libraries to help, but ultimately I realized that 32 bit floating point arithmetic took a lot longer than 64 bit. I re-wrote the routines to do 64 bit math on 32 bit operands and all was fine again. My current job started out similarly, but the issues are different: migration from Solaris 2.5.1 to Solaris 8 and then to Solaris 10, and dealing with serious USB bugs in the SunFire series. Later on we migrated to GNU/Linux on x86. ___ Discuss mailing list Discuss@blu.org http://lists.blu.org/mailman/listinfo/discuss
Re: [Discuss] What is my work title?
Of course I know what HR calls me (my job title, as opposed to work title): Senior Applications Engineer. Why would I post here to get that information?

Okay, maybe I should have said that my job is going away and I'd like something fairly descriptive on my resume that might cause a person or (more likely) a data mining program to notice me.

Jerry Natowitz ===j.natowitz (at) gmail.com

On 03/29/12 16:25, Richard Pieri wrote:
Your title has little bearing on the work you do. It's what your box on the org chart is called. It may also be a pay grade reference. Check with your HR people if you've forgotten your formal title.
[Discuss] NFS client difference between Solaris and Linux
I work with a number of Linux systems (Fedora 12) and Solaris systems (8 and 10). I want to be able to monitor the status of NFS mounts - sometimes systems are taken down while another system is actively using an NFS export from that system.

While writing a script to monitor the health of the NFS mounts on a system, I discovered that the following shell constructs work differently on Linux and Solaris 10 (haven't looked at Solaris 8 yet):

csh:
  if ( -d /home/foo ) echo foo
  if ( -e /home/foo ) echo foo
  if ( ! -d /home/foo ) echo no foo
  if ( ! -e /home/foo ) echo no foo

sh/bash:
  [ -d /home/foo ] && echo foo
  [ -d /home/foo ] || echo no foo

On Linux, the commands will cause the system to attempt the NFS mount, and return the status of that mount. If the server is offline, the mount attempt does not hang.

On Solaris, the commands will check to see if /home/foo is defined in /etc/auto_home and return the status of that lookup. No mount is attempted. Note that /home is defined the same on both systems:

  /home /etc/auto.home -nobrowse,retry=3,suid

If I use this test:

  [ -f /home/foo/. ] || echo foo

Linux continues to work as before. Solaris does actually attempt the mount, but if the server is offline, the mount attempt hangs.
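One way to sidestep the platform differences above is to run the probe under a time limit, so a dead server can't hang the monitoring script. This is a minimal sketch, not from the original post: `check_mount` and the 5-second limit are illustrative, and `timeout` is GNU coreutils (stock Solaris would need a substitute such as a backgrounded watchdog).

```shell
#!/bin/sh
# Probe an (auto)mounted path without letting a dead NFS server hang
# the whole script. Appending "/." forces the automounter to actually
# attempt the mount. The 5-second limit is an arbitrary example value.
check_mount() {
    if timeout 5 ls "$1/." >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "dead: $1"
    fi
}

# e.g.: check_mount /home/foo
```

Because the `ls` is killed after the limit, a hung mount reports as "dead" within 5 seconds instead of blocking indefinitely.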
[Discuss] Dealing with RCN
Hi,

You can either read my long tale of woe, or skip to SHORT:

We've had RCN phone/TV/internet service for a number of years. Starting about 3 years ago, the internet service started to have periodic bouts of intermittent outages. 90+% of the outages were short enough that by the time I got through to a technician, the problem had gone away. Somewhere along the way I found the address 10.16.48.1 being used to check for network status. A single ping wasn't enough to establish network problems, but

  ping -c 8 -s 1472 10.16.48.1

would let me know if I was completely offline, or if their network was dropping packets.

After a few months, the problems stopped, only to restart a year or so later when RCN discontinued the 7 Mbps service we had and quietly upgraded us to 10 Mbps, adding $10 a month to our bill. Again, after a few months, the problems stopped.

And then about 6 months ago they started again. This time they told me that the Motorola Surfboard SB5120 I was using was an obsolete piece of feces, and that I should replace it. So I went and bought a Toshiba PCX2600. But I didn't switch it in; I decided to wait until the next bout of problems.

The next bout of problems was a few weeks ago. I called, they took the MAC address of the Toshiba, I hooked it up, and 5 minutes later I was online. They were also supposed to upgrade me to 20 Mbps. I never saw my throughput increase, and last Friday the Toshiba stopped working. I called up and went through the usual power-cycle, then power-cycle with RF59 disconnected. They told me that they can see the MAC address, so the problem is on my side. They made an appointment for Thursday for a field tech to come look at things.

SHORT

In the meantime, the (new) Toshiba still isn't working, so I decided to try the (old) Motorola. It came right up. So my questions are:

1) Exactly what is done at the NOC to provision a new MAC address?

2) Should the two technicians (two separate calls) have realized that the MAC address of the Toshiba that they saw was not the address that was provisioned for my service?

3) I assume that when they provision a new MAC address, they remove the old one, but somehow that change went away, which is why my new modem doesn't work but the old one does. Is this a correct assumption?

I am really looking for the correct terminology to use when I attempt to deal with a supervisory level at RCN.

-- Jerry Natowitz j.natowitz (at) rcn.com
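The ping check described in the post can be scripted so a cron job gets a clean up/degraded/down answer instead of someone eyeballing the output. A minimal sketch, not from the post: `link_check` is a made-up name, 10.16.48.1 is the gateway address the post mentions (substitute your own), and the parsing assumes the Linux iputils ping summary line ("N% packet loss").

```shell
#!/bin/sh
# Summarize connectivity from one burst of full-size pings.
# 1472 data bytes + 28 bytes of ICMP/IP headers fills a 1500-byte MTU,
# which shakes out loss that small pings can miss.
link_check() {
    gw=${1:-10.16.48.1}
    loss=$(ping -c 8 -s 1472 "$gw" 2>/dev/null |
           sed -n 's/.* \([0-9]*\)% packet loss.*/\1/p')
    case "$loss" in
        "")  echo "down: no reply from $gw" ;;
        0)   echo "up: no loss to $gw" ;;
        100) echo "down: 100% loss to $gw" ;;
        *)   echo "degraded: ${loss}% loss to $gw" ;;
    esac
}

# e.g.: link_check 10.16.48.1
```

Logging the one-line result with a timestamp gives a history of the intermittent outages to show the provider.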
[Discuss] Moving myself from root to non-root
After over 15 years of being root on Slackware at home, I've decided that it is time to abandon both Slackware and being root 100% of the time. I've been using Fedora at work and on my laptops, but my mainframe is where I have 15 years of dot files and dot directories to deal with, not to mention files in /etc.

I know some of them, such as .Xdefaults and .xinitrc, will not make the transition. Others, such as /etc/X11/xorg.conf, have a few things (HorizSync and VertRefresh) that I carry along just in case. .cshrc, .login, .tcshrc, .bashrc, .bash_profile, .aliases, and a bunch more will have to be merged.

/etc/hosts is also good, since I've been using fixed IP addresses instead of letting the routers do DHCP and DNS. At some point, I'll get rid of the legacy equipment that can't reliably serve DHCP and DNS.

I'm most interested in Firefox 4 and Thunderbird 3. I'm using POP in Thunderbird, so sendmail, postfix, or another MTA is not necessary.

I looked through my /root directory for subdirectories called root; here is what came up:

  ./.wine/drive_c/windows/profiles/root
  ./.mozilla/root

I think .mozilla/root is from an earlier version (the last file is from 2007), and .wine I'll recreate when I need it.

I'm sure there will be some gotchas - anyone know of anything I'm forgetting?

-- Jerry Natowitz j.natowitz (at) rcn.com
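The dot-file move itself can be scripted: copy the candidates into the new user's home, skip the ones the post says won't survive the transition, and hand over ownership. A minimal sketch, not from the post: `migrate_dots` is a made-up helper, the skip list holds only the two files named above, and `cp -a` is the GNU form (use `cp -pR` on stricter systems).

```shell
#!/bin/sh
# Copy dot files from one home directory to another, leaving the
# known non-portable ones behind, then give them to the new user.
migrate_dots() {
    src=$1 dest=$2 user=$3
    skip=".Xdefaults .xinitrc"          # per the post, these won't transfer
    for f in "$src"/.[!.]*; do
        base=$(basename "$f")
        case " $skip " in
            *" $base "*) continue ;;    # leave X-specific files behind
        esac
        cp -a "$f" "$dest/"
    done
    chown -R "$user" "$dest"
}

# e.g.: migrate_dots /root /home/jerry jerry   (account name is hypothetical)
```

Files like .bashrc and .cshrc still need the manual merge described above; this only gets unmodified copies into place with the right ownership.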