Re: New Nashua users group meeting January 8th

2012-12-23 Thread Ralph A. Mack
 I would like to start up a new Linux users group in Nashua. I have asked the 
 board members at MakeIt Labs (I am a member) if I could use the space as a 
 meeting place, and they were okay with the idea. 

Very, very cool. Finally a LUG meeting I can actually potentially get to! :)  
(The local ham radio club will have to do without me this month ;) )

Ralph




Re: No-brainer backup from Linux to space on remote drive?

2012-02-15 Thread Ralph A. Mack
Thanks folks,

As usual, one size does not fit all, so thanks for the variety of answers. 

I'll use DejaDup for my laptop, as Stephen mentioned. I deliberately keep my 
day-to-day systems from carrying a lot of system configuration worth 
duplicating, so I should be able to rebuild the laptop from a raw OS reinstall 
pretty trivially. DejaDup just has to keep my home areas and the contents of my 
data drive safe.
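
(Under the hood, DejaDup drives duplicity, so if I ever need to poke at a backup
by hand, something along these lines ought to work. The paths are placeholders
for wherever the NAS share ends up mounted, and I haven't actually tried it:)

# Roughly what DejaDup does, driven by hand; assumes the NAS share is
# mounted at /media/nas (placeholder path).
duplicity --no-encryption --full-if-older-than 1M --exclude /home/ralph/tmp \
    /home/ralph file:///media/nas/laptop-backup

# Pull one file back out of the most recent backup:
duplicity restore --no-encryption --file-to-restore Documents/notes.txt \
    file:///media/nas/laptop-backup /tmp/notes.txt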

I'm liking this rdiff-backup arrangement for my DNS server. Thanks, Lloyd. 
There, everything is about system configuration, and recovery looks like this: 
switch everybody else's systems to do DNS from the router temporarily and get 
the bugger back up fast. 
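
Concretely, I'm picturing something like this on the DNS box, cribbed from
Lloyd's example. The share name and paths are made up, and I haven't tried it
yet:

# Back up the config-heavy bits of the DNS server to the NAS,
# assuming its share is mounted at /media/nas via CIFS (placeholder names).
mount -t cifs //nas/backup /media/nas -o credentials=/root/.nas-credentials
ionice -c 3 rdiff-backup --exclude-special-files \
    /etc /media/nas/dns-server/etc

# If the target were instead a box running sshd with rdiff-backup installed,
# the destination could be written in rdiff-backup's remote form, e.g.
#   root@backup-server::backup-dir/dns-server/etc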

I set up my DNS server so I can maintain static address allocations and systems 
can find each other by their assigned host names. It's an authoritative server 
for local addresses, behind the router, invisible to the external world. It 
points to the router to find outside addresses. (<grump>Why doesn't my router's 
DNS config support this feature?</grump>) Is there any reason I shouldn't make 
the NAS drive just get its DNS from the router rather than from the DNS server 
everything else is using? It seems sensible not to make its operation dependent 
on any system whose data it stores. I think it's just using DNS to get to the 
time server and the site from which it finds out about updates. The router 
should be fine for that. Does a Samba server care about DNS naming for its 
clients? I didn't think so.
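
(Back to the local DNS setup itself: for anyone curious, it's roughly this shape
in BIND terms, or the dnsmasq equivalent. The names and addresses below are
made-up examples, not my actual zones:)

options {
    // hand anything that isn't local to the router
    forwarders { 192.168.1.1; };
    forward only;
};

// authoritative only for the local names
zone "home.lan" {
    type master;
    file "/etc/bind/db.home.lan";
};

zone "1.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/db.192.168.1";
};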

Ralph

On Feb 14, 2012, at 18:06, Lloyd Kvam wrote:

 On Tue, 2012-02-14 at 15:16 -0500, Ralph A. Mack wrote:
 Backup for me is a practical necessity rather than a life project, so
 I want something that just works, errs on the side of caution, doesn't
 require continuing attention and maintenance, etc.
 
 I use rdiff-backup.  The current files are in place on the backup along
 with a change history (rdiff deltas).  You'll need to resort to the
 rdiff-backup command line if you want to use the rdiff history to get an
 ancient version; however, the current version can simply be copied.
 
 This is what I use to back up my laptop:
 
 # ionice -c 3 means idle -- do io when system is otherwise idle
 
 ionice -c 3 rdiff-backup --exclude-other-filesystems \
   --exclude-special-files --exclude=**/tmp \
   --exclude=**/var/tmp / /media/backup/venix-laptop
 
 My backup drive is mounted locally, but rdiff-backup will use ssh to
 back up over your network connection.  Change /media/backup to something
 like root@backup-server:/backup-dir
 
 I've convinced myself that ionice actually speeds up the backup by
 avoiding conflicts accessing the drive.  I did not make any careful
 measurements to back that up.
 
 -- 
 Lloyd Kvam
 Venix Corp
 DLSLUG/GNHLUG library
 http://dlslug.org/library.html
 http://www.librarything.com/catalog/dlslug
 http://www.librarything.com/catalog/dlslugsort=stamp
 http://www.librarything.com/rss/recent/dlslug
 




Re: No-brainer backup from Linux to space on remote drive?

2012-02-15 Thread Ralph A. Mack
On Feb 15, 2012, at 14:25, Alan Johnson wrote:

 On Tue, Feb 14, 2012 at 5:55 PM, Ralph A. Mack ralphm...@comcast.net wrote:
 If backup (or any act of maintenance) is something I need to remember to do, 
 it will never happen. If it's something I can set up once and then forget 
 about for a few years, that'll work...
 
 With the other clarifications in this thread, I think you are on the right 
 path for your goals.  However, unless I am missing your hyperbole, you are 
 dooming yourself if you plan to forget about your backups for a few years.  
 At the very least, you need to check it once in a while to make sure it is 
 still running as expected, no matter what solutions you go with.  

Yeah, I probably didn't say exactly what I meant, just what I wished I could 
mean. :) The key thing is that I can be reactive at need rather than proactive. 
Email is a good tool to tell me I'd better take a look. That'll work. Like 
several years ago when my son was playing by himself in the other room and then 
things got a little _too_ quiet and I had to go see what he was up to. :) 

Since I'm setting up daily backup and I generally log myself out, I can have my 
systems tell me the date of the last successful backup when I log in, too. If 
the backups go down and I haven't logged in since then, I haven't been 
generating any new data to back up. I can fix them before I start working again 
and be ok. That's probably the lowest life-impact solution, but it'll take a 
little more work to set up, so I'll probably go the email route even though it 
means a bit more daily spew from several systems to glance at and toss out.
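
The email route barely needs any setup: cron already mails whatever a job prints
to the crontab's owner, so a crontab along these lines would do. The address,
times, and paths here are placeholders:

# Daily rdiff-backup at 3:30am. --print-statistics guarantees some output,
# so cron sends a short mail every day; errors show up in the same mail.
MAILTO=ralph@example.com
30 3 * * * ionice -c 3 rdiff-backup --print-statistics /home /media/nas/laptop-home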

One characteristic of all us tech folks, I think - we'll put an amazing amount 
of effort into all sorts of Rube Goldberg devices to afford us the sheer luxury 
of being magnificently lazy. :) Here I'm protesting that I won't put a lot of 
effort into setting up backups, yet I'm already thinking about what I'd do for a 
shell script to scrape the logs, determine success or failure, and then flash it 
up on the screen at login. I think it's a form of madness.
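
Something like this is what I mean, sketched off the top of my head and
untested. The cron job drops a timestamp when it succeeds, and a little profile
script nags me at login if that goes stale (paths and the three-day threshold
are arbitrary placeholders):

# Tacked onto the end of the backup line in the crontab (note that % has to
# be escaped as \% inside a crontab entry):
#   ... && date +\%F > /var/local/last-backup-ok

# /etc/profile.d/backup-report.sh -- sourced at login
stamp=/var/local/last-backup-ok
if [ -r "$stamp" ]; then
    echo "Last successful backup: $(cat "$stamp")"
    # find prints the stamp only if it was modified in the last 3 days;
    # empty output means the backups have gone stale.
    if [ -z "$(find "$stamp" -mtime -3)" ]; then
        echo "WARNING: backups look stale, go check them."
    fi
else
    echo "WARNING: no record of a successful backup on this machine."
fi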

Thanks,
Ralph



No-brainer backup from Linux to space on remote drive?

2012-02-14 Thread Ralph A. Mack
Hi folks,

I just had to replace my backup drive for my main development system here at 
the house and I replaced it with a 2 TB WD network drive. Now that all my 
systems can see it, I'd like to do reasonable backups for all the systems, 
including my Linux boxes. (With 2 TB, I'm not too concerned about running out 
of space. :) ) 

I don't want to take a lot of time studying the problem or fiddling with a lot 
of options. I'd rather do my creative stuff than spend my life doing IT. (I 
switched from Gentoo to Ubuntu for a reason. :) ) Backup for me is a practical 
necessity rather than a life project, so I want something that just works, errs 
on the side of caution, doesn't require continuing attention and maintenance, 
etc. So I turned on Time Machine from my Mac. What can I use that will provide 
comparable simplicity for my Linux boxes? Do any of them also have a reasonable 
Windows port? (My witless Atom netbook is running Windows 7 Essentials and my 
Mac has a bootcamp partition...)

Ralph




Re: No-brainer backup from Linux to space on remote drive?

2012-02-14 Thread Ralph A. Mack

Thanks, all,

I looked through the suggestions. Remote backup turned out to be something 
different from what I had in mind. BackupPC is expected to sit on a centralized 
server and so, it would seem, is rsnapshot, or at least rsnapshot uses Linux 
file system properties to optimize storage.

I try to run my domestic LAN on a self-service basis so I don't have to come 
home from a day of programming to be the domestic systems admin, i.e. the 
bottleneck. I'm providing a NAS drive with private areas for the individuals in 
the house. My notion is that they can use any backup tool they like locally on 
their systems to push their data onto the provided NAS area. As long as the NAS 
drive doesn't become inaccessible, it doesn't become my problem. :) Of course, 
if they ask, I can suggest tools they might want to learn about and use. This 
is very different from an office, where it's somebody's job to do this stuff.

So they've got three Windows machines between them to worry about. I've got a 
handful of boxes including two or three running Linux. For each Linux box, I'm 
just looking for a daemon, running as a service, that does periodic incremental 
backups of user data and system configuration behind the scenes, pushing the 
bits to a NAS drive and using the NAS storage area to keep track of where it is 
in the backup cycle. If it saves enough that I can reconstruct the system more 
or less as it was if the hard drive crashes, I'm happy. 

If backup (or any act of maintenance) is something I need to remember to do, it 
will never happen. If it's something I can set up once and then forget about 
for a few years, that'll work. I know that's not the attitude of an IT 
professional, but home is where I come to leave my profession behind for a few 
hours and use my computers to make art and music and stories and write essays 
and plan the revolution :), using open source tools wherever I can.

Can I get rsnapshot to do the kind of thing I'm talking about without writing a 
lot of additional scripting, or is there a better tool for this kind of 
operation?
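
For concreteness, here's roughly the shape of thing I'm imagining, pieced
together from skimming the rsnapshot docs. Paths, schedule, and retention counts
are placeholders, and I haven't verified any of it:

/etc/rsnapshot.conf (excerpt; fields must be separated by tabs, not spaces):

    snapshot_root   /media/nas/snapshots/thisbox/
    interval        daily   7
    interval        weekly  4
    backup          /home/  localhost/
    backup          /etc/   localhost/

root crontab, with cron playing the daemon (the weekly rotation deliberately
runs just before Monday's daily run, following the rsnapshot docs):

    0 3 * * 1    /usr/bin/rsnapshot weekly
    30 3 * * *   /usr/bin/rsnapshot daily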

Ralph

On Feb 14, 2012, at 16:38, Alan Johnson wrote:

 On Tue, Feb 14, 2012 at 3:57 PM, Shawn O'Shea sh...@eth0.net wrote:
  I've heard good things about BackupPC, but never personally tried it. It 
 supports Linux/Win/OSX, and is designed for backing up to servers (your 
 network drive in this case). It has a Web GUI.
 http://backuppc.sourceforge.net/info.html
 
 
 Use it at home and probably will at work soon.  Love it.  Does take a bit of 
 configuring, but the GUI walks you through it.  It uses hardlinks to 
 de-duplicate any files that are identical (uses a hash database) so it can be 
 pretty hard on inodes.  Best to mkfs with small-file settings or use a file 
 system that has infinite inodes.  I use reiserfs on my backup drive for this 
 reason.  Is it XFS that also has no inode issues?  I stick with reiser 
 because I don't want to piss him off.
 
 Another trick I've learned is to make sure you have stable/static IPAs: 
 either configure them statically on each machine or have your DHCP server 
 assign them statically by MAC address if it allows that (much nicer IMHO).  
 This only matters if you are using one 
 of the transfer options that use SSH since it relies on key changes. I think 
 there is a way to tell SSH/rsync to ignore key errors, but then you might 
 mess up your clients.  I have not seen a way to uniquely identify a server 
 other than by name/IPA.
 
 ___
 Alan Johnson
 a...@datdec.com
 
 
 



Re: twitter vs identi.ca

2010-01-28 Thread Ralph A. Mack
One intriguing characteristic of short messages is that they force 
non-live discussion into a give-and-take mode resembling live 
conversation rather than the lecture-followed-by-Q&A mode characteristic 
of blogs and most email.

I've been playing with the notion of writing a plugin for Firefox or 
Chrome that tracks the size of emails I'm writing in tweets (SMS 
160-character units) and slowly changes my screen background from white 
via various shades of pink and salmon to a very angry shade of red as 
the tweet-count increases, in order to curb my habit of putting every 
thought I have on a topic into every communication. :)

I'm bracing myself to encounter widespread encouragement from those who 
have read my prior posts to this list or have the misfortune of being in 
my address book.  ;)

Ralph


Re: FWIW: The bigger picture... Or why I have been asking a lot of questions lately...

2009-10-13 Thread Ralph A. Mack
Bruce Labitt bruce.lab...@myfairpoint.net wrote:
 
  What I'm trying to do:  Optimizer for a radar power spectral density 
problem
 
  Problem:  FFTs required in optimization loop take too long on current
  workstation for the optimizer to even be viable.
 
  Attempted solution:  FFT engine on remote server to reduce overall
  execution time
 
  Builds client-server app implementing above solution.  Server uses
  OpenMP and FFTW to exploit all cores.
[...]
  Implements better binary packing and unpacking in code.  Stuff works
 
  Nit in solution:  TCP transport time > FFT execution time, rendering
  attempted solution non-viable
 
[...]
  Hey, that is my bigger picture...  Any and all suggestions are
  appreciated.  Undoubtedly, a few dumb questions will follow.  I appear
  to be good at it.  :P  Maybe this context will help list subscribers
 frame their answers if they have any, or ask insightful questions.

I don't understand anything about your domain of application,
so take this for what it's worth...

I've gleaned the following from the previous posts. Is it a fair summary?

- The local FFT is taking ~200 ms, which isn't fast enough.
- The remote FFT is substantially faster than this once the data gets 
there.
- However, it takes substantially longer (~1.2 seconds) to move the data
  than to process it locally.

What does "fast enough" mean here? What is your time budget per data set?
Is it only constrained by catching and cooking one data set before it is
overwritten by a new one (or before you choke on the stream buffers :) )?
Are there latency/timeliness requirements from downstream?
If so, what are they?
Provided your processing rate keeps up with the arrival rate,
how far behind can you afford to deliver results?
(i.e. how much pipelining is permitted in a solution?)

How fast is the remote FFT? I didn't catch a number for this one.
Or was the 200 ms the remote processing time?
(In which case, what's the local processing time?)
Do you have the actual server you're targeting to benchmark this on?

Answers to these would help frame the external requirements more clearly.

You've stated the problem in the implementation domain.
It sounds like your range of solutions could leave very little headroom.
My instinctive response is to ask,
"Is there a more frugal approach in the application domain?"

Do you need to grind down the whole field of potential interest?
Are there ways to narrow and intensify your focus partway through?
Perhaps to do a much faster but weaker FFT,
analyze it quickly to identify a narrower problem of interest,
and then do the slower, much stronger FFT on a lot less data?
Reducing the data load for the hard part may help with on-chip or
off-chip solutions. It may also help to identify hybrid solutions.

Alternatively, a mid-stream focusing analysis might be so expensive
as to negate the benefit, or any performant mid-stream analysis might
be merely a too-risky heuristic, or the problem may simply not lend
itself to that kind of decomposition. You did say that you had already
encountered a number of dead-ends - this may be familiar ground :)

I don't know your domain. I don't have answers, just questions.
I just figured those kinds of questions were worth asking
before we try squeezing the last Mbps out of the network...

Lupestro
