Re: (OT) Does anyone use FOSS Virus Scan?

2007-07-20 Thread Jason Stephenson
Bill McGonigle wrote:

 
> Response time from the ClamAV team has been measured to be lower than
> commercial vendors in many cases (virustotal I think was the source
> on that) - that's not an issue to worry about.

I want to add that ClamAV has often caught stuff that Symantec running 
on our desktops did not even know about yet. ClamAV gets constant 
updates, and it is recommended that you update your definitions at least 
twice a day, though checking every other hour is not unheard of. You are 
lucky to get daily (or even weekly) updates from most commercial AV vendors.
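The every-other-hour cadence mentioned above can be automated with cron. A sketch of such a crontab fragment (the freshclam path is an assumption; check your install):

```
# Illustrative crontab fragment: refresh ClamAV signatures every two hours.
# The freshclam path and flags may differ by install; --quiet suppresses output.
0 */2 * * * /usr/local/bin/freshclam --quiet
```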


[Deletia]

> and there's a project to tag phishing and mail scams using ClamAV
> signatures.

I have never had a virus, worm, or Trojan get through ClamAV. I have 
reported a couple of phish emails that got through the filter in the 
past. In most of the cases, they were added to the definitions within 
hours of being reported. In a couple of cases, someone else beat me to 
reporting the same phish.

The phish check is optional. There's a configuration option to disable 
it. It is on by default, though, and I have found it catches more than 
the virus/worm/trojan definitions do on both of my mail servers.

Probably one of the coolest features of ClamAV is that it is written 
entirely in standard C. You can compile it and use it pretty much 
anywhere you have a C compiler.

Oh, and 0.91.1 just came out, so if you're running an older version, it 
is time to update.


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: (OT) Does anyone use FOSS Virus Scan?

2007-07-19 Thread Jason Stephenson
Another vote for clamav, here. Use it on all the mail servers I run.

I've never used it on a desktop machine, but have manually scanned stuff 
with it.

As mentioned earlier, there is a Windows version, too, called ClamWin.

It would also be useful to know how you intend to use it, because there 
may be something else better suited to your needs.

Cheers,
Jason


Re: Liberation Fonts?

2007-05-25 Thread Jason Stephenson
Marc Nozell wrote:
> On Wed, 2007-05-23 at 11:35 -0400, Ted Roche wrote:
>> Is that like Freedom Fries?
>>
>> Anyone tried this?
>>
>> https://www.redhat.com/promo/fonts/
>
> They are pretty nice.  I've installed them on my Ubuntu/Feisty laptop by
> just unpacking them into ~/.fonts and running fc-cache.  Rebooting would
> have worked too.

Seconded. I've installed them on my MacBook and started to use them in 
Open Office. Installing on the Mac is really easy, too. Just download 
the ttf tarball, open the tarball with Stuffit Expander, then open the 
fonts in the resulting folder. Font Book pops up and has a button to 
install the fonts. By default they are installed for the current user, 
but you can drag them to the Computer section and install them for 
everyone on the Mac.
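For the Linux route Marc describes (~/.fonts plus fc-cache), a minimal shell sketch; the tarball name in the comment is a stand-in, not the real Liberation file name:

```shell
#!/bin/sh
# Sketch of the per-user font install mentioned above (fontconfig systems).
set -e
FONT_DIR="${HOME}/.fonts"
mkdir -p "$FONT_DIR"
# Unpack the downloaded TTF tarball into place (file name is a placeholder):
# tar -C "$FONT_DIR" --strip-components=1 -xzf liberation-fonts-ttf.tar.gz
# Rebuild the font cache if fontconfig is present; a re-login works too.
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$FONT_DIR" || true
echo "fonts dir ready: $FONT_DIR"
```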

I like the look of the font. The characters have nice, clean lines.


 
> -marc



Re: [OT] Don't Download This Song - Weird Al

2007-03-30 Thread Jason Stephenson

Larry Cook wrote:
> Larry Cook wrote:
>> http://www.myspace.com/weirdal/
>
> Interesting!?  The trailing slash causes an error.  Need to use:


I don't know what's worse: the trailing slash causing an error or the 
fact that MySpace uses IIS. The latter likely has something to do with 
the former. (Yes, I've had to manage an IIS server in the past, but 
switched it to GNU/Linux and apache after about a year and none of the 
users noticed.)



I've long been a Weird Al fan. I'll Sue Ya is much better than White and 
Nerdy, though Weird Al does get all the references correct. I think, 
like Bare Naked Ladies, he's really a nerd, too.




Re: Henniker report - Re: [GNHLUG] A ham radio flea market - tomorrow, in Henniker

2007-03-18 Thread Jason Stephenson

Bill Sconce wrote:

> P.S. Another anecdote tickled me.  It was about a friend of a friend
> who supposedly took someone's money to install Vista on their PC.
> He played a trick: installed Ubuntu instead of The Genuine Advantage,
> told the client "you'll notice that it looks a little different from
> XP, but overall it's a lot better."  The client (supposedly) was
> quite pleased with the new "Vista," until he returned the money and
> 'fessed up... :) Dunno if it's true, but it's a nice story.


I heard the same thing online (believe it was /.), but it was a son who 
had been asked to install the Vista upgrade by his father, and no money 
was involved.


Dunno if it is true, but I suggested that we do this at work. No one 
would really notice the difference.



Re: x86 emulator for PPC Mac OS X?

2007-03-10 Thread Jason Stephenson

Paul Lussier wrote:

> Does anyone know a virtual environment for the PPC-based Macs?  I have
> a PowerBook G4 that I'd like to be able to play with some stuff on.
> Specifically, I'd like to play around with a couple of the BSDs and
> possibly some different Linux distros.


Have you tried just installing OpenBSD MacPPC and NetBSD MacPPC? They 
will probably work on your laptop, but you should check the 
compatibility matrix, first.


http://openbsd.org/macppc.html#hardware

It looks like OpenBSD will work on your PowerBook, so NetBSD will likely 
work as well.


http://www.netbsd.org/Ports/macppc/models.html#powerbook

You could always partition your drive and install the other OS in a 
multi-boot fashion. I've never done it on a Mac with OpenFirmware, but I 
hear it is easier than on the old NuBus PowerMacs where I have set up a 
dual boot configuration.





> I found QEMU in Darwin ports, but it doesn't support OpenBSD, which is
> the BSD I want to be mucking with right now.  VMWare seems to only
> support the newer Intel Macs, and PearPC seems to not support any of
> the BSDs.


Derek Atkins already mentioned VirtualPC, and I don't know of much else 
that you could try along those lines. I've not got much experience with 
such products other than setting up VMWare on a PC once.




> Am I SOL here?
>
> Thanks.




Re: Emacs: Multiple files in one buffer?

2007-03-09 Thread Jason Stephenson

[EMAIL PROTECTED] wrote:

> What I'd like to do is have
> each function in a C source file appear in different subfiles. But I
> want to be able to perform text operations over the whole bunch of
> them... query-replace, isearch-forward, etc.
>
> I don't think it would require any extra structure in the document.
> Emacs supports some kind of text attributes which are just slightly
> too mystical for my current understanding. They use these text
> attributes to implement the forms package for fill-out forms (maybe
> that's where I saw it?).  I could see giving a chunk of text in a
> buffer a source file attribute containing the name of the file whose
> contents that chunk of text represents.



For C functions it should be relatively straightforward assuming that 
each function gets its own .c file and that the name of the .c file is 
identical to the function name. You could add a c-mode-hook that 
overrides write-file and save-buffer commands to cycle through the 
functions in the buffer, make a region around the function and then 
write the region to a file using write-region. There are other 
possibilities, but this is probably the most efficient.


It would probably be best to implement the above as around advice for 
the functions in question and load the file containing the advice in a 
c-mode-hook. If you don't have it already, I suggest getting the Elisp 
info manual and reading the section on Advising Functions. It explains 
how to write advice functions to override or to extend the built-in 
functions.
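As an untested sketch of the write-region idea for a .emacs, assuming one top-level function per buffer chunk and a file named after the function (the command name and the one-defun-per-file convention are my inventions, not a real package):

```elisp
;; Hypothetical, untested sketch: write each top-level C function in the
;; current buffer to its own NAME.c file. Assumes cc-mode is loaded and
;; that file names should match function names.
(defun my-save-functions-to-files ()
  "Write each top-level function in the current C buffer to NAME.c."
  (interactive)
  (save-excursion
    (goto-char (point-max))
    ;; Walk backward through the defuns; beginning-of-defun returns nil
    ;; once there are no more to find.
    (while (beginning-of-defun)
      (let ((start (point))
            (name (c-defun-name))       ; function name at point (cc-mode)
            (end (save-excursion (end-of-defun) (point))))
        (when name
          (write-region start end (concat name ".c")))))))
```

Advice wrapping save-buffer, as described above, could then call this instead of the normal save.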


Also, if all of the .c files are in the same directory, you could add a 
new command to open all the .c files in a directory into the current 
buffer. Another possibility would be to open the .c files for the 
function signatures in a currently open .h file. With a little extra 
magic, you could have everything in one buffer and then save the .h 
stuff to the .h file and each function implementation to its own .c 
file. For bonus points, you could remove the old .c files of functions 
that were deleted or renamed.





> /me goes poking in the forms package info manual... falls asleep.


The forms package might give you some ideas, but it is probably overkill 
for something like this.



Re: I reallly do not understand what the issue is.....

2007-03-08 Thread Jason Stephenson

Actually, I don't understand what the issue is, either.

I manually updated my servers at work by writing a textual zoneinfo file 
with the proper configuration as described in the manual page for zic. I 
checked the contents of /etc/TIMEZONE or /etc/localtime and made sure 
that my input file was defining that zone--US/Eastern in most cases. I 
also made a link from that zone file to EST5EDT. Finally, after 
eyeballing it one more time, I ran zic to compile and install the binary 
zoneinfo files from the textual input. It took all of 15 minutes to 
patch the servers that needed it for the DST changes. I just have to 
make sure that they all get rebooted once before Saturday so that the 
server applications that only read zoneinfo on startup will re-read the 
files. (That has already been done.)
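The procedure above can be sketched in a few lines of shell. The Rule/Zone lines follow the zic(8) input syntax; the rule values mirror the 2007 US DST change (second Sunday in March, first Sunday in November), and the install paths in the comments are assumptions:

```shell
#!/bin/sh
# Sketch of the zic route described above: write a textual zoneinfo source
# file, then compile it with zic. Rule/Zone syntax per zic(8).
cat > us-eastern.zic <<'EOF'
Rule US 2007 max - Mar Sun>=8 2:00 1:00 D
Rule US 2007 max - Nov Sun>=1 2:00 0    S
Zone US/Eastern -5:00 US E%sT
EOF
# Compile into the system zoneinfo database (run as root; paths may differ):
#   zic -d /usr/share/zoneinfo us-eastern.zic
#   ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime
grep -c '^Rule US' us-eastern.zic
```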


I really can't understand all the fuss over something that we've had 
nearly two years to prepare for. Don't those people run a real operating 
system?


Frankly, the utter shite that people will put up with when it comes to 
computing never ceases to amaze me.



Re: Emacs: Multiple files in one buffer?

2007-03-08 Thread Jason Stephenson

[EMAIL PROTECTED] wrote:

> This message is addressed to all the Emacs gurus on the list...


Dunno if I'm a guru or not, but I've used GNU Emacs for 15 years and 
dabbled with X-Emacs briefly. My .emacs is only about 4K in size, but 
I've created a couple minor and major modes for various special file 
types. I'm also running the latest GNU Emacs (22.0.95) from cvs.



> I have a vague sort-of-almost memory of having read something about a
> major mode which will let you edit multiple files in a single buffer.
> I wondered how that could work, how emacs could know which part of the
> buffer belonged to which file, but I didn't think it was useful
> (indeed, I deemed it dangerous) at the time.


Closest thing to what you describe that I have heard of is 
HtmlModeDeluxe (http://www.emacswiki.org/cgi-bin/emacs/HtmlModeDeluxe). 
It allows you to run multiple major modes at once on different regions 
of a buffer, so that the different PHP, HTML and CSS sections have 
proper highlighting. However, all of those regions in the buffer come 
from one file.


I'm not sure that you could make different regions of a buffer point 
explicitly to different files. My experience with Emacs suggests that 
the notion of a buffer is very tightly integrated with the file system's 
understanding of a file.


That said, you can actually load multiple files into a single buffer, 
but the default save commands will save the entire buffer as a single 
file unless you have severely hacked the file save hooks for your 
current mode and have some clearly defined markers that Emacs could use 
to split the data apart. You'd also need some way to preserve the file 
names so that your mode would know how to save the various sections. 
Essentially, your buffer data would need to have some kind of structure 
to it, and your editing mode would need to enforce that structure in 
some way, most likely through custom navigation commands. Even if you 
were just editing free-form plain text files, your buffer would need 
special lines that would tell the file save hook where a new file begins 
and what that file's name is.
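Outside Emacs, the marker-line idea can be illustrated with a few lines of awk: each marker line names the file the following chunk belongs to. The `==> name <==` marker format is an arbitrary choice for this sketch:

```shell
#!/bin/sh
# Toy illustration: split one buffer-like stream back into files at
# marker lines. Marker format "==> filename <==" is a made-up convention.
cat > combined.txt <<'EOF'
==> a.txt <==
first file body
==> b.txt <==
second file body
EOF
# On a marker line, remember the target file; otherwise append to it.
awk '/^==> .* <==$/ { out = $2; next } out { print > out }' combined.txt
cat a.txt
```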


Generally, when I've needed to edit multiple files sort of 
simultaneously I've simply opened the various files and split the window 
however many times are necessary for portions of each buffer to show.



Re: Ripping from CD to FLAC, and then transcoding to lossy (was: More on MP3 and open formats)

2007-02-27 Thread Jason Stephenson

Ben Scott wrote:

> On 2/26/07, Jason Stephenson [EMAIL PROTECTED] wrote:
>> I'm already planning a nice utility for managing all of that, too.
>
> I'm starting to wonder if there isn't something out there already.
> I wrote my own collection of really bad hacks (shell and Perl scripts)
> to do this; I know others have; now you are, too.  It seems like
> there's an itch, here.


I've got a Perl (or is it sh?) script that I've used to rip CDs and 
convert the output to mp3.


However, this time around, I have grander ideas. I'd like to make 
something with a GUI front end that can be used to archive CDs in FLAC 
(that backup copy that fair use allows us to make) and convert the files 
to other, open file formats, such as Ogg* and MPC** (that format 
shifting thing we're also supposed to be able to do). There'd also be a 
cataloging/search feature and the ability to restore a "lost" CD from 
the archive. While I'm at it, it might as well be able to play the files.
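The archive-then-format-shift step could start life as a loop like the following dry-run sketch; the real `oggenc` invocation is left commented because encoder flags vary, and the file names are stand-ins:

```shell
#!/bin/sh
# Dry-run sketch of the FLAC -> Ogg Vorbis format-shifting step.
# The commented line assumes the reference `oggenc` tool from vorbis-tools.
mkdir -p archive shifted
touch archive/track01.flac archive/track02.flac   # stand-ins for ripped tracks
for f in archive/*.flac; do
    base=$(basename "$f" .flac)
    # oggenc -q 6 -o "shifted/$base.ogg" "$f"     # actual transcode
    echo "would transcode: $f -> shifted/$base.ogg"
done
```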


This will give me something to do with my new MacBook.

* From what I've read, Ogg may not be all that safe to use patent-wise. 
Is this just FUD, or is there something to it?


** MPC claims to be patent-free, but I'm not sure exactly how. It is 
using MPEG-1 layer 2 technology. I've not bothered searching for patents 
covering layer 2, and I have no idea how specific the Alcatel/Fraunhofer 
patent is.




Re: More on MP3 and open formats

2007-02-26 Thread Jason Stephenson
In reply to Ben's message about ripping to FLAC and then converting to 
other formats, I believe that is what I am going to do. I'll rip to FLAC 
and store the results on one of my PCs or possibly even burn them to 
DVD+R DL discs. Then, I'll convert the FLACs to Ogg when needed for 
greater file transportability.


I'm already planning a nice utility for managing all of that, too.

I'll see how using one of my PCs as a file server works for playing 
music on my LAN. It worked pretty well with MP3s, but FLACs are a bit 
bigger.


I like the hardware support that FLAC seems to have. I'm going to have 
to get a Phatbox for my car! ;)




Re: ARTICLE - ESR gives up on Fedora

2007-02-25 Thread Jason Stephenson

Bill McGonigle wrote:

> On Feb 24, 2007, at 10:02, Thomas Charron wrote:
>> The dependency couldn't be met.  The package maintainer screwed up,
>> and had it dependent on a version of a package that wasn't available.
>
> Ah, OK, thanks for the correction.
>
> Still, if I hit that problem I'd go file a bug at bugzilla.redhat.com.
> Granted I've had mixed success with that avenue.
>
> Were I desperate I'd grab an SRPM and edit a SPEC file.  That's clearly
> not something most users know how to do.


Fedora is not aimed at most users. Fedora is not meant to be stable. 
Go read their mission statement. They're basically the playground for 
Red Hat's commercial releases.


ESR, of all people, should know this.


> This may be a disadvantage of the open source model in a way.  I can
> edit the SRPM and thus it doesn't get highest priority from the
> developers.  Apple had a parallel problem with a recent security (or
> was it Airport driver?) update on the PPC architecture.  They had it
> fixed 3 hours later - at least in part because they had to.


To me, it's an advantage, but then I don't install from binary packages. 
I install everything from source on my servers and workstations, and 
then I know it will generally work if it builds. [There has been 1 
recent exception to this. See below.] I also use FreeBSD and OpenBSD 
mostly. (This is at home. At my alleged day job, I have a mix of 
Windows, FreeBSD, and Fedora servers and workstations to look after.)


BTW, sometimes I find ports that are broken on my configuration, or that 
 don't provide an easy way to enable the features that I want. When I 
encounter these, I fix them and send a patch to the maintainer of the 
port. I usually get a polite thank you and notice that some of my 
changes end up in the Makefile the next time that I update.


I have also found 1 port that would build and install and simply refused 
to function on one machine. I downloaded the latest package from the 
developer and installed that. It worked perfectly. The version in ports 
was two years old apparently, and didn't play well with amd64. I'm only 
slightly embarrassed to admit that the package in question is imap-uw.


I have also been known to change the source of various applications to 
add features, etc. I usually send patches off to the respective maintainers.


This is how open source is supposed to work according to ESR. However, 
I think he's become lazy and has forgotten how his bazaar really works. 
It sounds to me in his latest ravings that he really wants his software 
to come perfectly formed from the priests of the cathedral. He's 
forgotten the freedom and responsibility that comes from having access 
to the source code, and instead he'd rather be at the mercy of his 
vendor, just like most users. Rather, he'd prefer that his vendor got 
it right in the first place; unfortunately, he's forgotten just who his 
vendor is.


> For somebody whose CS isn't as rusty as mine - I think one should be
> able to setup a dedicated process to watch a repo and build graphs of
> dependencies and preemptively find this kind of breakage.  Comments?


Yes, you probably could, but that's what the million monkeys on the 
Internet are doing for you when they install stuff and it doesn't work. 
In this case, people are cheaper than software.


Just my .02 dollars.



Re: ARTICLE - ESR gives up on Fedora

2007-02-25 Thread Jason Stephenson

Nigel Stewart wrote:

> Without disagreeing with your points about how open source is
> supposed to work, I think doing better repo quality control
> would be a good direction for things to go.  There doesn't seem
> much point in letting a repo get into an inconsistent state and
> letting that flow downstream.


Yes, you are also correct. I should have put a smiley at the end of my 
remark as my tongue was in my cheek at that point. If you're going to be 
dealing with binary packaging, some sort of automatic checking should be 
in order.


With RPM, it shouldn't be too difficult, as you can query an RPM for its 
dependencies. If you find a package that isn't in the repository, then 
the package with the bad dependency should be flagged and disabled until 
the dependency issue is resolved.
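A toy version of that consistency check, with plain-text lists standing in for real `rpm -qpR` output and the repository's package index (the file names are illustrative):

```shell
#!/bin/sh
# Toy repo-consistency check: flag a package whose dependencies are not
# all present in the repository. Plain text stands in for rpm query output.
cat > available.txt <<'EOF'
libfoo
libbar
EOF
cat > deps-of-pkg.txt <<'EOF'
libfoo
libmissing
EOF
# Any dependency with no exact match in available.txt marks the package broken.
broken=$(grep -vxF -f available.txt deps-of-pkg.txt)
if [ -n "$broken" ]; then
    echo "flag package: unmet deps: $broken"
fi
```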


In the specific case that ESR is whinging about, the packager apparently 
had a different version of a library installed when he/she built the 
package. That sort of thing has happened to me in the past, so I 
switched to installing from source, even though I mostly use ports these 
days. (Ports are kind of like packages for source code.)


I find it somewhat ironic that ESR is complaining so loudly about 
*binary* package dependencies and that he's threatening to switch 
distros over it. You'd think someone with ESR's credentials and attitude 
would be running his own, homebrew distro. ;-) (Ha ha! Only serious.)



Re: More on MP3 and open formats

2007-02-25 Thread Jason Stephenson
Not that anyone really cares what I'm doing, but until this Alcatel 
thing, I never thought that I was infringing on someone's patent 
rights by ripping music to MP3. I did it because it pretty much worked 
everywhere without hassle, though I knew that Ogg was supposed to be better.


Now, I'm looking at alternatives such as Ogg and FLAC.

Ogg Vorbis home page:

http://www.vorbis.com/

FLAC page:

http://flac.sourceforge.net/

So, looks like I'll be spending the next couple of weeks re-ripping the 
hundreds of CDs in my collection.



Re: [OT] End-user uses for x86-64 (was: Why are still not at 64 bits)

2007-02-17 Thread Jason Stephenson
If end users are defined as home users and office users, then 64 bits 
will never matter to them, just like 32 bits doesn't matter to them 
today. For the majority of people, it's just a yardstick, like 4 
cylinders vs. 6 vs. 8. Most have some notion of what it means, 
that more is generally better, but they don't often base their buying 
decision on that one metric.


They'll buy whatever computer is available today that the salesman, the 
demo software, or a review online or in a magazine says will do the work 
they want to do. They don't particularly need to know or care that the 
CPU has a 64-bit address space. As long as it mostly works and does what 
they want in a reasonable amount of time, they are happy.


End users are buying 64-bit machines today, they're getting them with 
64-bit or 32-bit operating systems, and for the most part, it doesn't 
really matter to them. Very few of them would notice the difference in 
performance in the work that they are doing.


Sixty-four bitness will never matter to end users, but it will become 
ubiquitous over time, and sooner or later nearly everyone will have 
computers and devices with 64 bit CPUs and operating systems and the 
vast majority of people won't really notice.


Yes, there are uses for a 64-bit address space, just as a 128-bit address 
space would enable us to tackle unthinkable problems. Yes, the 
performance issues are there, but if it really mattered to John and Jane 
Computeruser, 64-bit computers would have become commonplace in the 
market before now and DEC would have bought Compaq.


As a friend of mine said in 1992, "I wish they'd stop wasting time with 
semiconductors and get on to doing computations with light."


Cheers,
Jason


Re: [OT] End-user uses for x86-64 (was: Why are still not at 64 bits)

2007-02-17 Thread Jason Stephenson
Ben's point about the advantages of more memory and the comparison to 
the 16-bit to 32-bit transition is well taken, but I don't think that 
changes my main point:


Typical end users as defined before don't really care about the 
differences. As long as they can do more or less what they want to do 
with the computer, they won't really notice the difference.


Sure, their 64-bit PC will be faster than their old 32-bit PC, but isn't 
that what they expect? It's the same thing going from a 4 cylinder to an 
8 cylinder automobile engine. You generally expect better performance.


They don't really care how that improved performance comes about.

I'm not going to argue that 64 bits won't make a difference, just that 
typical home and office users won't notice until it enables something 
that no one has done before, or not done well. If you can only run the 
ultrarealistic, VR porn with teledildonics module on the latest 64 bit 
tech, then people will run out and buy 64 bit specifically to get 64 
bit. However, before there is a compelling reason for them to get 64 
bit, they won't care as long as what they get this year is better than 
what they had last year.


Eventually, you won't be able to buy mainstream hardware with anything 
less than a 64-bit CPU, and from looking at the latest Dell PC catalogs 
that I get, that time appears to be now or very soon.



Re: Why are still not at 64 bits [was Can't figure out Firefox

2007-02-15 Thread Jason Stephenson

Ben Scott wrote:

> On 2/15/07, Thomas Charron [EMAIL PROTECTED] wrote:
>>>> ... It may be double the number of address bits, but it
>>>> is woo more than double the address space. ...
>>> Exactly how much more than double is a woo?
>> Quite specifically, it's one metric assload.
> What's that in imperial assloads?

I believe it would be 1 butt of sherry.

http://en.wikipedia.org/wiki/Butt_(unit)
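For the record, the joke has a number behind it: doubling the address bits from 32 to 64 multiplies the address space by 2^32, i.e. about 4.3 billion:

```shell
# Going from 32 to 64 address bits multiplies the space by 2^32.
echo $(( 1 << 32 ))   # prints 4294967296
```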

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: OT: Cygwin/X on Windows XP

2006-12-27 Thread Jason Stephenson
Erm, never mind, I fixed it by unmounting everything and reinstalling 
the fonts while the Cygwin bash window was still open. I found this 
solution by digging deeper into the Gmane Cygwin-x archives.


Go figure.

According to some posts that I've seen, that isn't supposed to fix it if 
you install Cygwin with DOS line endings, as I do, but whatever works, 
right?


(I'm also sending this to my work address so I can fix it there in the 
morning. Never know, I might just forget this over night. ;)


Cheers,
Jason


Re: SPARC Live CD?

2006-12-20 Thread Jason Stephenson

Tom Buskey wrote:

> I'd imagine there's a NetBSD and maybe OpenBSD or Gentoo.


I'd imagine that, too, but I can't find a working link to either for the 
Sparc architecture.


The only live OpenBSD CDs that I find are for i386.

The working links that I can find for NetBSD live CDs are also for i386.

I found a neat article on making your own OpenBSD live CD. Of course, it 
is targeted at the i386 architecture.


http://www.onlamp.com/pub/a/bsd/2005/07/14/openbsd_live.html

There are a couple of FreeBSD live CD projects:

http://livecd.sourceforge.net/
http://www.freesbie.org/

I'm not sure if they support all of FreeBSD's architectures or not. 
Also, FreeBSD only supports sparc64, not sparc.


Cheers,
Jason



Re: Linux and fonts and Firefox and human-factors design

2006-10-31 Thread Jason Stephenson

RE: User interface (which is what this has turned into)

I find the best user interface is the most minimal: something that isn't 
cluttered with a lot of buttons and gee-whiz gewgaws. I like the command 
line, and then simple GUIs for times when data needs visualization.


The problem is that everyone tries to solve every problem with the point 
and grunt interface. The interface needs to fit the task at hand.


Until the motherboard on my 7-year-old FreeBSD workstation (originally 
ran Slackware!) went kaput on me last week, I used BlackBox as my 
primary X-Windows interface. It has an easily configured root menu that 
I could put my favorite apps into. If I needed something else, I could 
fire up an rxvt and run something from there. It's simple, has a short 
learning curve, and works for the beginner and expert alike.


Now, I'm stuck using my Windows XP gamestation to do my work. With 
Mingw, MSYS, Cygwin, Firefox, Thunderbird, GNU Emacs, and Samba on my 
web/mail server, it ain't so bad.


This also opens the question of trying to win the desktop away from 
Microsoft. Wes Peters wrote an interesting article on this topic back in 
May (URL below). I think he makes a good point.


http://ezine.daemonnews.org/200605/dadvocate.html

It also raises the question of whether to make X more like Windows. Do 
you really want to imitate Microsoft's UI mistakes in order to win 
converts? What we need is something completely different, and I'm not 
talking about Mac OS X, either.


Now, on the topic of fonts in Linux/*NIX:

No one has answered this because the objections are real. Using fonts in 
*NIX totally blows. Witness all the different font servers that are 
available for X. Fonts need some serious attention. It seems that every 
X app does its own thing looking for fonts. Not to mention groff, TeX, 
and the others.


Anyway, just my pseudorandom thoughts while I wait for cvsup to finish 
updating ports on my server.--I need to update clamav and install 
cdrtools so I can get a more recent backup of my files off my old PC's 
/home disk.


Cheers,
Jason


Re: Spam and mailing lists

2006-10-17 Thread Jason Stephenson

mike ledoux wrote:

> I'm sure there are some exim fans out there, but I'm not one of
> them.  I have had two experiences with Exim, neither positive.
> The relevant one was a server that processed 20-50k inbound
> messages/day, and was ground nearly to a halt under Exim.  Replacing
> with a properly configured sendmail on the same hardware completely
> eliminated the problem.


My experience was just the opposite. When I came to my current job in 
2003, the mail server ran Sendmail on Red Hat 7. I was asked to do 
something about the growing spam and virus problem, so I installed 
clamav and spamass-milter. Clamav seemed to work just fine, but Sendmail 
would crap out every few hours, and it would sometimes make it a day or 
so before crashing. It turns out that there was apparently a memory leak 
in the milter code for Sendmail that shipped with Red Hat 7 or that was 
in the update RPM.


After I switched to a properly configured installation of Exim with 
SpamAssassin and Clamav, the crashing of the MTA vanished on that machine.


The server is the same computer today, though it has had RAM and OS 
upgrades as well as the addition of a hard disk for storing IMAP 
folders. Last week it delivered 127,713 messages and handled 201,781 
SMTP connections. That puts it very near the 20,000 messages per day mark.


Load sometimes goes up to 5 when our batch notices are being sent. 
That's when my software hammers the server with a thousand or two 
messages in the space of minutes.


Exim may very well require more resources than Sendmail. I've never 
benchmarked either application, nor seen the results of published 
benchmarks.--Does anyone benchmark MTAs? Would it be a worthwhile 
research project?




> Of course, that might say more about the difference in the previous
> admin's competence and mine than it does about the difference
> between exim and sendmail...


Yes, I think the key phrase in the preceding paragraphs is "properly 
configured."




> It should be possible to do what you want to do with a sendmail
> milter, which would not require changing MTAs.



That's very likely, but I still like Exim, and I can point Ben at some 
documentation for doing what he wants. ;)



Re: Spam and mailing lists

2006-10-16 Thread Jason Stephenson

Ben Scott wrote:

> [replying to off-list message, with author's permission]
>
> On 10/16/06, Jason Stephenson [EMAIL PROTECTED] wrote:
>> Exim ... It's my preferred MTA.
>
> Every MTA is somebody's preferred MTA.  ;-)


True. I will enumerate the reasons that I like Exim:

1. It is not Sendmail.
2. It is very powerful.
3. It is what I know.

[At least, I'm honest.]

I've used Exim since 2000 (version 3.something). I found myself in an SA 
job at the University of Kentucky Engineering Computing Center. The MTA 
on the College of Engineering's mail server, for which I was partly 
responsible, was Exim. I learned to configure and maintain Exim at that 
job, found that Exim suited my needs elsewhere, and have not bothered to 
try another MTA.


Prior to that job, I had dabbled in Sendmail and found it ridiculously 
complex.--I also think it somewhat silly to use M4 to configure an 
application.--And I found it needlessly arcane, with overly short 
configuration mnemonics. Exim's configuration is very straightforward by 
comparison, and for a basic setup, there isn't much that needs 
customization from the default.


However, if you need something special, chances are that Exim can do it. 
If Exim can't do it as-is, it is possible to have Exim run external 
programs or connect to local daemons and use the results. If that is not 
enough, Exim can be compiled with Perl embedded and you can run Perl 
directly in the configuration file and at run time. (I don't recommend 
this, but used sparingly, it can solve quite a few interesting 
problems.) Exim can run also filters on messages after they've been 
processed but just before delivery. Finally, Exim has a notion of 
routers and transports. These can be used to make Exim deliver messages 
in a variety of ways, and can be used in place of some of Exim's ACLs 
which are normally used during the various SMTP phases.


Exim also supports database lookups directly, including Berkeley DB, DNS, 
flat text files, LDAP, Oracle, MySQL, PostgreSQL, and Interbase/Firebird. 
These are very handy in the ACLs, and far more handy than they might 
seem at first blush. I use them to maintain my own blocklists of spam 
hosts, and this feature, along with a little Perl and router/transport 
magic, could be used to implement a listserv directly in the MTA.
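As an untested sketch only: in Exim 4, a MySQL-backed blocklist of the 
sort described above might be wired into the RCPT ACL roughly like this. 
The table and column names, the database credentials, and the ACL wording 
are hypothetical, not taken from any real configuration:

```text
# Main configuration: tell Exim where the database lives
# (host/database/user/password -- hypothetical credentials).
hide mysql_servers = localhost/mail/exim/secret

# In the ACL that runs for each RCPT command:
acl_check_rcpt:
  deny
    message   = $sender_host_address is on our local blocklist
    condition = ${lookup mysql{SELECT 1 FROM blocklist \
                  WHERE host = '${quote_mysql:$sender_host_address}'}{yes}{no}}

  accept
```

The same ${lookup ...} syntax works in most string expansions, which is 
what makes these lookups handy well beyond ACLs.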


I'm sure that Postfix and Qmail are quite configurable and useful. I 
haven't used them, so I don't know what their capabilities are exactly. 
Therefore, I won't pretend to compare them to Exim.


Granted, you could argue that my Sendmail experience has been limited 
and that it has all largely been negative. I can't really compare the 
deeper features of Sendmail with the deeper features of Exim because I 
never took the time to figure out Sendmail's deeper features. I wanted 
something that worked, was powerful enough to get the job done, and 
would not take me a lifetime to master. ;)


Again, it's probably not exactly the answer that Ben is looking for, but 
it's hard to get specific about why I like Exim, or anything for that 
matter. It basically comes down to features, configurability, and it is 
what I know.





If the above doesn't work exactly, I know how to set up the ACLs in Exim
so that you can check the arguments of any SMTP command and do whatever
checks are required.



 This part is interesting to me.

 Some searching for something unrelated just now did, in fact, lead me to:

http://www.exim.org/mail-archives/exim-users/Week-of-Mon-20060703/msg00026.html 


The above is a pretty good idea and some of that could be copied into 
the appropriate places in an Exim configuration file, but parts of it 
are missing.--I like the suggestion at the bottom about possibly 
implementing the listserv directly into the MTA.


Additionally, the above link points out something Ben may have 
overlooked in his original request. Because of the way that SMTP works 
and because mailman uses the From: and Reply-to: to determine who can 
send mail to the list, you can't really check for a valid list member 
until the SMTP DATA command is sent and the data are received. If you 
use any earlier phase (MAIL, RCPT, etc.), then you may actually kick a 
message that should legitimately go to the list. The reason being that 
you don't have the From: and Reply-to: headers until the DATA phase.


If you want to check on the envelope sender, then you force everyone to 
have total control of their MUA and/or you preclude someone from sending 
a quick message when not at their usual location, assuming that they can 
easily change what appears in the From: header of the email but not what 
the MUA sends as the envelope sender in the MAIL command.


Ostensibly, you could also end up in a situation where Exim accepts the 
message for the list, but mailman rejects it because the envelope sender 
is on the list and does not appear in the From: or Reply-to:.





which is exactly what I'm talking about.  Unfortunately, the linked-to
scripts are on a server which

Re: Solved: Problem with bash login.

2006-08-05 Thread Jason Stephenson

Ben Scott wrote:

 Okay, the next thing to do (after Thomas Charron's .gnomerc idea) is
poke around under /etc/X11/ for anything that looks like a shell
script or initialization file.  Generally speaking, everything starts
in there, somewhere.  In particular, if the system is using gdm, the
/etc/X11/gdm/gdm.conf file may prove instructive.


Here's what I'd do... I'd switch to XDM and then you pretty much know 
that .xsession is going to be executed. The other option is to switch to 
using startx from the console, and then the file is .xinitrc.


If you still want to use GNOME or KDE, I believe that they both have a 
little program for startup (startkde and gnome-session, respectively). 
Either of these should do all you need to start up the environment. 
(When I was hacking KDE stuff back in the day, I'd actually start KDE 
from an rxvt while running the Blackbox window manager.)


At the bottom of your .xsession you exec whichever one you want.

Cheers,
Jason


Re: Malware best practices

2006-07-27 Thread Jason Stephenson

Ben Scott wrote:


 The MySpace worm does highlight something important: Programmers
keep making the same stupid mistakes, over and over and over and over
and over again.


As a programmer, I can tell you why. Most programmers are not well 
versed in the art or the science (if there really is any) of programming.


Do you know why? Most programmers don't really get to see that much 
source code. It's true. In the commercial realm of closed source 
software most programmers only get to see the code of the project(s) to 
which they are assigned. They never get to see much code that's better 
or worse than what they are used to seeing.


The same is true in most university CS programs. Students are not 
exposed to all that much code. It's mostly theory and mathematics and 
then applying that theory and mathematics in code.


This is very far from what most architects or engineers do in their 
educations or their careers. They study others' designs and 
implementations. They appreciate one another's work as art. Closed 
source programmers cannot do this, while open source programmers have 
the opportunity. (However, I doubt very many of them do this.)


Additionally, software is in its infancy. I imagine that the first few 
thousand bridges that were built were pretty dodgy things. They were 
quite likely to collapse under you. It took mankind a long time 
to figure all this out. (They still don't always get it right, as the 
Big Dig mess is proving.) Software is a bit more complicated on the 
inside than a bridge, too.


Writing software is like writing fiction or nonfiction in the sense that 
the only way to really get better is to do it. You read a lot and you 
write a lot.--It helps to eat your own dog food, too.



OT: Whirled Peas (was Re: Malware best practices)

2006-07-27 Thread Jason Stephenson

Fred wrote:


Humans will never learn to live in peace (I pray that I am wrong here). Oh 
well. Perhaps the way to induce peaceful living would be to give the 
cyber-equivalent of the thermonuclear bomb to everyone. Kinda like giving 
everyone a lit match whilst standing in a pool of petrol.


Idi Amin, the infamous African strongman and reputed cannibal, is 
attributed as having said that there will never be world peace until 
every nation has a nuclear arsenal sufficient to destroy its neighbors.


I shudder to think that he may have been correct.

For myself, I recently decided to stop distinguishing between the myriad 
little wars going on, or even the larger wars that have happened in the 
past. As far as I'm concerned, there has been one global war raging for 
the last 5,000 years or so.--War is civilization's handmaiden, or is 
that the other way 'round?



Re: Stupid question regarding Thunderbird and IMAP

2006-07-27 Thread Jason Stephenson


You seem to have already gotten answers to your problem and I hope that 
they work for you.


Something I'll add is that when building UW IMAP (boo hiss! I hear you 
jeer), I've had to use the WITH_NETSCAPE_BRAINDAMAGE option to get IMAP 
to work properly with Mozilla and Mozilla-based mail programs.


I don't know if the above applies to Dovecot, but since the email 
programs you mentioned having trouble with are Mozilla-based (except for 
SquirrelMail), I thought that I might throw this out there.


Fortunately for me, the users at my day job aren't sophisticated enough 
to want folders within folders on their IMAP accounts.



Re: Problem with bash login.

2006-07-15 Thread Jason Stephenson
Ah, professor, but the real problem here could be Gnome/GDM using a 
non-standard initialization


I get my bash environment in regular X by sourcing .bash_profile in my 
.xsession file:



#!/usr/local/bin/bash

# Source .bash_profile if we have one; otherwise fall back to .profile:
if [ -f ~/.bash_profile ]; then
    . ~/.bash_profile
elif [ -f ~/.profile ]; then
    . ~/.profile
fi

exec blackbox


It is that easy and everything I run later seems to have the proper 
environment.


Cheers,
Jason Prof. Cockiasse Stephenson



Re: What's a developer to do?

2006-04-20 Thread Jason Stephenson
Uh, he could release the source and let people build it themselves. That 
is honestly the only way to guarantee the code runs on your machine, to 
compile it yourself.


Frankly, I think that is what the different distros are for, providing 
binary packages that work with their mix of software and libs. They 
*should* be doing the work of distributing the binaries.


I must say, though, that I've never really had an issue with getting 
binaries to run on GNU/Linux or FreeBSD. I've always been able to 
resolve the dependency issues by reading the errors when a dynamically 
linked application fails to link. The most trouble that I recall having 
was with getting Java to run on a Slackware box, and that was a simple 
matter of soft-linking a couple of binaries from where Slackware puts 
'em to where Red Hat would put them, and then running ldconfig so that 
the dynamic linker's cache would get updated.


Anyway, I know this isn't helpful but I'm starting to agree more and 
more with RMS (and probably Maddog, too) the more that I deal with 
binary-only software. I've recently adopted the motto that it isn't 
software without source code, so I have little sympathy for the 
difficulties that developers face getting their binary-only software to 
work on different platforms.


Sorry for the polemic. I'll step down off of the soapbox, now.

I'd suggest to your developer that he actually consider running ldconfig 
as part of the post-install. That just might fix a lot of the problems.


Cheers,
Jason


Re: Free web-based email?

2006-04-08 Thread Jason Stephenson

Bill McGonigle wrote:

I've been pretty happy with SquirrelMail.  It's not flashy or AJAX yet, 
but it works OK.  Runs on Linux/Apache/PHP - I think many ISP's offer it 
with their minimal-level packages (a few bucks a month).


I set up SquirrelMail where I work. It works and was pretty simple to 
set up. Another feature it lacks that I might actually add is SSL 
support. SquirrelMail itself doesn't need much to support SSL if your 
server does. It just needs to have all the links that start with 
"http://" changed to use a function that checks the protocol in use and 
outputs either "http" or "https". 'Course, it might just work if you 
configure Apache to force SSL on that directory.


Frankly, if you don't trust Google with your email, then I don't think 
you could really trust anyone other than yourself, could you? Says the 
guy who has had a Yahoo! account since 1999.


Cheers,
Jason



Re: Question about GPL issue.

2006-04-06 Thread Jason Stephenson

Bruce Dawson wrote:

One thing that might help would be clean rooming it... have *someone


I've done the above in a couple of projects. Mostly by reading the 
manpage and the usage report and doing the code based on that, without 
looking at the source code of the original program.


Actually, looking at the original source doesn't necessarily taint your 
project, either. If it did, no one could look at much of anything other 
than their own code. In the case of some low-level stuff, porting from 
FreeBSD to Linux and vice versa ends up being a nearly complete rewrite 
anyway.




But, IANAL. Your experiences will be different.


Yeah, someone will find a lawyer somewhere who will argue that doing 
what either I or Bruce have described creates a derivative work. Doesn't 
matter if they are legally correct or not; that's what judges, juries 
and courts are for.






To stay spiritually clean, :-) you could look in the BSD source trees
for a module that does the same thing.  Then you can do anything you want
with it as long as you keep the notice in the comments.



That's what I would do if I were faced with a large body of code. If it
was a small body of code, I'd just re-write it from scratch.


Another option is to contact who ever claims the copyright on the code, 
explain your intent, and ask if you can use it under a different 
license. There's nothing in the GPL that says the code can't be 
dual-licensed. There are some prominent GPL projects that do this, too. 
(MySQL comes to mind and Qt, but Qt is a more complicated story.) To 
some extent, that is what OSS is about. Stallman doesn't like it, but 
he's not God, merely a saint ;)


Cheers,
Jason



Re: perl and network addresses

2006-03-31 Thread Jason Stephenson

Stephen Ryan wrote:

Can anyone think of a better way to blit an arbitrary number of bits 
from 0 to 1?



Well, let's see

Taking advantage of the fact that all of the '1' bits are at the end of
the hostmask, you've actually almost gotten it already.

hostmask = (1 << (32 - n)) - 1
netmask = ~hostmask


Doh! That's so obvious, so obviously, I overlooked it. ;)



1 << (32 - n) in binary is (n-1) '0' bits, a '1', then (32 - n) '0'
bits.  Subtracting 1 from that gives n '0' bits followed by (32 - n) '1'
bits.  The 'not' operator flips all the bits for the netmask.

This works for /1 through /32 networks, even though some of those are
nonsensical.  A /0 might break this because of overflow (1 << (32 - n)
overflows a 32-bit integer); theoretically, it should work even for /0
so long as 1 << (32 - n) returns 0 (32-bit gcc 4.0 on my Athlon64 desktop
computes this correctly, but complains 'warning: left shift count >=
width of type' while compiling).  Anyway, if you're running a /0, you've
got other, bigger problems.


Using gcc 3.4.4 on a 32-bit Pentium III, I get no warnings when 
compiling your test program, even with -Wall. When it runs, 0 gives the 
same result as 32, so it overflows (silently) on my machine.


If you're running a /0, you really must be Root. ;)

I'm going to limit the input on my network calculator and throw an error 
on IPv4 net masks that are not between /8 and /30 inclusive. The reason 
being that masks shorter than /8 or longer than /30 don't really give 
valid IPv4 networks.


Cheers and thanks,
Jason



Re: perl and network addresses

2006-03-30 Thread Jason Stephenson

Paul Lussier wrote:


Yes, more or less.  Between you and Jason I've been able to come up
with exactly what I need.  Thanks a lot for all your help.  Why I
couldn't see this for myself is beyond me.  Of course, this week has
been full of me missing the details to the point where I somehow
managed to mail my taxes to myself from work the other day rather than
to my accountant :) So, just in case you wondered, the USPS system is
working at peak efficiency !


You're very welcome to the help, and we all have those weeks. It took me 
a while to realize what your real question was.


Once I figured out your question, it was actually rather interesting: 
adding network addresses to interpolate between different networks. 
Trying to answer it allowed me to discover some facts about IPv4 
addresses and masks, so I got to learn something, too.


The thing that I found most interesting is that if you use the one- or 
two-digit kind of mask, i.e. /19, you can determine how many addresses 
are on the network via the following C code: addresses = 1 << (32 - n), 
where n is the part of the mask after the /.


I wish I could find a faster way to blit the bits to make the real 
mask from the /N style than using a for loop. The only alternative I can 
think of is to use a switch on the 33 possibilities (0-32).--Of course, 
anything shorter than /8 or longer than /30 doesn't make a real network.


Can anyone think of a better way to blit an arbitrary number of bits 
from 0 to 1?


Now, I'm working on a network calculator application that will support 
IPv6 as well. I should probably do it in JavaScript, uh, sorry, AJAX, so 
that the Web 2.0 people will notice. ;)


Cheers,
Jason


Re: perl and network addresses

2006-03-28 Thread Jason Stephenson

Paul Lussier wrote:

Python [EMAIL PROTECTED] writes:


Would it help to convert to 32-bit integers?  



It might.  I'll try that.


It will definitely help. If you get the netmask and address both in 
32-bit integers, then calculating the network and broadcast addresses is 
very straightforward. Here's some sample code:


network = address & netmask;
broadcast = address | ~netmask;

The above is C, but should work in Perl, too.
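To make those two lines concrete, here is a self-contained version; the 
10.0.32.5 sample address and /19 mask are my own choices for illustration:

```c
#include <stdint.h>

/* Network address: keep only the bits covered by the mask. */
uint32_t network_of(uint32_t address, uint32_t netmask)
{
    return address & netmask;
}

/* Broadcast address: turn on every host bit. */
uint32_t broadcast_of(uint32_t address, uint32_t netmask)
{
    return address | ~netmask;
}
```

For 10.0.32.5 (0x0A002005) with a /19 mask (0xFFFFE000), these give 
10.0.32.0 and 10.0.63.255.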

Of course, after looking back through the thread, I see Ben has already 
pretty much answered the above. ;)






I think I understand the arithmetic.  I do not really understand what
you are trying to do.



That's okay, neither do I ;)

(If you really want the long convoluted discussion, I'll be glad to
post it, I just figured no on would care.  Of course, I also often
misunderstimate the intellectual curiosity of fellow geeks :)


I think Paul explained it pretty well in his first post. Let me explain 
to see if I really understand.


Paul is using a network that is restricted to using a /19 netmask for 
addressing, but it is really using a /16 when configured. So, he wants 
to limit addresses to 10.0.32.0/19 but needs to configure broadcast and 
network addresses for 10.0.32.0/16. Why he needs to do that, I have no 
idea and wouldn't need to know. ;)


Ben's previous message pretty much explains how to solve this.

It seems to me that the answer is that your IP addresses are limited to 
the range of 10.0.32.0 to 10.0.63.255 with 10.0.0.0 being the network 
address and 10.255.255.255 being the broadcast address, no?


Cheers,
Jason


Re: perl and network addresses

2006-03-28 Thread Jason Stephenson

Paul Lussier wrote:

Jason Stephenson [EMAIL PROTECTED] writes:



It seems to me that the answer is that your IP addresses are limited
to the range of 10.0.32.0 to 10.0.63.255 with 10.0.0.0 being the
network address and 10.255.255.255 being the broadcast address, no?



Err, you've got the IP addresses wrong.  It's 10.32.0.0/16, but
segmented on a /19 boundary. I need to be able to calculate the next
network, which for 10.32.0.0/19, would be 10.64.0.0/19, then take the
host portion and add it to this new network such that any given host
has the same host portion on all networks it may exist on.


Doesn't matter. I got the network address wrong, too. ;)

You want to interpolate the address of one host to another network, is 
that it?





Re: perl and network addresses

2006-03-28 Thread Jason Stephenson

Paul Lussier wrote:


Errr, no, just the opposite actually.  Trying to *prevent* routing
from a very existent router :)


Sounds to me like what you really need is a router with VLAN capability. 
If I understand correctly, it sounds like you're trying to implement VLANs.


Your setup actually sounds very similar to something that we're 
designing for all the libraries in our consortium. Right now, each site 
has a Class C (/24) on a 10.10.*. In the near future, we plan to 
implement each site having a Class B (/16) with different class Cs for 
each VLAN. For example, if a site is now on 10.10.32.0, it will move to 
10.32.0.0 with something like 10.32.0.0/24 reserved for network 
equipment, 10.32.10.0/24 for the staff, 10.32.20.0/24 for the public, 
10.32.30.0/24 for staff wireless, 10.32.40.0/24 for public wireless, 
etc.--The Dracut Public Library will be our first test case, since 
they're moving (back) into their renovated building next month.


Without VLANs setup in the router, I can't imagine how that would work 
to prevent traffic among the various 10.32.0.0 subnets. I suppose you 
could simulate it with some really complicated routing rules.


At this point, my knowledge on the matter of networking begins to recede 
into nothingness. I can set up a simple Linux or *BSD router/firewall. I 
can do the math (poorly, but that's what computers are for). I can even 
use the socket() interface, but configuring fancy-shmancy, complicated 
network topologies is beyond my current abilities.


I didn't design the above-mentioned topology, nor did I figure out the 
configuration in the Cisco routers that we buy. However, our contractor 
has promised to show me enough so I can break things. ;)


Long story made slightly longer, I'd suggest looking up how to configure 
VLANs on whatever you're using for a router.--I know you mentioned a 
FreeBSD firewall earlier.


Cheers,
Jason Can't-see-the-network-for-the-wires Stephenson


FTP PASV IE

2006-03-23 Thread Jason Stephenson
I always say things that are wrong and discover within 5 minutes of 
saying them that they are wrong.


IE has an option to use passive mode in Internet Options. I saw it just 
now when trying to find another option. (I'm doing something for work 
that pretty much requires IE at the moment.)


I just wanted to admit my mistake before getting publicly trounced. ;)

OK, I'll go back to playing with the packet shaper at the office

Cheers,
Jason


Re: FTP, proxies, firewalls (was: Fedora ftp install without a name server?)

2006-03-23 Thread Jason Stephenson

Ben Scott wrote:


Perhaps, an upgrade or a switch to a different firewall software is in order.



  What are you using now?


Currently, it is a relatively old release of IP Filter (ipf) from 
http://coombs.anu.edu.au/~avalon/ that was hacked up by the OpenBSD 
folks before the licensing clarification.


(Really weird stuff seems to be going on with that machine tonight. It's 
running OpenBSD 2.7, and I'm using a KVM to access the console. However, 
tonight, when I try to login at the console, everything I type is in all 
caps, regardless of the state of the caps lock key. I can ssh in just 
fine, so I want to blame the presence of the KVM and the fact that I 
recently started using the keyboard key combination to switch between 
systems. In the past, I always used the button on the KVM itself. It 
must have something to do with that machine having an AT keyboard port 
and I'm using an AT/PS-2 adapter to connect it to the KVM.)


I used to use ipfw when my gateway was a PowerMac with MkLinux on it. I 
had pretty good luck with ipfw, and its built-in FTP proxy module 
seemed to work.


I intend to upgrade my gateway to OpenBSD 3.9 when it comes out this 
summer. I may end up having to upgrade the hardware, too.



Re: Firefox crashes on a bad .gif?

2006-03-16 Thread Jason Stephenson

Bair,Paul A. wrote:


Found it: http://icons.wunderground.com/graphics/360arrows-r-nogray.gif



I'm running Firefox 1.5.0.1 on FreeBSD 5.4 and the gif above does not
cause problems in my browser.


I'm running Mozilla 1.7.12 on FreeBSD 6.0 and it doesn't cause the 
browser any trouble. Makes my visual cortex dump core, though. Maybe 
you're better off not seeing it. ;)



Re: METROCAST BLOCKS RESIDENTIAL E-MAIL

2006-03-14 Thread Jason Stephenson

[EMAIL PROTECTED] wrote:


Just off the top of my head...

 * Mandating SMTP AUTH
 * Universal use of GnuPG + message signing
 * HashCash (or similar systems) http://www.hashcash.org/


They're all hacks. The only *real* solution is something completely 
different.




In general, any spam-proof messaging system will follow these rules:


There's no such thing. Never will be.



 (1) By default, do not accept any messages


You can do that now, with greylisting, which eliminates the majority of 
spam and viruses. Greylisting means returning a temporary failure the 
first time that a new sender tries to deliver an email to your server, 
or it could be configured on a user-by-user basis. Spam agents and 
viruses don't generally try again, so those messages are never 
delivered. Legitimate MTAs will try again, so legitimate mail will get 
through. However, this won't stop spammers that use real MTA software.
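To illustrate the idea, the core greylisting decision can be sketched in 
C. This is a toy sketch of the logic only, not how any real MTA or 
greylisting add-on stores its state; the 300-second delay and the fixed 
in-memory table are arbitrary simplifications of mine:

```c
#include <string.h>
#include <time.h>

#define GREYLIST_DELAY 300   /* seconds a new triplet must wait */
#define MAX_ENTRIES    1024

struct grey_entry {
    char   sender[256], rcpt[256], ip[64];
    time_t first_seen;
};

static struct grey_entry table[MAX_ENTRIES];
static int n_entries;

/* Return 1 to accept the message, 0 to send a temporary failure.
 * First sight of a (sender, recipient, ip) triplet always defers;
 * a retry after GREYLIST_DELAY seconds is let through. */
int greylist_check(const char *sender, const char *rcpt,
                   const char *ip, time_t now)
{
    for (int i = 0; i < n_entries; i++) {
        struct grey_entry *e = &table[i];
        if (strcmp(e->sender, sender) == 0 &&
            strcmp(e->rcpt, rcpt) == 0 &&
            strcmp(e->ip, ip) == 0)
            return now - e->first_seen >= GREYLIST_DELAY;
    }
    if (n_entries < MAX_ENTRIES) {        /* remember the new triplet */
        struct grey_entry *e = &table[n_entries++];
        strncpy(e->sender, sender, sizeof e->sender - 1);
        strncpy(e->rcpt, rcpt, sizeof e->rcpt - 1);
        strncpy(e->ip, ip, sizeof e->ip - 1);
        e->first_seen = now;
    }
    return 0;                             /* defer: 4xx SMTP reply */
}
```

An MTA would call this from its RCPT handling and map a 0 result to a 451 
temporary-failure reply; real implementations also expire old entries and 
whitelist triplets that complete the retry dance.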



 (2) Accept messages from authentic senders


Who determines authenticity? If it's just that there's a key pair on a 
server somewhere, then there's nothing to stop spammers and viruses from 
creating their own key pairs. There's nothing to stop them making new 
ones when the old ones are revoked, or have no trust. (And AFAIK, only 
the key owner can revoke their own key. I can't revoke yours and you 
can't revoke mine.)



 (3) Retract sender authority if/when it's used to send spam


You've got that now with black lists, and you'll still need black lists 
with PKI. If you only trust keys signed by people or organizations you 
know and trust, you'll never get mail from strangers, who may want to 
offer you a real job, etc.


The real problem with anything designed to work with SMTP as it is, is 
that the cost of delivery and the cost of determining what's ham and 
what's spam is squarely on the recipient. It costs a spammer with an 
army of bots nothing to send out 1,000,000 emails. It costs the 
recipients of those emails in bandwidth, server resources, and even man 
hours to deal with the influx of spam. All of that adds up to money.


If the spammer had to pay for the storage of their messages before 
delivery (or pickup, rather), then spam would disappear very quickly. 
This is, in fact, what the IM2000 proposals have been about, making the 
sender bear the cost without adding some ridiculous email tax or 
micropayment scheme.


It is an extremely tough nut to crack. Numerous proposals have been 
discussed, and there are many critiques of them on the web. (If you 
search for IM2000 discussion or proposal, I'm sure you'll find many of 
them.) Nothing that's been proposed so far seems adequate to me. Every 
proposal so far can be shot through with holes.


I'm starting to think that it is the very open architecture of the 
Internet that is the real problem. At its very base, the 'Net is 
designed to be open. The basic plumbing was designed at a time when 
there were only a few thousand nodes, and the admins all knew each other, 
more or less. You could pretty much trust everyone else to behave more 
or less responsibly.


Today, that architecture really makes it like a frontier environment. 
Each individual is pretty much on their own in protecting themselves 
form the hazards and predators of the environment. If you have an email 
server, you must run anti-virus and anti-spam software. If you don't, 
that's like a colonist in 1640 coming to the New World without a 
firearm. It's more or less the same for firewalls and whatever the 
latest whiz-bang security device is. It has gotten so that even on 
corporate, government, and NGO LANs, you need firewalls on each machine 
to protect them from each other.


It's also a human problem. Some people just are not ready for a frontier 
environment. If it were a real frontier, those people who keep opening 
the virus-laden attachments in their email would have been eaten by 
wolves by now. Ditto for those people who have fallen for phishing 
schemes, etc. That is the Internet equivalent of being eaten by wolves.


Things are only going to get worse when IPv6 becomes mainstream and 
there are trillions of throw-away addresses.


What are the alternatives? Something like AOL or Compuserve before they 
joined the rest of the 'Net? No. There was abuse there, too.


I can't say for sure. However, I'm convinced that, without completely 
redoing the network architecture so that it resembles a virtual police 
state (read: prison or public high school), all bets are off. 
We're just going to have to deal with things as they are, unless someone 
has the cojones to pony up a better solution, and can convince 
1,000,000,000+ people to switch to it all at the same time.


Cheers,
Jason


Re: Passwords: does size matter, what characters?

2006-03-10 Thread Jason Stephenson

Drew Van Zandt wrote:


Also... what drives me crazy is that requirements conflict on websites
where security isn't important anyway, so I can't use the same
password for all the ones that don't really matter.  PASSWORDS ARE
NEVER GOING TO BE THAT STRONG, get over it and use real authentication
(2-factor) if it's that important.


Ah, yes, two-factor authentication based on something you can forget and 
something you can lose. ;)


Seriously, though, I believe that passwords are great for letting people 
in and lousy at keeping people out.


I *think* I mostly agree with Drew, though. I actually do use the same 
or very similar passwords on many different web sites where the security 
doesn't really matter.


When I need a good password, I run my little pgen program that spits 
out line noise (or the equivalent). I write it down, and after I've used 
it a couple of times, my fingers remember it.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Passwords: does size matter, what characters?

2006-03-09 Thread Jason Stephenson

Ted Roche wrote:
Designing a web site for a client, he asked what the general guidance  
was for passwords. Users are going to be logging into the site (just  
plain http initially, no banking info, SSNs or credit card numbers,  all 
that comes after SSL and first round financing). Looking around,  web 
sites I visit are all over the place and some are nonsensical (no  more 
than 8 characters), others require a minimum of five, six, some  allow 
alphanumeric but no punctuation. I usually throw in upper-,  lower-, 
numeric and a punctuation symbol or two. Is there some reason  to shy 
away from letting the user type whatever they want, assuming  you escape 
it properly in HTML and the destination database? Not  allowing them to 
use their login ID seems like a good minimal rule.


 Are there commonly accepted guidelines?


Let them enter what they like. I usually add punctuation to my passwords 
and have a little C program that outputs passwords of various lengths 
(the length is specified as a command line parameter). These passwords 
resemble line noise: ~mgMs;T!--I get frustrated with sites that won't 
accept punctuation.
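For anyone who wants something similar without writing C, here is a rough 
shell sketch of such a generator (my own illustration, not the actual pgen 
program; it assumes /dev/urandom and tr are available):

```shell
# pgen-style password generator sketch; length comes from the first
# argument, defaulting to 12 characters
len=${1:-12}

# pull random bytes, keep only letters, digits, and some punctuation
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9.,;:!@#$%^&*_+-' < /dev/urandom | head -c "$len")
echo "$pw"
```

The output resembles line noise, which is the point.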


It's not difficult if you remember to encode and decode the strings 
properly. The browser should encode it when sending it over, so decoding 
it properly is your main concern.


I actually store my passwords for various web sites as plain text in a 
mysql database. I keep the passwords for my workstations and servers in 
a little notebook along with various notes about little admin tricks 
that I've picked up, etc. (Yes, I know, they aren't stored in the most 
secure manner, but I believe it is safe enough for my purposes.)


If you're storing these in a database for web site authentication 
purposes, then you'll probably want to store a hash of the password and 
not the actual password. This is safer in case your database/server 
gets hacked. To authenticate a user, you'd hash the password input and 
compare it to the hash in your database. MySQL has a password() function 
that can do the hashing, or you could use some SHA or MD5 algorithm.--I 
imagine PostgreSQL also has something like the password() function in 
MySQL, but I've never programmed with Postgres.
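To illustrate the hash-and-compare idea without MySQL, here is a shell 
sketch using sha256sum from GNU coreutils, with a per-user salt added (the 
salt and password values are made up; in real use the salt should be 
random for each user):

```shell
# store salt:hash in the database instead of the password itself
salt=5f2c91ab                       # real use: random per user
pass='s3cret!'
hash=$(printf '%s%s' "$salt" "$pass" | sha256sum | cut -d' ' -f1)
stored="$salt:$hash"

# to authenticate, recompute with the stored salt and compare
attempt='s3cret!'
check=$(printf '%s%s' "$salt" "$attempt" | sha256sum | cut -d' ' -f1)
[ "$check" = "$hash" ] && echo "login ok" || echo "login failed"
```

The same scheme works inside any web application language; the database 
only ever sees the salt and the hex digest.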


The other option is to use normal HTTP authentication and let htpasswd 
(if you're using Apache) manage the passwords for you.


Anyway, that's about all I can think of that you haven't mentioned. I 
wouldn't impose too many restrictions on their input, but I wouldn't 
allow Joes (the user name as the password), and would probably require a 
minimum of 6 characters. Other than that, I'd let them enter what they will.
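Those two minimal rules are easy to check in code. A shell sketch (the 
function name and messages are my own invention, purely for illustration):

```shell
# hypothetical check_password: at least 6 characters, and not a "Joe"
# (the login name used as the password)
check_password() {
    user=$1 pass=$2
    [ "${#pass}" -ge 6 ] || { echo "weak: too short"; return 1; }
    [ "$pass" != "$user" ] || { echo "weak: password equals login"; return 1; }
    echo "ok"
}

check_password alice alice        # prints weak: too short (only 5 chars)
check_password jsmith jsmith      # prints weak: password equals login
check_password jsmith '~mgMs;T1'  # prints ok
```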


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Used Laptops (was Re: METROCAST BLOCKS RESIDENTIAL E-MAIL)

2006-03-09 Thread Jason Stephenson

John Abreau wrote:


I've had good luck with eBay. I picked up an old Thinkpad there a few
years back for about $180, and it still works well today.



I thought of ebay, but I've not used my ebay id in about 6 years, and 
I'd rather not go that route.


I found a couple sites today that sell refurbished laptops and they even 
offer warranties. www.usanotebook.com looked like a pretty good place to 
go online.


I think I'll do some looking around in the Salem (NH) area. I'll check 
out Microseconds and PCMax.


It's funny, in a way. For $700 you can get a brand new, low-end 
notebook, or you can get a mid-range notebook that is one or two years 
old that has better specs than the brand new one.


I'm definitely going the used route this time around, as it is not going 
to be my primary computer system.


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Used Laptops (was Re: METROCAST BLOCKS RESIDENTIAL E-MAIL)

2006-03-09 Thread Jason Stephenson
I'm CCing my reply to the list because it sounds like Christopher meant 
for his question to go to the list.


Christopher Chisholm wrote:


I've been keeping my eyes out for an old laptop HD for a while.. I 
really want one of those USB 2.0 enclosures on a small drive, but the 
ones they sell are crazy over priced.  I just want something like 5 gigs 
or more for some music/work files.
Does anyone know of a good place to look for something like this?  I 
usually check e-bay the past few times I've looked there wasn't really 
anything very cheap.


I've not seen very good prices on laptop hard drives. They always cost 
more than 3.5" drives. About the only way to get them for cheap is to 
take them out of your old notebook when you get a new one, or if you put 
a bigger drive in your notebook.--I got mine when my old laptop stopped 
booting.


I found some decent USB 2.0 enclosures at CompUSA. They're by Norwood 
Micro, and look kind of like an iPod: white plastic with aluminum 
cooling fins. Also, unlike most of the other enclosures I've seen, they 
don't require a second USB power adaptor; the enclosure gets all its 
juice from the one USB line. I've been using mine for over a week now 
with good results. They cost about $25.




thanks!

-chris



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: METROCAST BLOCKS RESIDENTIAL E-MAIL

2006-03-08 Thread Jason Stephenson

Drew Van Zandt wrote:

Happens I know the newly-hired IT director for a new library in the
New England area... any pointers to info on libraries using Linux thin
clients etc. I can pass along to them?


It just so happens that by day I am the Assistant Director for 
Technology Services (*yawn*) for the Merrimack Valley Library 
Consortium, which is a consortium of 35 Massachusetts public libraries. 
(Aren't you impressed? Not!)


Some of our members (at least 2) are considering getting Linux clients 
from a Canadian company called Useful. They've got systems that can have 
up to 10 monitors, keyboards, and mice connected to a single PC, all 
being used by different people simultaneously. I imagine they've 
hardwired ptys to each video card/keyboard/mouse combo in the drivers. 
Yes, it runs X, and it comes with print management and timeout software 
(two things that most libraries want for public access computers).--I'm 
a little fuzzy on how much of that extra stuff is Free software.


They are going to be at the PLA (Public Library Association) convention 
this month in Boston. They're having a special demo. session with one of 
their customers during the show. (They invited me to come for a look, 
today.) Stop by their booth and I'm sure you can get the meeting 
details.--I'm not going, so I promptly free()'d that section of my brain 
when I hung the phone up.


As for what my Consortium uses, we have a mix of Fedora GNU/Linux, 
Solaris, and Winders computers in the server room at the central site in 
Andover. Desktops at central and at the member libraries are almost all 
Winders.


If anyone wishes to contact me about libraries and technology, feel free 
to email me at my work address: [EMAIL PROTECTED] You can pass that 
address on to your friend, Drew.


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Used Laptops (was Re: METROCAST BLOCKS RESIDENTIAL E-MAIL)

2006-03-08 Thread Jason Stephenson

Ted Roche wrote:

At Monday's CentraLUG meeting, Steve Amsden was showing off LTSP. He  
said the laptops he was using were for sale in bulk for $240 each.  Used 
beaters, and not cutting edge, but the prices are getting amazing!


Speaking of used laptops. My 6+ years old Compaq laptop stopped booting 
recently. After doing the usual perambulations and sacrifices, and it 
still not working, I yanked the hard drive and slapped that in a nice 
little USB case to carry about with me.


So, I'm in the market for an inexpensive laptop that works, and that 
would mostly work with Linux or FreeBSD.--If the crappy winmodem won't 
work, I won't care, so long as the hardware is still functional and it 
has working ethernet or a PCCARD slot for my ethernet card.


I'm wondering if anyone knows of good sources for working, used laptops.

Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: asset management tools?

2006-03-01 Thread Jason Stephenson

Paul Lussier wrote:



I've never found anything decent.  We're in the (very slow) process of
writing our own.  Part of the problem with an asset management system
is that it's a shame to stop there :)


Five years ago, give or take, when I was still a system administrator at 
 the University of Kentucky, we came to the same conclusion and built 
something not quite like what Paul described, but close enough. We made 
different technical choices but had something that was useful enough. I 
have no idea if that system is still in place. It was started before I 
got there, but I made a few enhancements and bug fixes. We included a 
crude trouble ticket system in it, so we could track events by machine.


This sort of thing is never finished because you can always find 
something to add and ways to improve the software, etc.




What we're doing is building what I called the Grand Unified Database
of Everything (I need a better name so I can have a snazzy acronym :)


I like your choice of name. It could be made into an acronym in two 
ways: either G.U.D.E. or GUDoE.


I like 'em both. The first makes me think of gouda, and I'd be inclined 
to call it cheese instead. Like good cheese, the product could improve 
or worsen with age. It would depend a lot on how it is handled and the 
environment in which it is stored.


The second acronym makes me think of Samuel Beckett's play, Waiting for 
Godot. Like Godot, the product may never arrive, at least not in a 
final state. Anyone waiting for it to be finished would be like the two 
characters in the play who wait for a Godot who never comes.


Well, I'm being bellowed at and must run.

Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: using make to convert images to thumbs

2006-02-26 Thread Jason Stephenson

Python wrote:

I have a client that laid out their images and thumbs into almost
parallel directory structures.
/img                /thumb
  /x                  /img
    /y                  /x
      /*.jpg              /y
                            /*.jpg

x and y are two digit directory names used to shorten the directory
scans to retrieve files.  I thought I could use a Makefile to automate
the creation of thumbs, but this pattern has me stumped.


How so? It is very straightforward. I'd put the Makefile in the /img 
directory or just above it. Your target to make the thumbnails would run 
whatever program makes the thumbnails with the appropriate options.


If you needed different target directories, you could specify them in 
the environment and on the command line. BSD make makes this trivial and 
I'm pretty sure that GNU Make works about the same.


Here's an example:

The Makefile:

## Makefile example starts here. ##
IMG_BASE = /img
THM_BASE = /thumb/img

IMG_PROC = /path/to/image/processor
IMG_PROC_OPTS = # default options for image processor

TARGET = # undefined. define on command line

thumbs:
if test ${TARGET} =  ; then
echo TARGET undefined
else
${IMG_PROC} ${IMG_PROC_OPTS} ${IMG_BASE}/${TARGET} \
${THM_BASE}/${TARGET}
fi
## Makefile example ends here. ##

Then, on the command line, you might run:

$ make thumbs TARGET=x/y

The above should work with BSD make, and I'm 90% certain that it will 
work with GNU make, as well. It might require a little massage to 
actually do anything useful.




I am going to just write a script to step through the source tree and
check the corresponding time stamp unless someone offers a suggestion.
(No they will not shuffle things around because they have lots of logic
in place.)


That would work if you have it run periodically and just make thumbs for 
new files, or files that have changed. My example above could be used to 
do entire subdirectories at once.


If you know much about make, there are ways to make the above fancier 
and more bulletproof. For instance, you might want to have IMG_PROC_OPTS 
be added to what is specified in the environment rather just being 
hardcoded in the Makefile or completely overridden by the command line 
environment. This would make it easier to have slightly different 
options on a given run.


Also, if you need to specify filenames or globs to the IMG_PROC, you can 
do that as part of your target. My assumption above is that the IMG_PROC 
can take directories as targets and processes everything in 
them.--Perhaps it is a script.


Along the same lines, a fancier solution would run a shell loop over the 
files in the TARGET directory, and process each one with IMG_PROC if you 
don't have a script to do everything in a directory in one go.
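That per-file loop might look like the following runnable sketch. The 
directory names are invented, and cp(1) stands in for the real image 
processor (with ImageMagick you would run convert with -thumbnail instead):

```shell
# stand-ins for /img and /thumb/img so the sketch runs anywhere
src=$(mktemp -d); thumb=$(mktemp -d)
mkdir -p "$src/ab/cd" "$thumb/ab/cd"
echo fake-jpeg-data > "$src/ab/cd/photo.jpg"

for f in "$src"/ab/cd/*.jpg ; do
    t="$thumb/ab/cd/${f##*/}"
    # regenerate only when the thumb is missing or older than the source
    if [ ! -e "$t" ] || [ "$f" -nt "$t" ] ; then
        cp "$f" "$t"    # real use: run the thumbnailer here
    fi
done
ls "$thumb/ab/cd"       # photo.jpg
```

Run periodically, the -nt test gives you the same incremental behavior 
make's timestamp checking would.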


HtH,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: using make to convert images to thumbs

2006-02-26 Thread Jason Stephenson

Jason Stephenson wrote:


## Makefile example starts here. ##
IMG_BASE = /img
THM_BASE = /thumb/img

IMG_PROC = /path/to/image/processor
IMG_PROC_OPTS = # default options for image processor

TARGET = # undefined. define on command line

thumbs:
if test ${TARGET} =  ; then
echo TARGET undefined
else
${IMG_PROC} ${IMG_PROC_OPTS} ${IMG_BASE}/${TARGET} \
${THM_BASE}/${TARGET}
fi
## Makefile example ends here. ##


There are errors in the above Makefile example. One that would actually 
run through make would look like this:


## Makefile example starts here. ##
IMG_BASE = /img
THM_BASE = /thumb/img

IMG_PROC = /path/to/image/processor
IMG_PROC_OPTS = # default options for image processor

TARGET = # undefined. define on command line

thumbs:
	if test "${TARGET}" = "" ; then \
		echo "TARGET undefined" ; \
	else \
		${IMG_PROC} ${IMG_PROC_OPTS} ${IMG_BASE}/${TARGET} \
			${THM_BASE}/${TARGET} ; \
	fi
## Makefile example ends here. ##

I noticed them after I sent the email. Unfortunately, I don't pay as 
much attention when typing code in an email as when actually doing 
something that I'm going to try to use. ;)

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Standard Unix utilities

2006-02-22 Thread Jason Stephenson

Tom Buskey wrote:


Solaris has rename(2) also.


Just in case someone is reading this and doesn't know the man pages, 
rename(2) is a system call and part of the C standard library:



RENAME(2)               FreeBSD System Calls Manual               RENAME(2)

NAME
     rename -- change the name of a file

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <stdio.h>

     int
     rename(const char *from, const char *to);

DESCRIPTION
     The rename() system call causes the link named from to be renamed as
     to.  If to exists, it is first removed.  Both from and to must be of
     the same type (that is, both directories or both non-directories),
     and must reside on the same file system.

     The rename() system call guarantees that if to already exists, an
     instance of to will always exist, even if the system should crash in
     the middle of the operation.



As Paul and others have pointed out, rename as a command appears to be 
part of Perl. Apparently, it isn't installed on FreeBSD when Perl 5.8.7 
is installed from ports with the standard options. I seem to recall 
seeing it long ago and far away, probably after installing Perl from 
source on a Solaris machine 5 years ago or so.


Anyway, I, too, have written a few utilities over the years. I wrote my 
own EOL converter that could handle \r, \n, and \r\n. It was smart 
enough to figure out what to do so long as you told it what to output. 
Now, I find perl -pi -e 's/\r//' filename works just about as well for 
what I usually needed the other one for.
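The same DOS-to-Unix conversion is a one-liner with tr, too. A quick 
sketch:

```shell
f=$(mktemp)
printf 'line one\r\nline two\r\n' > "$f"   # a DOS-style file: 20 bytes
tr -d '\r' < "$f" > "$f.unix"              # same job as perl -pi -e 's/\r//'
wc -c < "$f.unix"                          # 18 bytes: the two CRs are gone
```

Unlike the smarter converter described above, this only handles one 
direction (stripping carriage returns), but that covers the common case.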


Generally, I write a program for anything that I have to do more than 
once, particularly if it involves a lot of complicated steps.


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Standard Unix utilities

2006-02-21 Thread Jason Stephenson

Tom Buskey wrote:

Why didn't I see rename?  Aha. rename is a linuxism.  Or GNUism.  It's on
linux.  It's on Cygwin.  It's not on Solaris.  It's not on Irix.

I don't have a BSD system to test, but it might be on that.
I bet it's not on HP-UX or AIX.



[EMAIL PROTECTED]:~$ uname -sr
FreeBSD 6.0-RELEASE-p4
[EMAIL PROTECTED]:~$ whatis rename
VOP_RENAME(9) - rename a file
rename(2) - change the name of a file
ldapmodrdn(1) - LDAP rename entry tool

It's not available as a command on FreeBSD. Must be in the GNU fileutils 
collection.



This is why many of us have reinvented the wheel so many times and keep
things in our ~/bin :-(

You'll find the same things with unix2dos/dos2unix, pkill, killall, df
(different options!), anything that looks at disk partitions, etc.


Watch out for killall! On some OS permutations it does exactly what it 
says: it kills all running processes! IIRC, it has this behavior on HP-UX.




Thankfully the Unix world has converged a bit.  People with OSS systems
tended to put tools they needed into the standard distributions and they got
propagated.  The commercial OS vendors started to follow suit with perl, bash,
tcsh.  df in Solaris 10 finally has -h (does HP-UX 12.x have -k now?)


It has that option in Solaris 9, too. I'm not sure about 8, 'cause I 
never used 8. We went from 7 to 9.




I think we'll see further merging with OpenSolaris.


One of the first things that I often do when setting up a new Solaris 
machine is to install the GNU fileutils, gcc, and several other packages.


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: change file names

2006-02-21 Thread Jason Stephenson

Ben Scott wrote:

On 2/21/06, Jon maddog Hall [EMAIL PROTECTED] wrote:


This is why I really miss the printed man pages.



  Well, you can always do something like this:

cd /usr/shared/man/man1
for i in *.1 *.gz ; do j=$(basename $i .gz) ; j=$(basename $j .1)
; man -t $j | lpr ; done


If you have a postscript printer, then:

gzip -dc $i | groff -m mandoc -Tps | lpr

will get you prettier output. It's pretty easy to convert them to PDF, too.

I second the recommendation of not printing them all, though. I've never 
checked how many pages it would be, but I'm sure it would be a lot! ;)


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: LUA language

2006-02-07 Thread Jason Stephenson

Larry Cook wrote:

I have not used it myself, but I know that http://www.freepops.org is 
using it for modules and plugins.


Lua is also the scripting language in several F/LOSS 3-D game 
programming toolkits.


I have no personal experience with it, though it looks interesting.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: A rant about ZendPlatform

2006-02-05 Thread Jason Stephenson

Randy Edwards wrote:

   (Forgive me for stating the obvious. :-)
  Well, folks, I'm peeved. For a $1000 product, I expected *much* better
  than what I saw. I was shocked at all the stuff it added to my system, and
  fuming at the fact I had to undo all of their crap by hand.

   Isn't that the reason most of us are using free software -- to avoid the 
stupidity and corner-cutting that the entire business model of commercial 
software forces onto people?


I'll second that, and add a little something from my own personal 
experience:


The more you pay for software, the less it actually works.

I've found that most of the software systems that I've had to maintain 
that come with 6-digit (and higher) price tags are complete and utter 
crap. They require near constant attention from the admin. just to keep 
them up and running, and should some user be so bold as to make a 
mistake... Well, beyond that door lies madness.


There's a saying that I've heard about SAP: "We don't change SAP to suit 
your business; you change your business to suit SAP."


(No, I've never used SAP's software. I just heard that once from a 
consultant who made ridiculous amounts of money doing SAP installations 
and troubleshooting.)


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Emacs

2006-01-26 Thread Jason Stephenson

Tom Buskey wrote:


What version of emacs do you prefer?  GNU or X?


I started with GNU Emacs, and I currently use GNU Emacs. I tried X Emacs 
for a while, but found it a bit odd in places.

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Emacs (was Re: a question about evim)

2006-01-25 Thread Jason Stephenson

Paul Lussier wrote:

Jason Stephenson [EMAIL PROTECTED] writes:



Fred wrote:


...
I use emacs extensively, and get annoyed with the backup files
too. So I wrote a bash script I run peroidically to copy *all* the
backup files in the directory tree to /tmp.


All you need to get rid of the backup files in emacs is the following
line in your .emacs file:

(setq make-backup-files nil)

You can also do this interactively, but the keystrokes are rather arcane. ;)



How so?  M-: (setq make-backup-files nil)


I was thinking more along the lines of:

M-x set-variable\r
make-backup-files\r
nil\r

M-: is probably quicker--less typing of the return key.



Hmm, I wonder what the difference between (setq make-backup-files nil)
and (setq version-control never) is...

Ahh, I see.  version-control determines whether or not you get
multiple backup files with version numbering like foo.txt.~1~ and
foo.txt.~2~.  So having version-control set to nil means I can still
get foo.txt~, but won't get the version numbered files.  Setting
make-backup-files to nil means I don't even get the simple backup file.

And note that neither of these options affects auto-save, which in
most cases should *NOT* be disabled :)


Yes, but I also have auto-save disabled. I've not used auto-save in 
emacs in over 12 years!


As someone suggested earlier, I use revision control (cvs in my case) 
for all of my important files.


On a side note, I use vim when I have to or when it is more convenient 
than emacs, like when logging in remotely to a server or when making a 
quick configuration file change. I almost got bitten at work today with 
a difference between emacs and vim. In emacs, when you write your 
current buffer to a file with a different name than the original file 
(i.e. doing something like a Save As...), the buffer is now associated 
with the new file. In vim, apparently, this isn't the case, so when I 
wanted to make a new version of a shell program today I did the ESC :w 
newfilename sequence to write the file out before making my changes. 
Being used to emacs, I naturally assumed that a later :w would put the 
changes in the new file, but they went in the old one. Had I not noticed 
the filename when vim reported it was saving the file, I'd have wondered 
why the new program did exactly what the old one did! A cp and 2 mvs 
later, and it was all sorted out.


I also find myself automatically typing emacs key sequences in other 
text editors on occasion--not in vim, however!


Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: BSD User's group?

2006-01-15 Thread Jason Stephenson
Jeff Kinz, thanks for the tip on UTF-8 and grep. I'll give that a try on 
Tuesday.


I also thought of one other problem that I've encountered with FreeBSD. 
With two releases, 6.0 and one of the 4.x releases, the installer didn't 
work right on my laptop. It actually skipped a couple of the steps, like 
setting the root password, when doing a fresh installation and not an 
upgrade. However, on that same laptop, I could not get video to work 
with Fedora Core 4. I even tried the nofb kernel option, which I read 
online often helps.


I occasionally use this laptop, a 6-year-old Compaq Presario for testing 
things.


I need a new laptop anyway.

Cheers,
Jason
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: BSD User's group?

2006-01-14 Thread Jason Stephenson

Tom Buskey wrote:


SMP and journaling file systems.  BSD believes in FFS and doesn't think
journaling is the way to go.


Yes, but with UFS2 and soft updates one does not need a journaling 
filesystem. Don't ask me for the details right now, I'd have to look it 
up again. :) NOTE: I am NOT a kernel hacker, but I've read Marshall 
McKusick's books on the internals.


FreeBSD does SMP. The implementation used to be better than what Linux 
had at the time, then Linux improved with help from IBM and others. The 
FreeBSD SMP implementation was changed, I believe, for 5.x and possibly 
again for 6.0. I'm not sure on the SMP details, since I haven't touched 
a MP machine with FreeBSD or Linux in about 3 years. It ran FreeBSD 
4.something and Debian GNU/Linux (Sarge? Slink? Potato?) just fine. (We 
built two of them for testing purposes. They had Tyan Tiger motherboards 
and two Athlon processors. IIRC, the Athlons ran at 1.7 GHz each.)


Right now, I'd wager that Linux does SMP better than FreeBSD.



If you're running on non x386 hardware, OpenBSD and NetBSD will work as well
on them as on 386en.  Linux tends to focus on the PC with the possible
exception of PPC.  Yes, they run on other archs, but things will be missing
or not available.  The 386 vs the sparc vs the alpha vs the PPC version of
NetBSD/OpenBSD will work mostly the same.


Yes, NetBSD is perhaps the most ported and most portable software on the 
planet. I forget how many architectures they claim to run on at the 
moment, but it is more than any other OS/kernel. I have not used NetBSD 
much except for trying to repair an old Sparc box that had it installed 
at a previous job. The problem ended up being a dying hard disk. (What's 
new, right?)


Currently, I use OpenBSD on my firewall/router, FreeBSD on this 
workstation, my web and email server,  and on my laptop, and I have 1 PC 
that I built for gaming that has Windows XP installed.


It's not too hard to explain why I like the BSDs. I like their 
philosophy of design, and the fact that it is real UNIX. I really like 
the ports collection, and if installing from source isn't your thing, 
then packages are also available and usually as easy to install as 
typing pkg_add -r package_name.


With FreeBSD, I feel that I have more control over what is installed on 
my workstation and how it is configured than I do with most GNU/Linux 
distros.


I have used several GNU/Linux distros, btw. I first started with MkLinux 
on an old Power Mac in 1996. This was based on RedHat 4.2, IIRC. Then, 
in 1999, I built my current workstation and installed the then current 
Slackware 4.0 on it. I upgraded that over the 'Net to Slackware 7.


In 2000, I started working at the University of Kentucky's Engineering 
Computing Services. There, I got some more experience with Red Hat on 
some of the Mechanical Engineering workstations and Debian GNU/Linux on 
some of the application servers in the computing center. I also got my 
first exposure to FreeBSD there as it was used on the file servers and 
mail servers. My immediate boss, the Senior UNIX Admin., also used 
FreeBSD on his workstation. When I built my workstation for work there, 
I installed FreeBSD, too.


(I should add that we had HP-UX, Solaris, and IRIX workstations as well 
as servers that we also had to look after. It was a very heterogeneous 
environment and I learned a lot while there.--My first experience with 
UNIX was on an AIX server while an undergraduate at B.U.)


I came to like FreeBSD, and after just a couple of months, I converted 
my workstation at home to FreeBSD and haven't used any other OS on it since.


I originally installed Red Hat 7.something on my laptop, and upgraded it 
to Red Hat 9 before switching it to FreeBSD last summer.


At my current job with the Merrimack Valley Library Consortium, we have 
a mix of Windows, Solaris, and Red Hat Fedora Core servers that I'm 
responsible for keeping up and running. I actually chose Fedora for this 
environment because 1 server was already running an older Red Hat 
release when I got there, and so the other two guys in computer 
operations were somewhat familiar with it.


(On a side note, has anyone noticed that a system running Fedora Core 4 
feels slower than it did when it was running Red Hat 7.3? I blame 
changes in the kernel options, but haven't delved too deeply into it. 
Namely, it seems to take longer when grepping through files. The 
performance is acceptable for email, though I'm thinking of writing a 
program to convert the password database to the FreeBSD format so I can 
switch the server to FreeBSD.)


Anyway, I see I've not really given much insight into what is so great 
about FreeBSD. This has mostly been a rambling biography of my 
experience with UNIX and UNIX-like operating systems. I'm sorry for that 
and perhaps I could come and give a talk on the virtues of FreeBSD.--I 
mean to get to many of the meetings at Martha's but somehow never manage 
to get there. I think I've 

Re: BSD User's group?

2006-01-10 Thread Jason Stephenson

Martin Ekendahl wrote:
Does anyone know of any BSD user groups in NH or the greater Boston 
area? I've been a long-time user, but always get drawn back to BSD-based 
systems for some reason.


BLU, at http://www.blu.org/, might fit. However, it seems rather 
Linux-centric.


I hang out on this list (and not any from BLU) and I'm a big FreeBSD 
fan, and I'm also not from NH, but MA (at the moment, anyway).


I used to subscribe to several of the FreeBSD mailing lists, but 
unsubbed when moving to MA in 2002 and only resubbed to FreeBSD-announce.


If enough people are interested in starting something, I might be able 
to find the time to help out. I really shouldn't promise anything, though.




-Martin
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: Any Opinions on SuSE 10.0 vs other Distros

2005-12-24 Thread Jason Stephenson

Ben Scott wrote:

On 12/21/05, Michael ODonnell [EMAIL PROTECTED] wrote:


I don't mean to turn things into a Gnome-vs-KDE thing, but
I'm a KDE guy and options are good.  Pick your preference.


Awww, c'mon now fellers - make up your minds.
Is it Gnome vs. KDE, or KDE vs. Gnome...?  ;-)



  Neither.  FVWM rules!  ;-)



None of the above. Blackbox all the way! This is what I currently use.

If I had to choose between KDE and Gnome, I'd definitely choose KDE. 
However, I'm biased having been a KDE committer from '99 to '01. I 
*think* I still have CVS access but haven't done anything with it since 
my daughter was born.



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss


Re: any good light shareware/freeware to detect hardware configuration?

2005-12-16 Thread Jason Stephenson
The previous two suggestions are pretty solid, and I'll second the 
Knoppix suggestion.


However, my usual approach to used hardware (and I deal with a lot of 
it) is to open the case and take the parts out. I generally write down 
the serial and part numbers and maker names off of the bits and pieces 
and then start Googling for the stuff that I've not seen before.


Cheers,
Jason



Re: set default file permissions for a directory

2005-12-09 Thread Jason Stephenson
In your first post, you said that you can set the umask to 002. Have you 
tried that?


I'm pretty sure that even an scp connection logs the user in far enough 
that the shell environment is set up, so a umask set in .profile (or 
whatever file their shell sources) does take effect. At least it seems 
to in my experience.
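As a quick sanity check (a sketch with hypothetical scratch paths; `stat -c` assumes a GNU userland), you can confirm what a umask of 002 does to newly created files:

```shell
# Simulate what a umask set in .profile does to new files.
mkdir -p /tmp/umask-demo
cd /tmp/umask-demo

umask 002            # group-writable: files 664, directories 775
touch newfile
mkdir newdir

stat -c '%a %n' newfile newdir
# Prints:
# 664 newfile
# 775 newdir
```

If files uploaded via scp don't come out group-writable, the umask line is probably sitting in a file that the non-interactive shell doesn't source (e.g. .profile vs. .bashrc).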




Re: call to arms

2005-12-05 Thread Jason Stephenson

Bruce Dawson wrote:


OOo.org and XML are good steps in the right
direction because they allow quick and easy analysis of documents, and
provide structure to new documents. But screen readers aren't the
solution that's needed.


That is a key point! A FLOSS screen reader doesn't actually have to 
read the screen. It can read the document data directly if it's 
something open like XML with text data. (Stuffing binary into XML as 
MS does is generally considered a bad idea.)


If the technical aspects of that issue can effectively be hammered home 
so that even a legislator can understand it (Good luck, there!), then 
that's another strong advantage in favor of XML and open standards!






Re: where to buy a new system.

2005-11-29 Thread Jason Stephenson
I generally build my own for home and home business use or use used PCs 
that others are throwing out.


I buy my parts generally from MicroSeconds in Salem, the CompUSA in 
Salem, or from www.computergate.com


In doing price comparisons online, Computer Gate usually has the best 
price on name brand stuff. I've never had to return anything that I've 
bought from them, so I can't really speak to their customer service. I 
have been getting parts from them for 6 years now.





Re: SOHO Backups?

2005-11-17 Thread Jason Stephenson

John Abreau wrote:

Jason Stephenson wrote:

I have heard that you can burn a tar file raw to a CD-R and then treat 
it like a tape. I've never gotten that to work, so I assume this is an 
urban legend.



You can burn *any* file to a CD-R, assuming it's small enough to fit. 
The problem is that you then don't have a filesystem on the CD-R, so you 
can't mount the disk like a normal CD; you need to read it back from the 
raw device the same way you wrote it.





Yep, but I never got the untar bit to work, even with the same device 
name. Might try it again sometime.
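For what it's worth, the round trip can be rehearsed locally with an ordinary file standing in for the raw device (paths and the cdrecord line are hypothetical; only the tar/dd parts are meant to run as-is):

```shell
# Build some data and tar it straight into a "device" (here a plain file).
mkdir -p /tmp/rawdemo/data
echo hello > /tmp/rawdemo/data/note.txt
tar cf /tmp/rawdemo/fake-device -C /tmp/rawdemo data

# Burning the archive raw would look something like (untested sketch):
#   cdrecord dev=0,0,0 -data /tmp/rawdemo/fake-device

# Reading back: stream the raw bytes into tar, just as you would
# from /dev/acd0 or /dev/cdrom after burning.
dd if=/tmp/rawdemo/fake-device 2>/dev/null | tar tf -
```

One guess as to why the real-device case fails: the drive returns whole 2048-byte sectors, so the archive comes back padded past its true end, and some tar implementations choke on the trailing garbage.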




Re: SOHO Backups?

2005-11-16 Thread Jason Stephenson

Lawrence Tilly wrote:

What about the software side of the discussion.  Assuming a basic
backup strategy around either CD-R or DVD-R, what are your favorite
tools for scheduling and handling nightly ( incremental ) backups and
periodic full?


I've seen scripts for doing incremental backups to CD-R. If you try 
searching the FreeBSD.org mailing lists for cdrecord or some such, 
you'll find some that have been posted there.


Personally, I just make snapshot backups periodically. I generally burn 
to CD-RW, so I blank the disc and replace its contents.


[Snip]

For myself I am very happy and comfortable w/ command-line solutions. 
However, an intuitive GUI would help my significant other participate

as well.  At the same time, I would prefer if it could be kicked off
via cron.  Maybe something that's configured and can run restores via
GUI but executed via cron?  Maybe I'm asking too much?  :-)


I have a script that I wrote that basically just runs mkisofs with the 
options to create an ISO. It makes a directory structure using the 
graft-points option. I also include Rock Ridge and Joliet extensions. It 
gathers up the directories I want and ignores a couple. Whenever I need 
to change the directories that I'm backing up, I just modify the script. 
This is rather rare, though, because I have a pretty well-defined 
directory structure on my workstation. I mainly just back up my home 
directory, a CVS repository, and a /home/root where I keep copies of the 
configuration files from all my home servers.


I run cdrecord manually from the command line, but the cdrecord step 
could easily be added to the script.
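A minimal sketch of such a script (directory names and the volume label are hypothetical; it echoes the commands instead of running them so you can review before burning):

```shell
#!/bin/sh
# Snapshot backup sketch: build an ISO with Rock Ridge + Joliet
# extensions, grafting selected trees under top-level directories.
ISO=/tmp/backup.iso

MKISOFS="mkisofs -r -J -graft-points -V HOME_BACKUP -o $ISO \
    home/=/home/jason \
    cvs/=/home/cvs \
    configs/=/home/root"
BURN="cdrecord dev=0,0,0 -data $ISO"

# Echo instead of executing; drop the echoes once the commands look right.
echo "$MKISOFS"
echo "$BURN"
```

The `dir/=path` pairs are the graft points: each named tree lands under the given top-level directory in the resulting ISO.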


I find it's very easy to fire up rxvt, su to root, run my script, and 
then type in the cdrecord options.


I haven't put any of this in cron, but I'm sure it would work from 
there. I only make a backup once every week or two, or if I've been 
making heavy modifications to some of my projects.


As for restores, I've copied data from my backups a couple of times that 
I remember. All I do for restore in my situation is cp -Rp or some such. 
There's a complicated trick that you can do with cpio that's faster, but 
I can never remember the proper command line switches. ;)


I have heard that you can burn a tar file raw to a CD-R and then treat 
it like a tape. I've never gotten that to work, so I assume this is an 
urban legend.


At my day job, we use DAT for backup, but we have gigs and gigs to 
back up there. At home, I just have about 350 MB of stuff that I 
regularly back up, so CD-RW works pretty well for me.


I have a few CD-Rs hanging around of particularly dated snapshots. I 
made them when I thought I was doing something that might cause me to 
lose files that I might later want to retrieve. They're on a variety of 
media, and so far I've been able to read them all, and some are now 
four+ years old. Granted, I don't look at these very often, but I did go 
through them all this past August just to see what I had on them.


HTH,
Jason


Re: SOHO Backups?

2005-11-15 Thread Jason Stephenson

Jim Kuzdrall wrote:

However, I hear CD-ROM is unreliable even over 12 months, so that's
out.



I have heard this too.  Does anyone know the physical mechanism 
responsible for the deterioration?  I seem to associate the tale with a 
study at a library, and the CDROMs being scratched by handling.  If 
that is the mechanism, it is irrelevant for SOHO.


Exposure to light and excessive heat can deteriorate the dye layer on 
CD-R and CD-RW media.


I've been using CD-RW and CD-R media for backups of my data for the past 
five years and have had no real issues. I have had some cheap CD-R discs 
that either don't burn (cdrecord reports media errors) or that have had 
read errors after sitting about for 2 to 3 years.


I have not had any problems with CD-RW media of any sort, but I also 
don't buy cheap CD-RWs. I've actually rewritten the same disc 
consistently for 4 years. This disc actually outlived my first burner!


I find that there is a huge difference in longevity between different 
manufacturers and at different price points. The cheap stacks of no-name 
CD-Rs that you get for 10 cents apiece don't last nearly as long, nor are 
they generally as reliable, as the discs that cost close to $1.00 each. I 
also tend to buy only Fuji Film or Taiyo Yuden discs for my serious 
tasks. Though I have to admit I've had some pretty good luck with a 
cheap stack of Ritek-made discs with the AT&T brand on them.


I have used these backups, so they have had real world testing. I've 
used them to copy my important data onto a new hard drive when I last 
upgraded my system, and when switching from GNU/Linux to FreeBSD.


I generally back up everything, overwriting what's on my disc each time. I 
often make two copies, just in case, though I generally keep them both 
near the computer, so if there's a fire, I've lost everything. I mainly 
make them for the more likely event of hardware failure.


If you're just backing up a few hundred megs of data, then CD-RW/CD-R 
media is fine. A couple gigs and I'd look into backing up on DVD±R/RW.




Re: Hardware FAQs and the wiki? [ was Alternative to k3b? ]

2005-11-07 Thread Jason Stephenson

Tom Buskey wrote:


I've used a Matrox G450 to drive dual CRTs.


I'll second the Matrox G450 recommendation. I've used 'em for the exact 
same purpose back before they were inexpensive. Ran it on a triple boot 
system with Debian GNU/Linux, FreeBSD 4.x, and Windows 2000 Pro.


Worked great in all environments with dual monitor support.


Re: Attempt at cgi mail exploit

2005-08-31 Thread Jason Stephenson
The funny thing, to me,  is that I see stuff like this in my mail logs 
all the time, both at my day job and at home:


2005-08-30 00:20:36 SMTP protocol violation: synchronization error 
(input sent without waiting for greeting): rejected connection from 
H=[81.12.246.11] input=POST / HTTP/1.0\r\nContent-Type: 
text/plain\r\nContent-Length: 833\r\n\r\nRSET\r\nHELO sightz.com\r\nMAIL 
FROM:[EMAIL PROTECTED]\r\nRCPT TO:[EMAIL PROTECTED]


(I changed the email addresses to protect the [not so] innocent.)

Apparently, someone learned to program HTTP and figures everything is a 
web server. Not-so-clever hackers. (And, yes, that is coming in on port 
25.)


BTW, trying to exploit cgi mail programs is an old trick. I've seen 
failed attempts at posting to common cgi mail programs on my server for 
ages. What's funny is that I use my own, custom contact form and cgi 
(written in C, no less). It only sends email to me, and it requires that 
all fields be filled out. The reason it's funny is that I've taken the 
name of a common cgi mail program, swapped the first and second 
syllables of the name, and removed the file extension (which is 
meaningless on *NIX anyway). Of course, no one has ever used it to send 
me mail, except for myself during testing. :(




Re: Annoying question: Installing W2K or WXP on a Linux-only box

2005-08-09 Thread Jason Stephenson

Try this:

htt://www.sigio.com/articles/win2k.html

It was how I got Windows 2000 to install and boot on a computer with 
FreeBSD 4.x on the main disk.


You can pretty much ignore the FreeBSD-specific stuff in there.

Also, feel free to ask if you have any specific questions related to it.

Cheers,
jason


Re: Annoying question: Installing W2K or WXP on a Linux-only box

2005-08-09 Thread Jason Stephenson

I wrote in error:

Try this:

htt://www.sigio.com/articles/win2k.html


Yeearrggg! Left a p outta there!

http://www.sigio.com/articles/win2k.html

The key is to disconnect the first hard drive; hook the second hard 
drive up as the first; then install Windows. Windows wants to be on the 
first disk and will only install its boot loader on the first drive.

After you get it faked out, you can easily use grub to make it boot.
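With GRUB legacy, the corresponding menu.lst entry looks roughly like this (disk numbers are assumptions; adjust for your layout). The map lines swap the BIOS drive order so Windows, physically back on the second disk, still believes it is booting from the first:

```
title Windows 2000
    map (hd0) (hd1)
    map (hd1) (hd0)
    rootnoverify (hd1,0)
    makeactive
    chainloader +1
```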


Re: Java on Debian (wow that was easy!)

2005-07-22 Thread Jason Stephenson

For some real fun, try installing Java from the source!

Warning: This is not for the easily frustrated, and it probably helps if 
you have a masochistic streak!





Re: Sometimes, I think Zawinski just might be right.

2005-06-20 Thread Jason Stephenson

Paul Lussier wrote:
[...]

I'm more inclined to blame AbiWord...  It sounds like it's generating
bad postscript.  Especially since it sounds like you can't even export
to PS and open it with a ps application (incidently, did you try using
gv or ghostview rather than xpdf?).


I'm inclined to blame AbiWord, too, now. I'm not the only one to have a 
problem with AbiWord's PostScript output.


http://www.abiword.org/twiki/bin/view/Abiword/FaqPostscriptOutputProblems





Now, the question is, do I go to the bother of installing Open Office
just because I occasionally need to print some word processed
document? Or, do I just redo everything in TeX?



Depends upon what you want?  If you know TeX pretty well, it may well
be a lot faster.  I find OO to be a pain when you need to do any
heavy-duty formatting.  I keep my resume in text and in PDF.  I use
LyX to create the PDF from the LaTeX which LyX exports.  I tried
migrating to OO, and it didn't pass my 30-min. test (after 30 min. of
fighting with OO, I still didn't have a decent layout of my resume!).


I know TeX well enough. I reformatted the letters that I was working on 
yesterday and got more than satisfactory output. They printed with no 
trouble, too. Think I'll stick to TeX from now on.




Of course, LyX isn't without it's frustrations either.  It's often
difficult to change a default template without knowing how to hack the
LaTeX code...


Y'know, I actually tried LyX about 6 or 7 years ago, before I tried 
AbiWord. I should probably give it another go. It was a nifty little 
program back then, and I imagine it has improved. It had some rough 
edges as I recall.


I was actually pondering writing my own TeX editor or giving LyX another 
try even before reading your message. (The world really doesn't need 
another text editor or word processor, does it?)






Right now, I'm inclined to ask, along with Jamie Zawinski, What year
is this? Why is this stuff such a pain, and do I really want to put
up with this?



One solution is to not print. I find I waste significantly less paper
and ink that way :)


I really don't print that much personally. At home, I mostly print stuff 
for my wife by plugging her iBook into my LaserJet via USB. The printer 
is connected via parallel port to my workstation, and I mostly just send 
it PostScript or plain text on the rare occasion that I print something 
for myself. I actually had a working Samba config for it, too, and could 
print from my Windows gaming PC before that PC bit the dust. That, 
however, is another story.


Well, I guess I'm off to reformat my resume in TeX!

Cheers!


Re: cd burning question

2005-06-20 Thread Jason Stephenson
It has been a while since I last used cdrecord, but FWIW, I don't see 
any errors in the output that you provided. It looks like a clean burn.


I've been having some issues on a different OS with different burning 
software using CD-RW discs. The odd thing is that if I mount the iso and 
the burned CD, all the files appear to be there and they have the same 
size...


It's hit or miss, too, depending on the disc and the amount of data 
being written.


I think FreeBSD's burncd command may have a problem with the rings that 
keep getting bigger on CD-RWs as you blank and rewrite them.


Oh well, I guess that wasn't very helpful. ;)



Re: HD partitions?

2005-06-17 Thread Jason Stephenson

Here's what things look like on my FreeBSD system:

[EMAIL PROTECTED]:~$ df -h
Filesystem               Size  Used  Avail  Capacity  Mounted on
/dev/ad0s1a              248M   56M   171M    25%     /
devfs                    1.0K  1.0K     0B   100%     /dev
/dev/ad4s1e               73G   17G    51G    25%     /home
/dev/ad0s1e              248M  160K   228M     0%     /tmp
/dev/ad0s1f              6.6G  4.6G   1.5G    75%     /usr
/dev/ad0s1d              248M   13M   215M     5%     /var
diderot:/usr/home/www     17G  1.2G    14G     8%     /mnt/www
diderot:/usr/home/jason   17G  1.2G    14G     8%     /mnt/jason
linprocfs                4.0K  4.0K     0B   100%     /usr/compat/linux/proc
portal:6465              1.0K  1.0K     0B   100%     /p

My system is composed of two ATA hard disks: the system disk is 10 GB, 
and I have an 80 GB disk mounted as /home. You'll also see two 
NFS-mounted filesystems on there and a couple of extras: linprocfs for 
Linux binary compatibility, the portalfs for network portals, and devfs 
for dynamic device creation.


There is also a 2 GB swap partition on ad0, the system disk. My system 
has 1 GB of RAM, and this is the recommended amount. I have to say, 
though, that my system rarely uses more than a couple KB of swap, and 
that's even when doing something outrageous, like compiling the JDK 1.5 
from sources, which I did this past Tuesday.


To translate the above into something Linux-specific:

Filesystem    Size  Used  Avail  Capacity  Mounted on
/dev/hda1     248M   56M   171M    25%     /
devfs         1.0K  1.0K     0B   100%     /dev
/dev/hdb1      73G   17G    51G    25%     /home
/dev/hda2     248M  160K   228M     0%     /tmp
/dev/hda3     6.6G  4.6G   1.5G    75%     /usr
/dev/hda4     248M   13M   215M     5%     /var


On a 200-GB disk, I'd recommend something like the following:

swap 2x RAM
/ 256MB
/usr 10GB
/tmp 256MB
/var 512MB
/home - the rest.

However, the above can easily change depending upon which distro you 
choose to install. I'd recommend the above for most distros: Debian, 
Slackware, or just about any of the BSD OSes. Red Hat likes to make a 
/boot partition and then put everything else in one big partition. I 
don't know what SuSE would try to do, 'cause I've never used it.


If you find yourself wanting disk space for a web server or database, 
you can always configure the program to store its data under /home. I do 
this on my web server, with Apache's root set to /home/www and the MySQL 
data files stored in /home/mysql.
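The relevant settings would be roughly these (a sketch; exact directive names can vary between Apache and MySQL versions):

```
# httpd.conf
DocumentRoot "/home/www"

# my.cnf
[mysqld]
datadir = /home/mysql
```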



Re: Linux vs. BSD?

2005-06-17 Thread Jason Stephenson

Theo is a nut. There, I said it. I got that off my chest. ;)

That said, I do use OpenBSD for my homebrew routers, and he makes a few 
good points.


I have used and continue to use in some capacity FreeBSD, OpenBSD, and 
various flavors of GNU/Linux: Red Hat 6.x, 7.x, 8.0, 9.0, FC3, RHEL; 
Slackware 4.0, 7.0, 9.1; Debian, several releases and even a machine 
that had a mix of packages from 2 releases. I've also administered many 
commercial Unix systems: HP-UX, Solaris 7 through 9, IRIX, and had the 
pleasure of briefly working with (against?) a system running a very 
old release of OSF/1.


At home, I have a laptop with Red Hat 8 on it, my workstation with 
FreeBSD 5.4, a web and email server with FreeBSD 5.4, and my internet 
router/firewall with OpenBSD.--I'm thinking about installing FreeBSD on 
the laptop before going on a trip this summer, too.


In my opinion, there are places where the BSDs, and FreeBSD in 
particular, are technically superior to Linux. There are also places, 
such as support for NUMA and very-large-scale SMP, where Linux is 
superior to the BSDs. I can tell you there are situations where I'd 
choose Linux and others where I'd choose one of the BSDs. The criteria 
vary with each situation, and I'm not going to draw a hard line like Theo.


For my personal use, I prefer FreeBSD. As a programmer and 
administrator, I like the fact that configuration is done through a 
well-organized set of text files.


I also tend to install most packages from source rather than from binary 
package files. The ports system on FreeBSD (and the other BSDs as well) 
makes this very easy. I don't usually inspect the source code for every 
package that I install, but I do always look through the ports Makefiles 
for configurable options. In the case of some packages, I'll even go so 
far as to modify the Makefiles to install stuff in an alternate 
location, or to change the default choices. In many cases, you don't 
have to do this, but in some, you do.


For my current hardware and situation, it's largely a question of style. 
I know that GNU/Linux would perform adequately on all of my current 
systems at home. I prefer the BSD way of doing things for my own computers.


At work, we have a couple Solaris boxes and the other 'nix systems are 
running GNU/Linux. I decided that GNU/Linux would be the better choice 
for that environment given the technical abilities of the other staff 
and the fact that the mail server was already running GNU/Linux when I 
got there.


Apart from a couple of very specific problem domains, the main reason 
that I see for going with one OS over another is largely a question of 
style, and how you work.


Cheers,
Jason


Re: ps/pdf to text converter

2005-06-17 Thread Jason Stephenson

Charles Farinella wrote:

Hi all,

Does anyone know of or have experience with a tool to turn either ps or
pdf documents into text?

--charlie



I have AFPL ghostscript 8.51 installed and it has 2 utilities that might 
help you:


ps2ascii
pdftops

The first converts PS to ASCII text; the second converts PDF to PS. I 
think the first might also handle PDF.


I've not used them. I found them by doing man -k pdf

HTH,
Jason


Re: [Pedantic RANT] Re: [HUMOR] End Times

2005-06-08 Thread Jason Stephenson

I *SO* want to invoke Godwin's Law on this thread.




Re: Trade show banner

2005-05-24 Thread Jason Stephenson

Bill McGonigle wrote:


[insert catchier suggestions here]


Code Free or Die!

I like this one because it works on a couple of levels if you stop to 
mull it over.



Re: Question about spamassassin using MySQL

2005-04-25 Thread Jason Stephenson
Benjamin Scott wrote:
On Apr 25 at 3:13pm, Bruce Dawson wrote:
Steven: Thanks for the clarification. I was under the impression that 
the milter is called only after the message had been received.

Obviously, in order to do content analysis or other magic on a 
message, you have to receive the content.  As I understand it, what 
these tools do is allow the SMTP DATA verb to be sent, and to receive 
some or all of the data from the sender.  Then, before the SMTP result 
code 250 (Message accepted for delivery) code is sent, the filter runs 
and makes a decision.  If the message fails, an SMTP error status code 
is sent instead.
Yes, that is pretty much how spamass-milter and exim with exiscan-acls 
work.

This is fine as long as your mail volume is reasonably low.  As mail 
volumes increase, however, it becomes impractical to do this all in 
real time on your MX.
We had serious problems at my day job using spamass-milter. Dunno if the 
problem was with our version of sendmail being buggy or what. (There are 
known stability problems with spamass-milter and certain versions of 
sendmail.) Sendmail would lock up, spamassassin would die, and 
occasionally the swap manager would start thrashing. Sometimes a 
shutdown -r was the quickest way to fix the mess. This tended to happen 
when either we were at our busiest part of the day (between 9:00 and 
10:00 a.m.) or when processing a message with a large attachment, large 
being a variable value depending on circumstances.

Switching to exim and adding RAM to the system really helped. The 
computer now has 1 GB of RAM (instead of 512 MB), and processes 500MB+ 
of mail per week, over 30,000 messages. I'd say that's a medium-sized 
installation.

(I should add that I switched to exim only after adding RAM and still 
having problems with spamass-milter. I'm not saying that it will do this 
in all installations, but it certainly did so in ours.)

So, if anyone is interested in exim ACLs to use spamassassin, as well as 
clamav, drop me a line. I've got a setup that works well for 600+ users.

Before I send this message, I'm going to pop back in here and add 
something that I think is appropriate.

I've used spamassassin and exim in 3 environments, now. With several 
different set-ups. What I've found is that spamassassin really shines 
when each user on the system has their own bayes db, preferences, and 
auto-whitelist. This is almost always the case in the procmail 
environment. Spamassassin was, I believe, designed to be used this way, 
and it is very, very accurate when processing a single user's mail.

When spamassassin is used on a system level, as is more often the case 
when it is run from an ACL or milter, it maintains 1 spam db, 1 set of 
preferences, and 1 AWL for all the users on the system. This is because 
spamd normally runs as some user (nobody in my case) on the mail server, 
and the MTA communicates with spamassassin via spamd.

In the case of my current day job, I have to run it this way, because 
the customers of my MTA (600+ librarians, a few of whom hate computers 
as a distraction from their real job) don't have the ability to 
maintain their own bayes db, preferences, and whitelist. If it's not as 
simple as point and click, they won't do it, and why should they? It 
takes time to process all that spam every day.

Don't forget, too, that you should run the false positives back through 
the system so that spamassassin can adjust itself to be more accurate.

That's why I have mixed feelings about sites that do unilateral 
blocking based on blacklists.  Many of these systems find 75% of their 
mail volume is bogus (spam, worms, phishing, and backscatter).  They get 
faced with the proposition of lowering the load on their systems by 90% 
at the cost of 5% of their legitimate mail.  If you're an ISP trying to 
get by on paper thin margins, that might be considered acceptable 
losses.  Of course, that's cold comfort to those who (like me) *are* 
the acceptable losses.  :(

Spam sucks.
Yep, it does.
With the default spamassassin setup, the RBLs only add to the score of 
a message, so it's not as bad as rejecting outright. I know many sites 
also configure to reject any connections from IPs on certain RBLs.

I don't use any third-party RBLs for out right blocking, but allow 
spamassassin to adjust a message's score. I do, however, maintain my own 
list of known bad actor IPs that I refuse connections from. Keeping 
track of this list is a bit tedious, but a couple of shell scripts help. 
Generally, you have to send us spam that makes it through the filter to 
end up on this list.
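In exim, one way to do that outright refusal (a sketch; the addresses shown are documentation placeholders) is the main-section option host_reject_connection:

```
# exim main configuration: refuse SMTP connections from known bad actors
host_reject_connection = 192.0.2.13 : 203.0.113.0/24
```

A longer list is usually kept in an external file and pulled in with exim's host-list lookup syntax rather than written inline.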

I know it isn't perfect because of collateral damage, and blocking by IP 
is practically pointless since IPs in dynamic blocks do actually change. 
I've considered removing the list for a week to see how it affects the 
system, and possibly removing it completely if the amount of spam 
getting through to my end users unfiltered doesn't appreciably increase.

Re: free software alternative to Access

2005-04-18 Thread Jason Stephenson
Jeff Smith wrote:
I should have used scalable instead of robust.  Ideally,the
Access design tools would have been designed separately
from the db engine. You plug in the db of your choice on
the back  end.  Alas, I'm told you can do that, but I
haven't met anyone who a) has done it, or b) can show /
explain how to do it.
I've done it. It's relatively easy with ODBC. I've had an Access front 
end application that I created talk to a MS SQL Server 2000 database. 
You should be able to hook your Access front end up to any database 
server provided that you have an ODBC driver for your database. I know 
that they are available for MySQL, and most likely Postgres as well. 
I've not used MySQL with Access but have used ODBC to connect to a MySQL 
server running on Linux from a Windows computer using a custom front end.

If you'd like some help with this, you can contact me off list.
Cheers,
Jason


Re: Tracking Internet Computers

2005-03-04 Thread Jason Stephenson
That sounds interesting, but I wonder if regularly setting your 
computer's clock via ntpd or ntpdate would defeat this.

Having a sysctl MIB to turn off the TCP timestamping would certainly 
defeat this. I'll wager that if such doesn't already exist in the 
various BSD and Linux kernels, it will before too long.





Re: Tracking Internet Computers

2005-03-04 Thread Jason Stephenson
Just answered my own question. On FreeBSD, there is a sysctl MIB to 
activate or deactivate RFC 1323 timestamping:

net.inet.tcp.rfc1323
Setting it to 1, the default, turns it on; setting it to 0 turns it off.
Looking at my OpenBSD machine, I see the same sysctl MIB as FreeBSD.
Finally, checking my laptop with Linux kernel 2.4.20, I find 
net.ipv4.tcp_timestamps, which likely has the same effect.

So, if you're worried about the technique being used on you, set those 
sysctl MIBs to 0, and don't worry about it. ;)
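To make the change persistent (a sketch; file locations vary slightly by system):

```
# FreeBSD / OpenBSD: /etc/sysctl.conf
net.inet.tcp.rfc1323=0

# Linux: /etc/sysctl.conf
net.ipv4.tcp_timestamps=0
```

Note that on the BSDs this MIB governs the RFC 1323 extensions as a whole, so turning it off also disables window scaling, which can hurt throughput on fast links.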




Re: SLUG meeting tonight 7pm UNH Morse Hall 301 Topic: Video Processing

2005-02-14 Thread Jason Stephenson
Bruce Dawson wrote:
It goes to gnhlug-announce@mail.gnhlug.org (well, that's the official
address). And (!) gnhlug-discuss is subscribed to that list, so it also
gets everything sent to the announce list.
Though I know the above is the official answer, I'll confirm that I 
received two copies of the notice, one addressed to gnhlug-announce and 
the other for gnhlug-discuss. Checking the headers reveals that -discuss 
appears to be subscribed to -announce.

:)


Re: Debian flamewar

2005-02-09 Thread Jason Stephenson
Heh. I find this discussion mildly interesting from where I sit, a 
mostly xBSD user.

It's funny, too, 'cause I didn't start using FreeBSD for my workstations 
and personal servers until I worked in a data center environment with 
mixed UNIX systems. At the University of Kentucky's College of 
Engineering Computing Center, we had multiple systems running HP-UX, 
Solaris, IRIX, Red Hat GNU/Linux, Debian GNU/Linux, FreeBSD, and even 1 
web server with OpenBSD for a certain network/clustering research group 
that some on this list have probably heard of. Also, the two of us in 
the UNIX Support Group were responsible for several dozen other UNIX 
and Linux workstations spread throughout the college and not just the 
servers in the back room.

It was largely the issues of package management that made me decide to 
go with FreeBSD and OpenBSD for my computers at home, instead of various 
flavors of GNU/Linux. I found the ports collections to be the solution 
to deb and rpm hells. In almost every case, I cd into the program's 
ports directory, type 'make install' and the software builds, installs, 
and then runs with no hitches. There's very little to package manage and 
that's how it should be, IMHO.

I experienced all kinds of problems on the Linux machines, mainly 
because we were a research institution and the profs would need some 
bizarre hardware combination that wouldn't quite work with the default 
packages from the various releases. It became a nightmare trying to 
maintain a machine loading packages from 2 different Debian releases, or 
trying to install binary RPMs on some of the RedHat machines with 
different kernels, etc.

It soon became routine for my colleague and me to install nearly 
everything from source code on certain machines because we knew that the 
packages would not work.

I have found that installing from source, and knowing what different gcc 
and make errors mean, is the only way to get software that will run on 
your machine 100% with no faults other than their own bugs or bugs in 
the libraries they link to. It can be time consuming to maintain a 
machine with source-compiled binaries, but I haven't found any update 
routine that's simpler or more sure-fire than:

   cvsup -g -L2 /home/root/sup/FreeBSD-ports-supfile
   pkg_delete -a
   cd /usr/ports/[pkg-cat]/[pkg-name]
   make install clean

The last two lines are repeated for each package you wish to install.

I can't count the number of times I've had

   apt-get update
   apt-get install

fail, and for a different reason each time.
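The FreeBSD routine above can be wrapped in a small script. A dry-run sketch (it only collects and prints the commands rather than executing them; the supfile path and package names are placeholders, not anything from my actual systems):

```shell
# Dry-run sketch of the ports-rebuild routine described above.
# SUPFILE and PKGS are placeholders -- substitute your own.
SUPFILE=/home/root/sup/FreeBSD-ports-supfile
PKGS="editors/vim mail/procmail"

CMDS="cvsup -g -L2 $SUPFILE
pkg_delete -a"

# Append one build line per package to reinstall from ports.
for p in $PKGS; do
    CMDS="$CMDS
cd /usr/ports/$p && make install clean"
done

printf '%s\n' "$CMDS"
```

Drop the printf and run the commands directly once the list looks right.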
I've also dealt with RPM-hell of having to install RPM after RPM, just 
to get the one thing that I want to use installed. I've also had 
installs fail on some machine, again because of custom packages needed 
for one reason or another.

I can tell you that I've only ever had two packages fail make install in 
FreeBSD ports because one time I updated right after a change to one 
package, and the second time because the package maintainer had written 
his makefile to check for the headers from a specific version of the 
mysql libs and I'd installed a more recent one. In both cases, the fixes 
were simple changes to the makefile that I sent off to the package 
maintainers. One was incorporated into the makefile, and the other 
maintainer wrote back thanking me, but that he'd already updated the 
repository with a better change.

I've not ever had a problem with ports on OpenBSD, but I've used it much 
less.

Oh, and don't get me started on the package management in IRIX, HP-UX, 
and Solaris. They have varying degrees of success and failure, and you 
often have to go to third party sources on the Internet or resort to 
installing things from source for the useful stuff.

Anyway, to me, it isn't software without source code, and I do prefer to 
install from source because then you know it works with the libraries 
installed on your system because it was compiled against those very 
libraries. You don't need to worry about binary compatibility issues 
between different library and kernel versions busting your binary 
packages. Yeah, you could end up with something that doesn't compile, 
but at least then, you can see what the problem is and fix it yourself.

Well, gee, that was longer than I anticipated.


Re: HP/Compaq Presario and Linux

2005-01-29 Thread Jason Stephenson
Travis Roy wrote:
I'll second this. In fact, every Compaq that I've ever seen does 
this. It's usually the last partition on the disk, is roughly 32MB 
in size and generally of a type not recognized by Linux fdisk or fips.

It was actually the first partition and it was 5 GIGS in size.
Guess it varies by model. :) Now that you mention it, I believe the 
partition on my laptop is about 2GB in size, but I'm certain that it is 
partition 4. IIRC the partition was 32MB on a Presario desktop from '95 
that I used until two years ago as a gateway/firewall. It ran OpenBSD 
like a champ.

You say that like new computers come with rescue CDs... This Compaq did not. There is a utility to 
create them to CD-R(W) media, but you're only allowed to do it once.
That's one of the reasons I stopped buying named brand computers several 
years ago. I generally build my own, so I don't have to worry about such 
things. I know, not everyone has that luxury and it isn't helpful in 
answering the question.

I did manage to finally get FreeBSD working. What I ended up having to do was repartition the drive, 
let it recover to partition 2 (the next one after the rescue partition). It left the other two new partitions 
alone.

Linux CDs still would not boot correctly, still no idea why. The only ones I could get were a live 
rescue CD (not Knoppix). Anything else would just cause the system to reboot.
It's very likely that HP/Compaq have changed a few things. ;) I have not 
had any real issues with my Compaq laptop running either FreeBSD or 
Linux, or even Win98SE that came with it.

Y'know, there might be a setting in the BIOS that turns that silly check 
off. Have you ever seen the Compaq BIOS configuration utility? Nice and 
graphical and all. Gives me the willies. I'd check my laptop for you, 
but when I installed a 20GB drive on it after I got Linux running, I 
decided not to put the BIOS partition on there. Still works, I just 
can't change the BIOS and it won't sleep properly. Seeing how little I 
use it these days, it doesn't really matter.



Re: HP/Compaq Presario and Linux

2005-01-29 Thread Jason Stephenson
Jerry Feldman wrote:
Most PC vendors today place a hidden partition from which you can
reinstall or repair the OS. The reason for this is that they do not have
to provide you with an installation CD. 
Right. I'm aware of that. It's another reason why I don't buy named 
brand PCs. They advertise an 80 GB disk, which they're measuring in base 
10, so really it's only a 72 GB (or so) disk, then they put some 5 to 8 
GB partition on the disk with the install media, so really you're 
getting a 64 GB disk. It's all there if you bother to read the fine 
print. Personally, I'd rather have the install media on a bootable DVD 
and have the full use of my disk, thanks.

In general, I would probably buy a Dell or HP/Compaq depending on the
price and features, although I might also buy a system with Linux
preinstalled. 
I don't think Dell does the above, at least not on the Optiplex and 
PowerEdge lines.

Frankly, if I were buying a PC from a name brand maker, and I was 
concerned about support, I would buy nothing but Dell or Apple. Where I 
work, we buy Dell and Sun hardware, and I have pretty much the final say 
on what we buy. Since my boss is concerned about support, we buy the 
Optiplex machines for the 3 year warranty, etc. If I had the time at my 
current job, I'd actually assemble our own workstations, but I'm busy 
enough as it is. :)

That said, I've not really had to deal with tech support at any company, 
much. I generally fix the problems myself before I get to the point of 
calling. So, we're wasting money on support, but it makes my boss happy 
to think we have it, in case we need it. ;)

I haven't dealt with Dell's tech support in over three years, and when I 
did I found it acceptable. Compaq, I've never called them. Gateway, I've 
heard is abysmal, but judging from the overall quality of their 
machines, it's not surprising. ;)

Now, I have dealt with tech support from a couple of motherboard 
manufacturers recently and got a satisfactory response every time.


Re: HP/Compaq Presario and Linux

2005-01-29 Thread Jason Stephenson
Jon maddog Hall wrote:
It is not just that they want to save the cost of the CD, but some companies
are ordering systems without CD drives because they want a thin client on the
desktop...something without floppies and/or a CD...something that could
boot over the network and be diskless or be (re)installed standalone with only
a hard disk.
All data comes through the corporate server.
They view floppies, CDs and USB Flash devices as just another way of having
games and viruses introduced into the work environment.
Yep, I know all about that paranoia. I work for a consortium of 
Massachusetts public libraries.

I've long thought that "thin client" is just market-speak for "rip-off." 
In other words, you pay more for less computer.

Yeah, there are places where they work, but there are places where they 
don't.

I prefer to assemble my own gear, 'cause then, when something breaks, I 
generally know what broke and what it'll take to fix/replace it. I know 
that's not an option for everybody, but if you're getting into GNU/Linux 
or similar territory, then you can't rely on the big vendors to hold 
your hand. That's what lists like this are for.

I realize that the OP is in a situation where he doesn't have control 
over the hardware that was acquired. That's an unfortunate situation to 
be in and I face it occasionally at my current job and used to face it 
constantly at previous employers. In this particular case, I'd recommend 
installing Cygwin and forget about dual booting. Pretty much anything 
you'd want to do for network monitoring on GNU/Linux you can do with 
Cygwin, unless you want to turn the box into a packet shaping firewall, 
which I don't gather the OP wants to do.


Re: HP/Compaq Presario and Linux

2005-01-28 Thread Jason Stephenson
Benjamin Scott wrote:
  Keep in mind that many Compaq's keep the BIOS setup program on disk,
where just about everything else keeps it in firmware.  That means that if
you blow away the utility partition, you can no longer do anything useful to
configure the BIOS.
I'll second this. In fact, every Compaq that I've ever seen does this. 
It's usually the last partition on the disk, is roughly 32MB in size and 
generally of a type not recognized by Linux fdisk or fips.

If you do manage to remove this partition, you can get it back by 
running the recovery CD that came with your Compaq. Then, you'll want to 
repartition the drive and not touch the one that meets the above 
criteria. You usually can't miss it, 'cause it shows up as Partition 4 
on a system with only 2 partitions.


Re: cron job verification

2005-01-11 Thread Jason Stephenson
Larry Cook wrote:
On my Debian boxes, cron jobs send me an email if and only if they have
any output.  

Also on RedHat 8 and Solaris.  I would guess that is the standard 
behavior for all distros.
Also on Solaris, HP-UX, *BSD, etc. It's the standard for cron on UNIX or 
any UNIX-like OS.

Also, I believe the
mails are sent to root; I have an entry in /etc/aliases which actually
sends it to my normal user account.

I think it is sent to the account that owns the crontab file, which 
would be the account you did the crontab -e from.  If I want email to 
go to a list of people I just pipe the output to mail.
In vixie cron and most others, you can specify a user to get the mail, 
it generally looks something like:

[EMAIL PROTECTED]
man 5 crontab, it's enlightening.
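As a hedged illustration (the archive redacted the original example, so the address and job below are entirely made up), a crontab using that feature might look like:

```
# Mail any output from the jobs below to this address
# (hypothetical -- the real example was redacted by the archive).
MAILTO=someuser@example.com

# Run a log-rotation script every night at 2:15 AM.
15 2 * * *    /usr/local/sbin/rotate-logs
```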


Re: grub mysteries

2005-01-08 Thread Jason Stephenson
Jeff Macdonald wrote:
However, that's under Linux. When booting however, Grub sees something
else. hd3 becomes hd1! hd1 becomes hd2! Changing grub.conf to point to
the correct devices doesn't help. I'm only able to boot using a floppy
and specifying root, kernel and initrd command by hand. Has anyone
else seen this? This is not a PCI bus ordering thing where ide=reverse
would fix things.
Grub can be told to map drives to different locations, essentially 
fooling the OS into seeing the drives as having a different letter in 
the BIOS. It could be that you have some of that going on in a menu.lst 
or grub.conf file. I'll wager that it may have been added automagically 
by the install software of either Fedora or Gentoo.

When I configured a workstation to dual boot with MS Windows 2000 on a 
second hard disk, I had to use grub's map command to fool Windows into 
believing it was on the first BIOS drive, or Windows would not boot.

Look in the grub config files, usually under /boot/grub subdirectory, 
for lines like the following:

map (hd1) (hd0)
Where hd1 and hd0 could be other grub drive specifications.
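For instance, a dual-boot menu.lst entry using map might look like the following (the drive and partition numbers are illustrative, not taken from the original post):

```
title Windows 2000
    # Swap the BIOS view of the first two drives so Windows
    # believes it is booting from the first disk.
    map (hd0) (hd1)
    map (hd1) (hd0)
    rootnoverify (hd1,0)
    chainloader +1
```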
Another possibility could be the Adaptec controllers playing with the 
BIOS' view of the drives and their ordering. I've not ever used an 
Adaptec IDE controller, but I did have to reconfigure an AHA-2940 
controller in order to get it to work properly with grub on the system 
described above.

Cheers,
Jason


Re: FSF looking for Sys Admin

2004-12-17 Thread Jason Stephenson
Mark Komarinski wrote:
And that's upstate NY.  It's worse in NYC.
Taxachusetts indeed.
Yeah, I moved from KY to Massachusetts a couple years ago. The sales tax 
here is lower (in KY it's 6%), and the bite from state income tax is 
about the same, though lower once you factor in some deductions that MA 
lets you take that most states don't, like rent, up to a certain amount.

Thing I don't understand are the people who live in NH, but work in MA. 
Frankly, if I did that, I'd demand to vote in MA or refuse to pay the 
state income tax.



Re: Slightly OT: GPG and PGP Global Network

2004-12-12 Thread Jason Stephenson
I don't usually followup on my own list mails like this, but I thought 
that I'd share what I did.

Well, I went ahead and told them to publish that key. However, once I 
checked it, I discovered that I had already revoked it in 2002! If you 
query pgp.mit.edu or the new PGP Global Network for my key, you'll see 
that it was revoked.

So, they asked me to verify a revoked key... I think that's a bit odd, 
but oh well. *shrugs*

Now, to set up Enigmail and to generate a new key...

Anyone want to have a key signing party?

Cheers,
Jason


Slightly OT: GPG and PGP Global Network

2004-12-10 Thread Jason Stephenson
About 3 years ago, I published my public key to a PGP keyserver. It has 
two signatures from friends. I have not used it other than a half-dozen 
times during the first year after publishing it, in fact, I'd almost 
forgotten about it.

Tonight, I received a message from the PGP Global Network to verify that 
the key they have for my email address is correct. I have the option to 
Accept or Cancel.

One of the email addresses associated with this key is no longer valid 
for me.

I'm wondering if I should 'Cancel' the key and generate a new one 
(though I'll likely not use it), or should I 'Accept' the key and then 
publish my revocation certificate to the server. What do those of you 
who use PGP/GPG on a regular basis think is the better course of action?

Just as an aside, has anyone ever used encryption with Mozilla/Netscape 
email? I understand that it uses a certificate to identify the user and 
that you can get one from several sources. Is it compatible with PGP or 
is it something else entirely?

Yes, I know there is a GPG plugin for Mozilla Email and I have 
considered using it. I just haven't bothered downloading the code.

TIA,
Jason


Re: Decoding Microsoft (Outlook) Attachments

2004-12-09 Thread Jason Stephenson
Why go to all the bother of decoding the winmail.dat attachments? 
Lookout can be configured to send proper attachments. I've forgotten 
exactly what the steps are, because I haven't had to help anyone with it 
in about 4 years.

Most folks are thankful for the help if you're polite about it.



Re: For those following Sender based authentication - a question

2004-11-21 Thread Jason Stephenson
Jeff Macdonald wrote:
Anybody agree with the following statement?
The HELO domain represents the mail provider used by the author of the
message and thus is more closely related to the author than any other
header within the message.
This is from the CSV doc to the FTC. Is it just me, or does this 
seem to ignore multiple hops?
I disagree with that statement completely, because the HELO can be and 
often is any-old string the spammer, errr, the author of the message 
feels like putting in there. You'd be surprised at the number of 
connections that my mail servers reject from other servers that send 
either my server's IP number or host name as their name in the HELO.
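As an illustration only (the post doesn't say which MTA is in use; this assumes Postfix, and the file path is a placeholder), a restriction in that spirit might look like:

```
# main.cf sketch -- not the author's actual configuration.
# Require a HELO, reject clients that present a malformed name,
# and let helo_access list our own hostname/IP with a REJECT action.
smtpd_helo_required = yes
smtpd_helo_restrictions =
    check_helo_access hash:/etc/postfix/helo_access,
    reject_invalid_helo_hostname
```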



Ports (was Re: The First Linux Distro?)

2004-10-04 Thread Jason Stephenson
Bill McGonigle wrote:
As to 'most stable' I've run into unresolvable and/or circular 
dependency problems with both rpm and dpkg (dpkg more) so frequently of 
late that I'm about to give up on that method.  Nice idea, but 'works 
sometimes' isn't quite enough.  I'm interested in finding out if ports 
does any better.  But since Slackware doesn't attempt such feats it 
doesn't suffer such problems, so from that perspective it is more stable.
I use FreeBSD on two of my computers at home, my main workstation and my 
web/email server. I also use OpenBSD on my gateway/firewall. Ports do 
solve most dependency problems that you encounter with rpm and dpkg. 
Occasionally, I have run into a certain port, mysql++ comes to mind, 
that assumes you have a different version of a dependency installed, but 
in those cases, I've had an easy time modifying the Makefile to look for 
the other version of the package. (Usually that means checking for a 
different library or header version.) When I find these issues I send a 
patch to the port maintainer. Of course, that's not so easy for folks 
who aren't programmers.

You can also install pre-compiled packages of things that are available 
in ports. This works a lot like installing packages on Slack. (I ran 
Slackware for 4 years before switching to FreeBSD.) You'll occasionally 
run into problems with dependencies, and they're harder to fix in binary 
distributions than with ports, so I pretty much install everything from 
ports.

Rebuilding your system, by which I mean updating system source and 
application source and then recompiling it all from scratch, can take 
quite a long time. I often set aside a good part of a weekend for a 
complete source upgrade for my two machines. However, I do believe that 
the benefits are worth the trade off in time. I don't think that I'd 
switch my home machines back to a system that relied heavily on binary 
packages after using ports. It has become so much a habit of how I work.

Cheers,
Jason


Re: Making a Windows disk a file on Linux

2004-09-10 Thread Jason Stephenson
You've gotten some good advice so far, but I just wanted to mention 
another alternative that I've done on several different systems, 
including GNU/Linux and FreeBSD.

If you really are interested in having a FAT32 or some specific 
filesystem available on a drive without repartitioning, then another 
workable solution is to create an empty file of the desired size by 
using dd if=/dev/zero of=/your/file with other appropriate options. You 
can then format that file with a file system and mount it with the 
appropriate options. You can read and write to that file as if it were a 
regular filesystem. I've even used the above to make a bootable file 
for VMWare.
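On Linux, the first step of that recipe can be sketched like this (the size, file name, and FAT32 choice are illustrative; formatting and loop-mounting need root and the right tools, so those steps appear only as comments):

```shell
# Create an empty 16 MB container file by copying zeros into it.
dd if=/dev/zero of=disk.img bs=1M count=16 2>/dev/null

# Format it and mount it via the loop device (needs root and,
# for FAT32, the dosfstools package) -- shown as comments only:
#   mkfs.vfat disk.img
#   mount -o loop disk.img /mnt/image

ls -l disk.img
```

Once mounted, the file reads and writes like any other filesystem.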

It has been a year or more since I've done that and the specifics vary 
by OS (lately, I've been using FreeBSD more for that sort of thing), but 
finding instructions was fairly easy on the 'Net last time I needed to 
look. Also, if prodded, I could probably get you the proper sequence of 
commands on your OS choice after a few minutes refreshing my memory with 
the man and info pages.


Re: Want to reverse engineer Yahoo's multiple photo upload

2004-09-01 Thread Jason Stephenson
Jeff Macdonald wrote:

Anybody have an alternative that isn't shareware?
Yes, Apache (http://www.apache.org/) can be configured as a proxy server 
and could log/store just about anything you want.
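For example (Apache 2-era syntax; the access rule and log path are illustrative, not a recommended production setup), a minimal logging proxy might be configured like:

```
# httpd.conf sketch -- mod_proxy must be loaded.
ProxyRequests On

# Only let the local machine use the proxy.
<Proxy *>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Proxy>

# Record every proxied request for later inspection.
CustomLog logs/proxy_log combined
```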



Re: Limiting login SSH attempts?

2004-08-29 Thread Jason Stephenson
Bill McGonigle wrote:
On Aug 29, 2004, at 19:07, John Feole wrote:
What about using TCPWrappers and the /etc/host.allow, /etc/hosts.deny 
funtionality?

I only know about the attack/host-ip after the fact so I can't just add 
it to the hosts.deny.  Does TCPWrappers have some stateful rules?
If you know that legitimate ssh connections will only be coming from a 
certain range of IPs, then you can deny everything but what is in your 
hosts.allow. You don't have to worry about denying any specific IPs, 
since you deny everything but what is specifically allowed.
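Concretely, the pair of files might look like this (the subnet is illustrative), with hosts.deny refusing everything hosts.allow doesn't permit:

```
# /etc/hosts.allow -- permit sshd only from a trusted subnet
sshd: 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL
```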


Re: Sound issue

2004-07-29 Thread Jason Stephenson
The most likely thing is you don't have the proper cable running from 
the back of the CD-ROM drive to the sound card. If you built the machine 
yourself, one should have come with the CD-ROM drive, if not, I'm not 
sure if anyone sells them, and I don't know exactly what they are called.



