Re: Moving web server to new IP
On Wed, Jul 26, 2006 at 09:05:03PM +1200, Simon wrote:

> I know this is strictly not a debian question, but i will be using
> debian to do it! I need to move our web server to a new IP range.
> This is hosting around 300 websites, about 250 on 2-3 IPs (standard
> name-based virtual hosts) and the rest on their own IPs (SSL hosts).
> All running on apache/php/mysql. I'm wondering how i can achieve this
> over a period of a week rather than all in one go.

set up a host in your own domain called 'vhost.your.domain.com' or
whatever. make sure that your web server is configured to use both the
new and the old IP address. gradually change the DNS for the virtual
host domains so that www.vhostdomain.com is a CNAME for
vhost.your.domain.com rather than an A record.

dual hosting of the web server gives you time to move them gradually.
pointing the www. records at a CNAME will make it easier to move them
all again in future if you ever need to.

if you don't want to use a CNAME (and there are pros and cons - e.g.
don't do it if you want an MX record pointing at the same IP), but
still don't want to manually edit 250 zone files, you can use perl to
change them all in one go. something like:

    perl -p -i.bak -e 's/\b\d{10}\b/2006072701/; s/OLD_IP_OF_WEBSERVER/NEW_IP_OF_WEBSERVER/g;' *

NOTE: the * on the end indicates all files in the current directory.
use standard shell wildcards to refine the file selection if you need
to. if you're paranoid (as i am), copy all the zone files to a
subdirectory under /tmp and run it in there first as a test to confirm
that it will do what you want. when you're happy with the result, run
it in the directory where you keep your primary zone files.

note that the first search-and-replace looks for any sequence of 10
digits and replaces it with today's date. this assumes two things:
1. that you use the standard YYYYMMDDnn format for the zone's serial
number, and 2. that you don't have anything else that looks like a
serial number in the zone file.
btw, you can use perl to automatically change all the A records to
point at the CNAME too...you just need a slightly more complicated
search regexp:

    perl -p -i.bak -e 's/\b\d{10}\b/2006072701/; s/IN\s+A\s+OLD_IP_OF_WEBSERVER/IN CNAME vhost.your.domain.com./;' *

> My thoughts are to set up some sort of proxy to proxy the requests
> from one IP range to another. But, this would result in weird hit
> stats (coming from the proxy IP rather than the client IP - i think).

are the two IP addresses in the same network segments or at the same
physical real-world location? if so, then just make sure both IP
addresses are routed to your web server. if not, then proxying will be
needed.

you could do it with DNAT, but only if the two different IP networks
are routed to the same actual location (i.e. at the same ISP or your
own network blocks) - in which case, you're better off either routing
both IPs to the same host or (as below) putting a second NIC in the web
server. much simpler and less hassle than DNAT.

(as for stats, i vaguely recall seeing an apache module which looked at
the Via: headers added by proxies to the request and logged that rather
than the actual TCP source address. can't remember what it's called.
it's not something that's very important, though, esp. if it's only
going to be for a week or so).

alternatively, put a second network card in the web server and connect
it to both networks (if physically possible).

craig

--
craig sanders [EMAIL PROTECTED] (part time cyborg)

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Which Spam Block List to use for a network?
On Mon, Jun 21, 2004 at 12:46:01PM +0200, Francisco Borges wrote:

> On Sat, Jun 19, 2004 at 08:15:11AM, Adam Funk wrote:
> > On Friday 18 June 2004 15:40, Francisco Borges wrote:
> > > THE QUESTION: We need to use some form of Block List at the
> > > connection level,
> >
> > Whatever you do, don't be one of those ignorant, asinine admins who
> > block mail from all dynamic IPs.
>
> No, I don't intend to do that.

yeah, good decision. blocking mail from dynamic/dialup IP addresses is
the right thing to do, but it's much better to be an informed,
intelligent and suave admin who does that than an ignorant, asinine one
(but that's true of everything, isn't it?).

> Interestingly enough, *today* I got a note from a colleague who has
> started doing it at his network.

smart colleague.

> I don't know the exact number by heart but we are above 1500 users
> here; blocking dynamic IPs would be a disaster.

permit your own dynamic/dialup IP addresses, same as you (should) do
with other restrictions (e.g. rejecting non-fqdn hostnames...good thing
to block from external sources, but not a good idea to block from your
own users). reject other dyn/dialups - they should use their own ISP or
mail server.

in postfix, you do that by putting the permit_mynetworks rule *before*
the reject_rbl_client rule.

craig

--
craig sanders [EMAIL PROTECTED]

The next time you vote, remember that "Regime change begins at home"
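in main.cf terms, the ordering looks something like this (a sketch
only: the dnsbl zone name is a placeholder, and reject_unauth_destination
is included because a relay-control rule belongs in this list anyway):

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client dul.dnsbl.example
```

because postfix evaluates the restrictions in order, a client inside
mynetworks is permitted before the dnsbl lookup ever happens, while
everyone else's dynamic/dialup addresses get rejected.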
Re: Securing bind..
On Wed, Jan 02, 2002 at 03:23:01PM +0200, George Karaolides wrote:

> On Tue, 1 Jan 2002, Craig Sanders wrote:
> > someday soon, someone's going to take the good ideas from djbdns,
> > combine it with the good stuff from bind (including backwards
> > compatibility with bind config and zonefile formats), add a few
> > useful new ideas (e.g. an RXFR protocol that embedded the rsync
> > protocol directly) to produce a fast, secure DNS daemon, and
> > release it with a GPL-compatible license. this will blow both bind
> > and djbdns out of the water.
>
> Roll on the day! When such a godsend appears, I'll grab it with both
> hands, provided of course that besides reverse compatibility with
> BIND config. files, it also gives you a new, simpler config. scheme.

what's so difficult about bind's zone files or configuration files?
they're simple, straight-forward, and intuitively obvious to anyone
with a functioning brain. they're also extremely well-documented, with
numerous books available (_DNS and BIND_ published by ORA being the
must-have book for *any* DNS administrator).

> In the meantime, I submit that djbdns is OK unless you really, really
> want to stick to the BIND zonefile format, or unless you need
> something more than what djbdns's simplistic limitations can provide.
> though I seem to recall hearing it can be made to read BIND zone
> files. IMHO once you're used to it, the djbdns data file format is
> actually quite nice. I've worked on both BIND and djbdns and I find
> the latter easier to use. For example, the following ten entries
> would require four entries in named.conf and four zone files:

that would be because there's actually four different zones involved.
four zones == four config entries, four zone files...not one file and a
bunch of half-arsed assumptions.

> # Nameserver for my network addresses...
> [djbdns config example deleted]
>
> Surely that's not all bad?

yes. there's too much magic - i.e. fixed assumptions that may or may
not actually correspond to what you need.

in other words, if what you need is *exactly* what DJB in his infinite
wisdom has catered for then you'll have no problem. if you need
something even slightly different then you're screwed...and don't
bother complaining or asking to be accommodated because that's just
evidence that you're an idiot.

in other words, it works for the single scenario of one domain and one
set of IP addresses, with DNS for both hosted on the same server. fine
for a leafnode site...no use at all for an ISP. i.e. djbdns is only a
toy, and something that works fine at the toy end of the scale doesn't
necessarily work at all when you need to scale it up to a professional
system catering for hundreds or thousands of domains.

> You don't have to worry about keeping A and PTR records in sync.

frankly, anyone who thinks that this is difficult should not be trusted
to have anything to do with managing DNS.

> I know there are management tools that automate synchronisation of
> forward and reverse mappings in BIND zone files, but why should the
> reverse-mapping information be in a file separate from the forward
> information? Once the three conditions above are met, why should we
> need to administer the forward and reverse mappings separately?

because:

a) the relationship between forward and reverse mapping(s) is
completely arbitrary, and

b) the .in-addr.arpa zone isn't always hosted on the same server as the
forward zone(s) - it isn't at all unusual for a NS which is
authoritative for a domain to NOT be authoritative for any
corresponding .in-addr.arpa domain(s).

leafnode sites may have simple configurations which suit djb's
assumptions but they're also the sites *least* likely to be
authoritative for the .in-addr.arpa zones for their IP
addresses...most leafnode sites don't own their own addresses, they
rent or borrow them from their ISP.

> For all the arguments against djb's attitude re. development and
> licensing, it must be acknowledged that his keeping tight control of
> the software has prevented it from suffering from feature bloat.

prevents it from suffering from features, too. even worse, it prevents
anyone other than djb from fixing or enhancing it.

> And since it's open-source

no, it is NOT open source. the source may be available, but it doesn't
meet the criteria of the Open Source Definition, which requires that
the source code be freely modifiable and redistributable AND that
binaries compiled from modified sources be redistributable.

> and you can distribute patches to it, there's no shortage of patches
> to get it to do what you want.

the rest of the unix world got sick of hunting for obscure patches to
programs ages ago - hunt for the original source, then hunt for half a
dozen patches to fix various problems or add required features, and
hope that the patches will actually apply cleanly without any conflict.
that was the way we had to do things back in the bad old days. it
sucked then, and it still sucks now. OTOH, djb thinks
Re: Securing bind..
On Mon, Dec 31, 2001 at 12:52:23AM -0500, P Prince wrote:

> > there are two major problems with all of bernstein's software. the
> > first is that it requires you to throw away your existing
> > configuration...no big deal for a caching-only name-server or if
> > you only have one or two domains to serve. a severe pain in the
> > arse if you have hundreds or thousands of domains.
>
> This is crazy. Anytime you change software packages, you must rewrite
> your configuration.

it's not at all crazy to expect a new package which is supposed to
replace an existing service to provide some level of backwards
compatibility in order to facilitate the migration. if not
compatibility with the config or zonefile format, then at the very
least it should provide a 100% accurate automated translator.

djb refuses to understand that software migrations need to be planned,
they need to work smoothly with minimum fuss and minimum downtime, and
that backwards-compatibility with existing standards (de-facto or
otherwise) is mandatory. what he thinks of as legacy crap that should
be thrown away is actually working configuration that is serving the
needs of hundreds or thousands of users, all of whom will be extremely
pissed off if it stops working for a few days because of migration
difficulties.

until he understands this, most professional systems administrators are
going to ignore his code no matter how superior it is theoretically.
theoretical superiority is completely worthless in the face of
practical inability to use it. i won't use his software for this
reason, and because it's non-free. both issues are show-stoppers as far
as i'm concerned.

> And, if you or anyone you know manages thousands of domains, I'll
> mail you a crisp, clean 20 dollar bill. (In order to be eligible, you
> must provide the name of your employer, so that I can avoid their
> service.)

i manage hundreds of domains myself. around 700 primary domains and
around 950 secondaries. i know several people who manage thousands of
domains.
not everyone has tiny toy systems to design, develop, and manage. some
people have real systems to look after. and you know what? even though
i don't personally look after tens of thousands of domains, or know
anyone who does, i'm still able to recognise that someone, somewhere
might do exactly that and not dismiss the idea as crazy or ridiculous.
i can actually imagine a world bigger than my current immediate needs.

> > named.conf doesn't work with djbdns - a minor problem.
>
> This is a stupid argument. httpd.conf doesn't work well with thttpd,
> and proftpd.conf doesn't work well at all with wu-ftpd.

as i said, named.conf is only a minor problem. it doesn't matter much.
the real problem is that bind's zone files don't work with djbdns.
that's beyond a mere problem, that's idiotic.

> > more importantly, bind-style zonefiles don't work with djbdns - the
> > idiot invented his own stupid format for zone files. if djbdns had
> > been backwards-compatible with bind zonefiles then it might have
> > had some vague chance of replacing bind.
>
> Perhaps, but BIND invented its own zonefiles too. What you fail to
> realize is how bad BIND zone files suck.

yes, i do fail to realise that, because they don't suck. what, exactly,
is wrong with them?

> > unfortunately, bernstein's software is severely limited by his
> > views. he's a fairly good programmer, but a lousy systems
> > administrator, with no concept of how real world sysadmins use
> > tools or how they automate them.
>
> I hope you don't consider yourself a good systems administrator,

i do, actually. i'm only a mediocre programmer, but i'm a damn good
systems admin - which requires a completely different set of skills and
aptitudes than programming. i've only met a few people who are as good
as me at systems admin stuff, and even fewer who are better.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: Securing bind..
On Tue, Jan 01, 2002 at 01:18:43PM +1100, Donovan Baarda wrote:

> An interesting thing about djb is he does have a knack for
> identifying real problems with existing defacto standard software and
> re-inventing it.

he also reinvents things that don't have any significant problems,
sometimes just because he won't admit that a particular programmer has
ever done anything of worth. ucspi-tcp, for example...this abomination
is a clumsy mess compared to the inetd + tcpd that everyone else uses.

> What then follows is fierce flamewars about the relative merits of
> the old vs djb software/licence/etc. In summary the djb
> implementation is full of good ideas and raises valid concerns about
> the original implementation, but is crippled by a crappy licence,
> disrespect for standards, and weird configuration paradigm.

well said!

> Eventually, this leads to yet another implementation or three that
> takes djb's ideas and addresses the licence, standards, and
> configuration issues.

while it's true that he's sometimes the first to actually write code to
provide alternative implementations (and action is worth a lot more
than mere talk), it's not true that they're solely his ideas. people
had been bitching about sendmail for many years before qmail came
along, and many of the flaws (both implementation detail AND design)
were well known.

> The sad thing is if djb stopped using his crappy licence, there would
> be no need for the n+1 implementations his re-invention spawns,
> because the community could adopt his software and resolve the other
> issues to their own satisfaction.

well said, again. you've hit the nail right on the head. his stuff can
often be used as a sign-post pointing out directions to take (and to
avoid), but it can't be used unless you're willing to trap yourself
into a dead-end...i almost fell into that trap with qmail (actually did
on a few servers), but won't fall for it again. djb's software isn't
free, and can't/won't evolve to meet future needs.
someday soon, someone's going to take the good ideas from djbdns,
combine it with the good stuff from bind (including backwards
compatibility with bind config and zonefile formats), add a few useful
new ideas (e.g. an RXFR protocol that embedded the rsync protocol
directly) to produce a fast, secure DNS daemon, and release it with a
GPL-compatible license. this will blow both bind and djbdns out of the
water. ...kind of like postfix did to sendmail and qmail.

i had high hopes for the DENTS project a few years back - they looked
like they were really going to solve many of bind's problems, and
their stuff was GPLed. it got off to a great start but unfortunately,
the project seems to have died.

maybe there's still some hope...sourceforge lists several DNS daemon
projects:

http://sourceforge.net/softwaremap/trove_list.php?form_cat=149&discrim=238

moodns and CustomDNS are two that i hadn't heard of before. moodns
sounds a bit like what DENTS was going to be. CustomDNS is in java so i
can't bring myself to take it seriously. MaraDNS i looked at about a
year ago and it has an even dumber zonefile format than djbdns (if
that's possible).

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: Securing bind..
On Sun, Dec 30, 2001 at 07:31:30PM -0600, Michael D. Schleif wrote:

> ``By combining all these tools, you can finally approach the
> functionality of a trivial rsync script. Wow.''
>
> Enough said . . .

by throwing away all your existing zonefiles, DNS configuration, DNS
tools and a bunch of features which djbdns doesn't support, you get to
use rsync to transfer zonefiles around. an additional part of the price
you pay is djb's moronic non-free software license and his rabid
reinvent-the-wheel-as-a-square-because-it-wasn't-invented-here
attitude. big deal.

bind can do rsync zone transfers merely by writing a wrapper script for
named-xfer. i've done it. it works.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: Securing bind..
On Sun, Dec 30, 2001 at 08:34:32PM -0600, Michael D. Schleif wrote:

> Craig Sanders wrote:
> > On Sun, Dec 30, 2001 at 07:31:30PM -0600, Michael D. Schleif wrote:
> > > ``By combining all these tools, you can finally approach the
> > > functionality of a trivial rsync script. Wow.''
> > >
> > > Enough said . . .
> >
> > by throwing away all your existing zonefiles, DNS configuration,
> > DNS tools and a bunch of features which djbdns doesn't support, you
> > get to use rsync to transfer zonefiles around.
>
> And, perhaps, your point?

that throwing away all your existing configurations and starting from
scratch just to get a trivial feature (which can easily be had with a
shell script wrapper around named-xfer) is NOT a good idea.

there are two major problems with all of bernstein's software. the
first is that it requires you to throw away your existing
configuration...no big deal for a caching-only name-server or if you
only have one or two domains to serve. a severe pain in the arse if you
have hundreds or thousands of domains.

the second is that it is incredibly inflexible - you can only use it in
the particular way that bernstein wants you to use it...and if you
actually need to use it some other way then you are, according to djb,
an idiot because he is never wrong.

bind is far from perfect. but it's a lot better than all of the
alternatives. if something actually better (as opposed to just loud
grandiose claims of being better) came along, i'd switch to it in an
instant.

> Broken as many of them are, they still work quite well with djbdns,
> thank you.

named.conf doesn't work with djbdns - a minor problem. more
importantly, bind-style zonefiles don't work with djbdns - the idiot
invented his own stupid format for zone files. if djbdns had been
backwards-compatible with bind zonefiles then it might have had some
vague chance of replacing bind.

> > an additional part of the price you pay is djb's moronic non-free
> > software license
>
> Really? http://cr.yp.to/distributors.html

yes, really. non-free.
if you don't understand WHY it's non-free then read the DFSG again.

> > and his rabid
> > reinvent-the-wheel-as-a-square-because-it-wasn't-invented-here
> > attitude.
>
> As you know, the software does *not* espouse his nor anybody else's
> views. So what?

unfortunately, bernstein's software is severely limited by his views.
he's a fairly good programmer, but a lousy systems administrator, with
no concept of how real world sysadmins use tools or how they automate
them.

> If conformance to standards is interesting to you, then check this
> out.

djbdns does not conform to standards. he proudly ignored any aspects of
the standards that were inconvenient to him.

> > bind can do rsync zone transfers merely by writing a wrapper script
> > for named-xfer. i've done it. it works.
>
> That, too, is my point -- glad you found it . . .

so your point is that it's better to throw away years of configuration
work so you can use djbdns than it is to write a simple wrapper script.
right. good thinking.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: System Time Problems.
On Tue, Nov 27, 2001 at 04:54:46PM -0800, Nick Jennings wrote:

> On Tue, Nov 27, 2001 at 06:59:01PM -0500, Bulent Murtezaoglu wrote:
> > But anyway, why not have the battery backed clock set to UTC?
>
> Because I am a simple man.

unless you need to dual boot with windows, setting the system clock to
UTC *IS* the simplest and best solution.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
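on debian of that era the switch lives in /etc/default/rcS (a one-line
sketch; the comment wording is mine, not the stock file's):

```
# /etc/default/rcS - tell the boot scripts the battery-backed hardware
# clock is kept in UTC rather than local time
UTC=yes
```

with UTC=yes the kernel/userland convert to local time via /etc/timezone,
which is what makes daylight-saving transitions painless.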
Re: vim 6.0 packages
On Sat, Sep 29, 2001 at 08:01:44AM +0200, [EMAIL PROTECTED] wrote:

> On Sat, 29 Sep 2001, Craig Sanders wrote:
> > On Fri, Sep 28, 2001 at 03:29:39PM +0200, Wichert Akkerman wrote:
> > > Not needed, I'll settle for Essential: yes :)
> >
> > good idea :)
>
> I don't think so.

has debian really turned into a bunch of humourless robots who can't
spot a joke when it walks up and slaps them in the face?

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: vim 6.0 packages
On Sat, Sep 29, 2001 at 08:50:47AM -0500, Steve Greenland wrote:

> > it would also be nice for vim to have a higher alternatives
> > precedence than nvi.
>
> It does. I just tested. Nvi uses 30, vim 120.

good, that must have changed. i often have to install vim on a system,
then run vi and find i'm in nvi rather than vim. yuk!

> Are you sure you don't have an alias or script earlier in the path?
> Or possibly you've got vim-tiny? Or you've picked up one of the
> i18nized nvi packages (I'm not sure what priorities they've been
> assigned.)

yep. it usually happened on a system that i'd just built, then i needed
to edit a config file and ran vi...noticed that it's nvi rather than
vim, so quit and apt-get install vim vim-rt while i'm thinking of it.
then run vi again and remember - doh! gotta purge nvi too. now i just
habitually purge nvi when i install vim.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
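for reference, the alternatives state can be inspected and pinned
without purging anything (standard update-alternatives usage, run as
root; shown as a sketch rather than a transcript):

```
# show what /usr/bin/vi currently points at, and each candidate's priority
update-alternatives --display vi

# pin vi to vim explicitly (puts the vi link group into manual mode,
# so later installs won't silently switch it back)
update-alternatives --set vi /usr/bin/vim
```

purging nvi also works, of course - it just removes the competing
candidate instead of overriding the automatic choice.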
Re: vim 6.0 packages
On Fri, Sep 28, 2001 at 03:29:39PM +0200, Wichert Akkerman wrote:

> Not needed, I'll settle for Essential: yes :)

good idea :)

it would also be nice for vim to have a higher alternatives precedence
than nvi. i often have to install vim on a system, then run vi and find
i'm in nvi rather than vim. yuk! 'dpkg --purge nvi' fixes that but it
shouldn't be necessary.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: dhcp-dns problem
On Fri, Sep 07, 2001 at 08:47:10PM -0700, Dean A. Roman wrote:

> Will turning this "feature" off in Win2K allow the dhcp-dns scripts
> in linux to update bind?

no, it's unrelated. it'll just stop the w2k clients from attempting to
update the dns server.

> How do I fix the problem of dhcp-dns not updating bind?

no idea. the log extracts you sent didn't show any problem...they only
showed your w2k client being denied access.

> Is it related to the win2K "feature"?

completely unrelated.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: dhcp-dns problem
On Sat, Sep 08, 2001 at 03:02:54PM -0700, Dean A. Roman wrote:

> For future reference, I was using an alias for the NS name in my
> database records.

yep, you can't use a CNAME for an NS record. like MX records, NS
records can only point at A records.

btw, i strongly recommend using a subdomain for dhcp-dns. e.g. if your
domain is example.com, then use pn.example.com (pn == private network)
or internal.example.com or whatever for dhcp-dns. just configure bind
to be master for the subdomain, configure dhcp-dns to use it, and
create an empty zonefile (an SOA record is all that is needed).

that stops nsupdate from mangling your main zone file (it rewrites the
whole thing on every update, losing your formatting and comment lines
etc). it also makes it impossible for any error to affect your main
domain, by isolating all dynamic updates to the subdomain.

i also recommend buying a copy of _DNS and BIND_ (pub. by O'Reilly) and
reading it from cover to cover before working on DNS stuff. DNS isn't
difficult, but it is easy to make small mistakes that have huge
consequences. e.g. mis-using CNAME records as above is a very common
mistake.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
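a sketch of what that subdomain setup looks like in bind config terms.
everything here is an example (zone name, file path, serial, and the
dyn-update acl), and in practice the zonefile wants an NS record
alongside the SOA, plus a delegating NS record in the parent zone:

```
// /etc/bind/named.conf - master for the dynamic subdomain only
zone "pn.example.com" {
    type master;
    file "/etc/bind/db.pn.example.com";
    allow-update { dyn-update; };   // acl listing the dhcp server
};
```

and the near-empty starting zonefile that nsupdate is then free to
rewrite:

```
$TTL 3600
@   IN  SOA ns1.example.com. hostmaster.example.com. (
            2001090801 ; serial
            3600 900 604800 3600 )
@   IN  NS  ns1.example.com.
```

the main example.com zonefile never gets touched by dynamic updates, so
its formatting and comments survive.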
Re: dhcp-dns problem
On Thu, Sep 06, 2001 at 11:02:00PM -0700, Dean A. Roman wrote:

> Sep  6 15:07:31 srfs1 named[1944]: denied update from [192.168.100.100].1097 for mydomain.com
> Sep  6 15:07:31 srfs1 named[1944]: denied update from [192.168.100.100].1103 for 100.168.192.in-addr.arpa

update requests are coming from 192.168.100.100

> == /etc/bind/named.conf ==
> acl dyn-update { 127.0.0.1; 192.168.100.20; };

192.168.100.100 isn't listed in that acl.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: dhcp-dns problem
On Fri, Sep 07, 2001 at 08:17:04AM -0700, Dean A. Roman wrote:

> I'm a bit confused, and it is probably because I don't totally
> understand how the dynamic dns updates work.

if the rejected updates are coming from a W2K machine then it has
nothing to do with dhcp-dns. it's a fault with W2K.

> 192.168.100.100 is the windows machine that checked out the IP
> address from the dhcp server (srfs1 - 192.168.100.20). Should update
> requests be coming from a dhcp client?

nope.

> How is the windows 2k dhcp client requesting a dns update?

because microsoft thought it would be a good idea for clients to be
able to update the DNS on the server, and for that stupidity to be ON
by default. anyone but microsoft would have realised that it is insane
from a security perspective to let unauthenticated, unauthorised client
machines screw around with such a fundamental service.

this bug, btw, is particularly annoying if you host the DNS for a
domain that is similar to a well-known/popular domain...you get hit by
bogus update requests from all over the planet from moron users running
W2K. ditto if you run a dialup ISP with customers running W2K. at first
i thought this was some new kind of DNS attack, until i realised that
it was just another innovative new idea from Microsoft. and there's
nothing you can do about it unless you control the client machines.

fortunately you have access to the machines on your network so it can
be disabled. look under TCP/IP settings on the W2K machine.

> Does this mean that I need to put the entire subnet range that I
> allow for dhcp checkout (192.168.100.100-255) in the acl?

not unless you want your end-users to be able to modify your DNS at
whim.

> I thought that I only had to list the dhcp server (192.168.100.20) in
> the allow-update field?

correct.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
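so the acl stays as it is - only localhost and the dhcp server get
update rights, and the denied-update log lines from the W2K clients are
exactly what you want to see. a sketch, using the zone name from the
log excerpt (the file path is an example):

```
acl dyn-update { 127.0.0.1; 192.168.100.20; };

zone "mydomain.com" {
    type master;
    file "/etc/bind/db.mydomain.com";
    allow-update { dyn-update; };
};
```

any host not matched by the dyn-update acl - including every dhcp
client - gets its update refused and logged, which is the correct
behaviour.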
Re: exploring debian's users and groups
On Mon, Aug 06, 2001 at 11:11:18PM -0700, Daniel Jacobowitz wrote:

> > sudo: HELP: Nothing uses it here, and I have sudo installed..
> > Maybe there's a way to only let users in this group use sudo?
>
> There is, sure, but the group isn't special in any way...

users in group sudo don't have to type their password when running
sudo. useful for, e.g., writing sudo wrapper scripts that are forked by
an MTA such as postfix that refuses to run anything as root. also
useful for sudo wrappers to adduser or chg passwd type programs that
are executed from a CGI script (after appropriate taint checking etc).

TMTOWTDI - the sudo group isn't strictly needed for this, you can also
use the NOPASSWD keyword in /etc/sudoers.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
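the /etc/sudoers side of that looks something like the following sketch
(the wrapper path and the mail user in the second stanza are
hypothetical examples, not anything shipped by debian):

```
# members of group sudo may run anything via sudo without a password
%sudo   ALL = NOPASSWD: ALL

# or, more narrowly: let the MTA's user run one wrapper script only
mail    ALL = NOPASSWD: /usr/local/sbin/adduser-wrapper
```

the narrow form is the safer one for the MTA/CGI cases above, since it
grants passwordless root for a single audited script rather than for
everything.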
Re: damn. so much for MAPS
On Fri, Jul 13, 2001 at 11:53:55PM +0200, Martin F. Krafft wrote:

> http://slashdot.org/article.pl?sid=01/07/13/0513251
>
> i have already started setting up my own RBL domain at
> rbl.madduck.net. the resources are *not* enough to host for hundreds,
> [...] thoughts, ideas?

i've been following this on news.admin.net-abuse.email. IMO, there's
not much to worry about. MAPS was a good idea when it started, but its
inertia has damaged the fight against spam for at least two years. in
the long run, we'll be better off because alternatives will spring up
which aren't as sluggish and unresponsive as MAPS has become (it takes
forever to get a new spamhaus listed in MAPS. even worse, there's
fairly compelling evidence that they won't list any site which is an
above.net customer).

orbs.org died nearly two months ago. within a month there were at least
4 son-of-orbs services available, all of them with more professional
policies and attitudes than AB's orbs.org. MAPS is committing suicide.
by the end of the month there will be several replacements.

there's even a list of dnsrbl lists available at
http://relays.osirusoft.com/ - the guy running it is attempting to get
the various dnsrbl services to co-ordinate and co-operate.

craig

--
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch
Re: RFC: Removing SVGA support from Ghostscript packages
On Thu, Apr 05, 2001 at 07:07:16PM +0200, Dr. Guenter Bechly wrote:

> Yes, there are people still using libsvga support and there even are
> packages that depend on it, like bmv (a postscript viewer for the
> console that is maintained by me and that is still actively
> developed). Therefore, please do not remove libsvga support before
> there is another option for previewing postscript files on the
> console (e.g. via framebuffer), since this would take away an
> important functionality from console-only users!!!

why not create a gs-svga package (which depends on gs) containing only
the gs binary compiled with svga support? that way those who need it
can have it, and those who don't can have one less setuid root program
on their system.

craig

--
craig sanders [EMAIL PROTECTED]

GnuPG Key: 1024D/CD5626F0
Key fingerprint: 9674 7EE2 4AC6 F5EF 3C57 52C3 EC32 6810 CD56 26F0
Re: Serial ports - how to get them to coexist peacefully...
On Sat, Feb 17, 2001 at 03:16:44AM +1100, hogan wrote:

> Can I make the onboard and oncard ttyS's play nice on same IRQ?

no.

> or should I play musical jumpers until they're on separate IRQs?

yes.

> Read something in 2.4.1 kernel config about making serial ports nice
> to one another when on same IRQ.. anything similar in 2.2.x?

AFAIK, that only works for some dumb 1655x-type multiport serial cards
- e.g. moxa 4/8 port cards and digiboards.

craig

--
craig sanders [EMAIL PROTECTED]

GnuPG Key: 1024D/CD5626F0
Key fingerprint: 9674 7EE2 4AC6 F5EF 3C57 52C3 EC32 6810 CD56 26F0
Re: Apache mod_rewrite and Alias ?
On Tue, Sep 05, 2000 at 11:22:56AM +0000, Jaume Teixi wrote:

> > RewriteRule ^/stats(.*) /reports/%{SERVER_NAME}$1
>
> but it only forwards to /_document_root_/reports/virtualhost.com

ok, so the rewrite rule is working.

> as I see on the rewrite log. then it reports 404, so "Alias /reports
> /var/reports" has no effect. how to enable rewrite and then alias to
> change the document_root path?

is the Alias in the global httpd config, or in a particular vhost
config? try putting it in the global config. also try putting it in
each vhost's config.

craig

--
craig sanders
Re: Apache mod_rewrite
On Mon, Sep 04, 2000 at 06:20:09PM +, Jaume Teixi wrote: I'm still getting 404. RewriteLog shows: pattern='^www\.[^.]+$' => not-matched. whats happening? i wasn't paying enough attention to your rules. they can't work as written. you want to look at the SERVER_NAME variable, not HTTP_HOST. also, turn on mod_rewrite logging, and increase the log level to help diagnose any faults. the rule below is untested, you'll probably have to tweak it a bit to get it working. remember to set the log level back to 0 or 2 or whatever once you've got it working. high logging levels will slow down your apache server. try something like:

RewriteLog /var/log/apache/rewrite.log
RewriteLogLevel 9
RewriteEngine on
RewriteRule ^/stats(/.*) /reports/%{SERVER_NAME}$1

alternatively:

RewriteRule ^/stats/(.*) /reports/%{SERVER_NAME}/$1

(without testing, i'm not sure which form is better). you still need the Alias definition from my last message. then make sure that your stats processing software puts the results in a subdirectory under /var/reports which has exactly the same name as the ServerName keyword in the <VirtualHost>...</VirtualHost> config. e.g. if you have a virtual host with ServerName www.foo.com then the reports for that host should go in /var/reports/www.foo.com/ craig -- craig sanders
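to make the ServerName-to-directory convention concrete, here is a small sketch (host names and paths are made up, and everything runs in a throwaway directory) of creating one report directory per vhost ServerName, which is what a rewrite target of /reports/%{SERVER_NAME} expects to find:

```shell
#!/bin/sh
# sketch: one reports subdirectory per vhost ServerName, so that
# /stats -> /reports/%{SERVER_NAME} rewrites have somewhere to land.
# the config file here is a toy example, not a real apache config.
set -e
demo=$(mktemp -d)
REPORTS="$demo/reports"

cat > "$demo/httpd.conf" <<'EOF'
ServerName www.foo.com
ServerName www.bar.com
EOF

# pull every ServerName out of the config and make a matching report dir
awk '$1 == "ServerName" { print $2 }' "$demo/httpd.conf" |
while read name; do
    mkdir -p "$REPORTS/$name"
done

ls "$REPORTS"
```

the stats generator then just writes into the directory named after the vhost, and the rewrite rule finds it without any further configuration.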
Re: Apache mod_rewrite
On Wed, Aug 30, 2000 at 10:18:14PM +, Jaume Teixi wrote: I need to do the following in order to access stats for each name-based virtual host: when typing the url www.virtualhost1.com/stats or www.virtualhost99.com/stats, serve the page located under /var/reports/virtualhost1 or /var/reports/virtualhost99. I've tried in my httpd.conf:

RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.[^.]+$
RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
RewriteRule ^www\.([^.]+)(.*)/stats /var/reports/$1

Apache produces a 404. any pointers to fix this? mod_rewrite rewrites the url, you're confusing it with the path to the files. try setting up a global Alias like so:

Alias /reports/ /var/reports/

then use mod_rewrite to rewrite the URL www.virtual.com/stats to www.virtual.com/reports/virtual.com/. the rewrite above should work, but the final line should be:

RewriteRule ^www\.([^.]+)(.*)/stats /reports/$1

another way of doing it is to just have a symlink in their document root. craig -- craig sanders
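the symlink alternative mentioned at the end can be sketched like this (all paths are hypothetical and live in a throwaway directory; on a real server apache would also need Options FollowSymLinks for the docroot):

```shell
#!/bin/sh
# sketch: give a vhost a "stats" symlink in its document root that
# points at its report directory, instead of using Alias + mod_rewrite.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/var/reports/virtualhost1" "$demo/var/www/virtualhost1"

# /stats inside the document root becomes a symlink to the reports dir
ln -s "$demo/var/reports/virtualhost1" "$demo/var/www/virtualhost1/stats"

ls -l "$demo/var/www/virtualhost1"
```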
Re: Debian vs Red Hat??? I need info.
On Sun, May 21, 2000 at 07:46:47PM -0800, Ethan Benson wrote: i think dlocate really takes care of the problem nicely, for things like status and file lists dlocate is quite fast. its unfortunate that it was removed from potato for a *ONE LINE BUG* with a fix in the bts... why oh why could there not have been an NMU?? i wasn't even aware that it had been removed from potato until i tried to install dlocate on a potato system with apt-get a week or so ago. this is the second of my packages that has been removed for trivial reasons. i gave up on potato after the first one...at the time, i offered to upload a version which fixed a minor packaging error (i forgot to specify frozen as well as unstable) but i didn't get a reply until after the deadline, and the answer was basically "haha! too late!" - this does not exactly inspire enthusiasm in me. for that reason (amongst others, like the fact that potato is already obsolete and will be even more obsolete by the time it gets released), i do not give a damn about potato. the bug isn't, IMO, even in dlocate. it is in the slocate package. slocate should NOT replace GNU locate if it is not 100% compatible with it. but, as i said, i don't care. i don't have the time or the energy to argue with a release manager whose goal seems to be to find excuses to remove packages from the distribution. IMO, stable should be treated as a fork, anyway. craig -- craig sanders
Re: Debian vs Red Hat??? I need info.
On Sat, May 20, 2000 at 07:37:39PM -0800, Ethan Benson wrote: On Sat, May 20, 2000 at 07:07:00PM +0200, Wichert Akkerman wrote: Previously Keith G. Murphy wrote: I must say, my subjective experience has been that rpm's are much faster to install something. Of course, it's also faster to throw my clothes on the floor, rather than put them in the hamper... That is a result of the fact that rpm uses a binary database for its data, while dpkg uses a large number of text-files instead. The advantage of that is that it is robust (if a single file gets corrupted it's not much of a problem), and that it is possible to fix or modify things by hand using a normal text editor if needed. this is a tremendous advantage of dpkg, it should never be changed to use a binary database. agreed, the plain text db is the right way to do it. OTOH, it would be nice if dpkg did what apt does and used a binary db cache to speed up operations, updating both binary and text versions as changes are made. the text version would be considered authoritative (the "source code") and the binary db would be the faster, compiled version. if the binary version ever got corrupted for any reason, it could be regenerated quickly from the text version. dpkg would also need to detect whether the text version was newer than the binary version and, if so, automatically rebuild the binary. nice idea, perhaps...but i don't know how practical it is or whether the time needed to maintain the binary db would more than offset the time saved. craig -- craig sanders
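the rebuild-on-demand cache idea can be sketched in a few lines of shell (a toy example: gzip stands in for the real binary format, and all paths are in a throwaway directory):

```shell
#!/bin/sh
# toy sketch of "text db is source, binary db is a compiled cache":
# rebuild the cache whenever it is missing or any text file is newer.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/info"
echo "Package: foo" > "$demo/info/foo"
cache="$demo/cache.bin"

rebuild_cache() {
    # "compiling" is just gzip here, purely for illustration
    cat "$demo"/info/* | gzip > "$cache"
}

# rebuild if the cache is missing or any text file is newer than it
if [ ! -f "$cache" ] || [ -n "$(find "$demo/info" -newer "$cache" -print)" ]; then
    rebuild_cache
fi

gzip -dc "$cache"
```

the same mtime comparison is what make does for object files, which is really all this idea amounts to.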
Re: Debian vs Red Hat??? I need info.
On Sun, May 21, 2000 at 11:38:18AM +0200, Josip Rodin wrote: On Sat, May 20, 2000 at 07:37:59PM -0800, Ethan Benson wrote: Apt uses a mixed approach: it uses the same textfiles as dpkg but uses a binary cache to also get the advantages of a binary database. it does? where? See /var/cache/apt/*.bin files. An example why that is good is the speed of `apt-cache show foo' compared to the non-speed of `dpkg -p foo'. (of course, there are faster things to browse the textual database, they just aren't in dpkg itself) dlocate and grep-dctrl for example. interestingly, 'apt-cache show' is even faster than dlocate (which makes use of grep-dctrl to do the search).

$ time apt-cache show dpkg >/dev/null

real    0m0.235s
user    0m0.210s
sys     0m0.030s

$ time dlocate -s dpkg >/dev/null

real    0m0.407s
user    0m0.380s
sys     0m0.010s

$ time dpkg -s dpkg >/dev/null

real    0m1.517s
user    0m1.410s
sys     0m0.100s

craig -- craig sanders
Re: Mass install / Autoinstall (Was: Re: Debian vs Red Hat??? I need info.)
On Thu, May 18, 2000 at 01:55:37PM -0400, Jeremy Hansen wrote: Most of the answers I've been getting on this subject seem like total hacks, which may work but really are tricks to doing this. I was really looking for something within debian that's built to do kickstart type installations. huh? what do you think kickstart is? it's the same kind of total hack - the difference is that you have to do it RedHat's way whether you like it or not, and it pretends to be easy enough to use that you don't need to know what you're doing to run it. personally, i think that anyone who needs to mass-build machines *SHOULD* know exactly what they are doing. i wouldn't trust any machine built by someone who needed such point-and-click tools. Although what you suggest may work, it leaves little flexibility between machines and also takes a lot more work than I was hoping to do. actually, it leaves a lot of flexibility between machines. use ed or 'perl -i' scripts to automatically edit config files in place. For example, I have 20 machines at a co-location I need to go install. Right now with Red Hat I can take my laptop, slap a floppy in each machine, turn 'em on, and 5 minutes later I have 20 fully configured machines ready to rock. you can do the same thing with debian...just install the nfs server package on your laptop. craig -- craig sanders
Re: Debian vs Red Hat??? I need info.
On Thu, May 18, 2000 at 01:24:26PM -0700, Stephen A. Witt wrote: A lot of what makes Debian cool is appreciated only after some time with it. also, a lot of what debian does is only appreciated after you've had the misfortune of working with some other distros for a while...then you really appreciate debian's sanity. craig -- craig sanders
Re: Debian vs Red Hat??? I need info.
On Thu, May 18, 2000 at 09:29:03PM -0400, Jeremy Hansen wrote: Can I ask why debian doesn't include pine? Just curious. because it's a violation of pine's license to distribute modified binaries. pine is non-free. debian distributes a pine-src package (in non-free) which contains the pine source code plus debian patches plus a script to auto-build. at least, we used to...haven't bothered with pine for ages because mutt is so much better (and free). I know Debian has a very strict rule base on the packages it includes but every distro I have ever installed always included pine and I was just wondering the reason behind not doing that with Debian. the fact that just about every other distribution is willing to violate the licensing terms for pine is no reason for debian to do the same. craig -- craig sanders
Re: Debian vs Red Hat??? I need info.
On Tue, May 16, 2000 at 10:43:20PM -0400, Chris Wagner wrote: At 07:29 PM 5/16/00 -0400, Jeremy Hansen wrote: Autoinstall (Red Hat's kickstart) This is also something fairly important. We need this as we do a lot of mass installs. For mass installs, just make a standard issue CD, boot from that CD, and copy over the OS. Or you could even make a disk image and dd it onto the hard drive. That assumes you have the same hard drive in all the machines. You can turn a 20GB drive into a 10GB drive. :) But even if you have 4 or 5 different hard drives in your organization, using disk images will still save you tons of time. even better, you can make a tar.gz image of your standard install, stick it on an nfs server and then create a boot floppy with nfs support. when building a new box, boot with the floppy, partition the disk (scriptable using sfdisk), mount the nfs drive, untar the archive, and then run a script which customises whatever needs to be customised (e.g. hostname, IP address, etc). then run lilo to make it bootable from the hard disk. alternatively, put it on a CD-ROM and make that CD bootable - just insert the CD and reboot for a fully-automated install. say 10 meg or so for the boot kernel and utilities, which leaves you up to around 640MB of compressed tar.gz containing your standard install file-system image. btw, this tar.gz idea is how the debian base system is installed on a machine in the first place. the only significant difference is that you're installing your own tar.gz system image rather than the standard base.tar.gz. automating debian installs is pretty easy - IF you have a good understanding of how debian works. craig -- craig sanders
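the install sequence described above can be sketched as a dry run; run() just records each step instead of executing it, and every device name, server and path here is hypothetical:

```shell
#!/bin/sh
# dry-run sketch of the boot-floppy + nfs + tar.gz install. nothing is
# actually partitioned or mounted; each step is appended to $PLAN.
PLAN=""
run() { PLAN="$PLAN$*
"; }

run sfdisk /dev/hda '<' /floppy/partition.layout     # scripted partitioning
run mke2fs /dev/hda1
run mount /dev/hda1 /target
run mount -t nfs server:/images /mnt
run tar -C /target -xzpf /mnt/standard-install.tar.gz
run sh /target/root/customise.sh web03 192.168.1.13  # hostname, IP, etc.
run chroot /target lilo                              # make the disk bootable

printf '%s' "$PLAN"
```

dropping the run prefix (and getting the partition layout, image and customise script right) turns the plan into the real install script.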
Re: Debian vs Red Hat??? I need info.
On Tue, May 16, 2000 at 08:44:18PM -0700, David Lynn wrote: I agree - dpkg and apt are great compared to rpm's. However, that's all assuming that there are debian packages out there that are up to date (which they're generally not). But this seems to be the only major drawback I've found to Debian. depends if you use stable or unstable. if you use stable, then many packages will be old versions. if you use unstable, then most packages will be the latest up-to-date versions. craig -- craig sanders
Re: ITP: TIP
On Wed, Dec 29, 1999 at 09:45:54AM +0100, Jordi Mallach wrote: On Tue, Dec 28, 1999 at 09:08:50PM -0500, Clint Adams wrote: Tip stands for "Tip isn't Pico", and is a GPL'd Pico clone, written from scratch. It's great that there's a free pico clone, but I see namespace confusion and conflicts on systems with the sort of tip that uses /etc/remote, which is not currently in Debian, but should be. Oh. The author only learned about this 2 or 3 days ago, and he is still pondering the name change. Would there be any problem with an initial tip package release, and changing the name afterwards? i suggest a name change to Tipo - "Tip Isn't Pico, Oh!" binary could be /usr/bin/tipo craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Squid (suddenly) does not resolv local hostnames
On Wed, Oct 13, 1999 at 12:53:06AM +0200, Laurent Martelli wrote: I'm having a strange problem with squid. It suddenly does not resolve local hostnames which are not in /etc/hosts. For instance, if I try to browse www with 'lynx http://www/', it says: The requested URL could not be retrieved [...] two options in /etc/squid.conf may be useful to you here:

# TAG: dns_defnames    on|off
#       Normally the 'dnsserver' disables the RES_DEFNAMES resolver
#       option (see res_init(3)). This prevents caches in a hierarchy
#       from interpreting single-component hostnames locally. To allow
#       dnsserver to handle single-component names, enable this
#       option.
#
#dns_defnames off

or:

# TAG: append_domain
#       Appends local domain name to hostnames without any dots in
#       them. append_domain must begin with a period.
#
#append_domain .yourdomain.com

craig -- craig sanders
Re: dhcp-dns problems
On Wed, Sep 29, 1999 at 12:58:08AM -0400, [EMAIL PROTECTED] wrote: I have a small network @Home and use dhcp to dole out the ip's, I use the dhcp-dns package so that I can refer to these boxen by name and so that various network utilities will work. Recently I've started getting emails to root from Cron saying update packet failed. have you recently upgraded to the latest bind in potato (8.2.1-1 or later)? if so, then you need to be aware that the config file location changed from /etc/named.conf to /etc/bind/named.conf, and the zonefiles in /var/named now live in /var/cache/bind. make sure you edit /etc/bind/named.conf to include everything that was in /etc/named.conf BTW, your message should have been submitted as a bug report and not posted to debian-devel. debian-devel is for issues related to debian development, not user support. craig (package maintainer for dhcp-dns) -- craig sanders
Re: DHCP and DNS
On Thu, Jul 15, 1999 at 03:24:24PM -0600, Jason Gunthorpe wrote: The ISC DHCP v3 BETA server supports DNS updates, I have not looked at it yet, but I assume that it uses dynamic dns with bind 8. if you don't want to use the beta DHCP server, there are alternatives. i've used dhcp-dns 0.5 on several machines, and it works. it's a set of perl scripts which use bind 8's dynamic update features. http://www.cpl.net/~carville/dhcp-dns.html takes about 5 minutes to set up. there are a few other programs around which do similar things but this seems the best of them. it's GPL-ed, so i might package it. craig -- craig sanders
Re: setting up a news server
On Sun, May 23, 1999 at 04:50:33PM -0500, Matt Garman wrote: I want to set up a NNTP server for reading Usenet news offline. I was reading in the ISP-Hookup-HOWTO about some possible solutions, CNews+NewsX or CNews+suck or Leafnode... I see Debian has these all packaged. My question is: which setup is typically the easiest to set up and get running? I don't need anything too fancy except the ability to read from more than one server (e.g. my ISP's news server and my school's news server). Also, my school's server needs to be sent a login and password before I can access it. other answers seem to have missed the main point of your question, so i'll have a go :) leafnode is what you want. it can use more than one news server, and it supports login/password access to them. it fetches new articles in all groups you're interested(*) in and stores them in a local news cache (under /var/spool/news). it's very simple to set up - just install the package, and edit the config file in /etc/news/leafnode. BTW, CNews is basically obsolete these days. if you want a real news server (as opposed to a caching news server like leafnode) then inn is a much better choice. however, both inn and cnews are quite complicated pieces of software and take a lot of work to keep running smoothly. this is especially true if this is your first time as a news admin and you haven't yet written a swag of scripts to deal with all the little quirks of news. leafnode doesn't require any maintenance. it just works. IMO, it is pretty much ideal for the scenario you describe. (*) interest level is based on whether you are actively reading a newsgroup or not. if you always want to fetch a newsgroup regardless of how often you read it, then just set up a cron job which runs at least once/day and does something like: touch /var/spool/news/interesting.groups/news.group.name here's my /usr/local/sbin/touch-leafnode script to do that:

---cut here---
#!/bin/sh
newsdir=/var/spool/news
interesting=$newsdir/always.interesting
umask 002
cd $newsdir/interesting.groups
cat $interesting | xargs touch
---cut here---

i also added the following line to /etc/cron.d/leafnode:

---cut here---
0 0 * * * news /usr/local/sbin/touch-leafnode
---cut here---

to mark a newsgroup as always interesting, i just add it to the file /var/spool/news/always.interesting. leafnode still fetches it even when i don't read any news for a few days or a few weeks. craig -- craig sanders
intent to orphan: spamdb
i'm finding that the spamdb is of little use these days, as most MTAs today have built-in support for RBL, the DUL RBLs, rejecting mail from unknown domains, and postfix even has regexp header checking now. i had intended to rewrite the package properly in perl - i've always considered the current sh script versions to be just a proof of concept...but it hardly seems worth the effort now, most spam can be filtered out long before spamdb gets involved. i really don't have time to do much work on a package which isn't very useful any more. (another issue is that all the cron-job downloads of SpamDomains and Spammers and SpamNets from my home web server makes my internet connection very slow for most of every Sunday) comments?? craig -- craig sanders
dbackup (was: Re: Beta-testing and the glibc 2.1 (Was: Missing ldd? Have libc6 on hold? Get ldso from slink...)
On Wed, Mar 17, 1999 at 01:10:44PM -0800, David Bristel wrote: This is a good point, and it actually leads to an interesting idea for a package that would take care of this issue. Now, this is NOT an easy project, but, what about a package that has a list of the config files for ALL the packages, and would back up what is needed to restore a system to normal from a clean install? To have just the shadow, passwd, and the confs for all the different packages, we could back up just these files. Then, reinstall from scratch, ignore configurations, because the restore of the config files would handle it all. Some would say that this should be handled manually, but it would make it nice, and it's something that no other distribution has considered doing. Having to manually back up key files is a major nuisance. 'dbackup' did something similar to, but better than, this. unfortunately it got orphaned and eventually dropped from the dist. i have a copy still installed and can run dpkg-repack on it if anyone wants to play with it. IIRC, at the moment it outputs a list of filenames which can be fed into cpio or afio or tar etc - this is quite useful.

# dpkg -s dbackup
Package: dbackup
Status: install ok installed
Priority: extra
Section: admin
Maintainer: David H. Silber [EMAIL PROTECTED]
Version: 0.1-alpha.2
Recommends: tar | cpio
Description: Debian GNU/Linux Data Backup Program.
 Backup will copy all files that are not part of a Debian package or
 which have been modified since installation to some backup media.
 .
 Actually, at this point it is only true that dbackup produces a list
 of files which fit the above qualifications. It is up to the user to
 feed this list to some program (such as tar or cpio) for the actual
 backup.
 .
 I still need to provide user documentation such as a manual page, an
 info page, examples of use, etc.
 .
 I plan to provide a nifty-spiffy administration tool to make the
 final product easier to use, but this is not yet ready.
if nobody else is interested, i may adopt this package myself. i think it's a shame that it vanished from debian. but i probably don't have time. btw, simply backing up a system's conffiles can be done by feeding the output of 'cat /var/lib/dpkg/info/*.conffiles' into tar/cpio/afio etc. craig -- craig sanders
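the conffile-backup one-liner at the end can be sketched end-to-end; a fake dpkg info directory stands in for /var/lib/dpkg/info here so the example is safe to run anywhere:

```shell
#!/bin/sh
# sketch: feed the *.conffiles lists to tar's -T (read file names from
# stdin) to archive every registered conffile on the system.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/info" "$demo/etc"

# stand-ins for a real conffile and its dpkg bookkeeping entry
echo "some config" > "$demo/etc/foo.conf"
echo "$demo/etc/foo.conf" > "$demo/info/foo.conffiles"

# on a real system this would be: cat /var/lib/dpkg/info/*.conffiles | tar ...
cat "$demo"/info/*.conffiles | tar -czf "$demo/conffiles-backup.tar.gz" -T -

tar -tzf "$demo/conffiles-backup.tar.gz"
```

the same pipeline works with cpio or afio in place of tar, as the dbackup description suggests.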
Re: Missing ldd? Have libc6 on hold? Get ldso from slink...
On Sun, Mar 14, 1999 at 04:57:05PM -0800, Robert Woodcock wrote: The solution is to downgrade the ldso package to the one in slink, or actually take the plunge to glibc 2.1. so what's likely to break if i upgrade to glibc 2.1? will i still be left with a (mostly) usable system? (i'm willing to test it but not if my machine is going to die) craig -- craig sanders
Re: Debian goes big business?
On Fri, Jan 22, 1999 at 10:38:54AM +0100, J.H.M. Dassen wrote: On Fri, Jan 22, 1999 at 20:26:12 +1100, Craig Sanders wrote: i mostly agree but wouldn't put it anywhere near that strongly. I would. Ben's phrasing strongly reminds me of Robert A. Heinlein; especially of the concept of TANSTAAFL and the political system he describes in Starship Troopers, where the right to vote must be earned through a tour of duty of public (not necessarily military) service. In the case of Debian, users do not have the right to vote, but can earn it by becoming developers (i.e. by maintaining packages, but also by writing documentation, maintaining the website etc.). such a system works fine for an organisation (like debian) where participation or membership is completely voluntary. it would suck for the real world, where participation in the nation state is involuntary and there's nowhere outside to go to. Heinlein wrote some good books, but you've got to be careful in your reading if you want to avoid adopting some of his more fascist pro-militaristic and ultra-capitalist politics. Also, the sexual politics was certainly quite progressive for the '50s and '60s but comes across as being old-fashioned sexist trash these days. his stuff is still an enjoyable read, though (if you ignore complete crap like the number of the beast). Pournelle's even worse. in partnership with Niven he writes some great stories. take the politics with a large grain of salt, though. Must admit I like the "Think of it as evolution in action" phrase, though i use it in contexts quite contrary to their usage :-) (BTW: TANSTAAFL was Larry Niven, not Heinlein IIRC) i'd better stop now before debian-devel detours into an sf crit list for a while. craig -- craig sanders
Re: Debian goes big business?
On Wed, Jan 20, 1999 at 06:12:14PM -0500, Ben Pfaff wrote: Laurent Martelli [EMAIL PROTECTED] writes: ChL == Christian Lavoie [EMAIL PROTECTED] writes: ChL Bottom line: Debian should remain developer controlled. What about non-developper users ? Shouldn't they have a word to say, even if they can't or do not have the time to contribute with code ? They should have `a word to say', and they do--they can subscribe to Debian lists and give their feedback and advice, which developers are free to follow or ignore. But they do not, and should not, IMO, have the privilege of voting or otherwise setting policy. Users are not developers and shouldn't presume to be. i mostly agree but wouldn't put it anywhere near that strongly. users are not developers, but they might be one day. one of the good things about debian is that users who are willing to put in some work CAN join up as developers. i started that way a few years ago, and i'll bet that most debian developers did too. craig -- craig sanders
Re: Adding users from a list or database?
On Fri, Dec 18, 1998 at 05:59:53PM -0600, Steve Phillips wrote: What do you do if you have to add many users on a regular basis? there's a million ways of doing it...i usually write a little script to do it as i need it. try something like the following, which i wrote earlier tonight for someone else who asked the same question on another list...it makes use of useradd and chpasswd, see their man pages for details.

---cut here---make-users.pl---cut here---
#!/usr/bin/perl

# create users automatically. expects input to be of the form:
#
#   login:password:real name
#
# can read input from stdin or by specifying the input file on
# the command line.

# change the following if required to suit your system. they are correct
# for debian and probably for RH and most other linux distributions.
$useradd = '/usr/sbin/useradd';
$chpasswd = '/usr/sbin/chpasswd';

# uncomment only one of the following or set to tcsh or csh if you are
# into perversions.
#$shell = '/bin/false';
$shell = '/bin/bash';

$| = 1;

open(SHELL, "|/bin/bash") || die "couldn't open pipe to bash shell";

while (<>) {
    chomp;
    ($login, $passwd, $realname) = split /:/;
    print SHELL "$useradd -s '$shell' -m $login\n";
    $users{$login} = $passwd;
}

print SHELL "\n\n\n$chpasswd <<__EOF__\n";
foreach (keys %users) {
    print SHELL "$_:$users{$_}\n";
}
print SHELL "__EOF__\n";

close(SHELL);
---cut here---make-users.pl---cut here---

craig -- craig sanders
how can i find out a netbios name from an IP address.
i install quite a few (debian linux) internet gateway boxes with samba installed so that the client can get to their /var/www directory in network neighbourhood. in order to diagnose network faults (i.e. WTF can't the stupid doze boxes see/login to/etc the samba share?), i often need to find out the netbios name of a machine. for some reason this seems to be an extraordinarily difficult thing to find out if you don't already know it... given that: a) i don't have a windows machine, b) i don't want no stinking GUI tool, c) i'm usually not on site (logged in with ssh), how can i find out the netbios name of a machine when i have its IP address? can samba do it? smbclient doesn't want to do anything unless i already know the name. are there any other tools that can do it? nat (part of the smb-nat package) sometimes works, but only if nmbd isn't running...weird. nat also tries to do too much...all i want is the netbios name, i don't want it to try its lame cracking attempts. (at worst, i suppose i could hack the source of nat so that it just does what i want. nat10 is GPL code, based on samba.) any pointers to command line tools which would be useful to a unix geek would be very much appreciated. thanks, craig -- craig sanders
Re: how can i find out a netbios name from an IP address.
On Tue, 15 Dec 1998, Jens B. Jorgensen wrote: You're in luck! I once was desirous of just such a thing myself and since it didn't exist (but I knew this was possible since you can do it in NT) I went ahead and wrote one myself. It's command line and you give it the IP of the machine and it spits out the name. perfect. thank you very much. is the source GPL or open source licensed? if so, i might package it for debian... does samba.org have a contrib/ directory somewhere for neat extras like this? if it doesn't, it probably should. Actually I added a few more possibilities into it. I'll be happy to send you source but you'll need egcs and STL support since it's written in C++ and uses STL collection classes. shouldn't be a problem. debian is nearly always up-to-date with the latest stuff.

$ dpkg -l egcc g++ libstdc++2.9
||/ Name            Version        Description
+++-===============-==============-==========================================
ii  egcc            2.91.60-1      The GNU (egcs) C compiler.
ii  g++             2.91.60-1      The GNU (egcs) C++ compiler.
ii  libstdc++2.9    2.91.60-1      The GNU stdc++ library (egcs version)

craig -- craig sanders
Re: login time limits in slink???
On 15 Oct 1998, Paul Crowley wrote: Craig Sanders [EMAIL PROTECTED] writes: anyone know what it is in slink which is enforcing idle-timeout and daily time limits on serial lines? I don't have this problem, and I haven't installed idled: Description: Idle Daemon. Removes idle users. Idled is a daemon that runs on a machine to keep an eye on current users. If users have been idle for too long, or have been logged on for too long, it will warn them and log them out appropriately. yeah, i know about idled. i even package a similar daemon for debian (timeoutd). i don't have idled or timeoutd or anything similar installed on the machine in question. that was the first thing i thought of. this idle timeout only seems to occur for logins on a serial line (both terminal and ppp logins), never on console or a pty. thanks for the suggestion, but it doesn't help. this problem seems specific to slink...perhaps a new login binary does it. craig -- craig sanders
login time limits in slink???
anyone know what it is in slink which is enforcing idle-timeout and daily time limits on serial lines? i've hunted all over (even to the point of grepping every file in /etc, /bin, /usr/bin, /sbin, /usr/sbin) for it and can't find it anywhere. how do i turn it off? i don't want time limits. craig -- craig sanders
Re: HELP! Seriously messed up bo - hamm
On Sat, 19 Sep 1998, Michael Stutz wrote: So now, every time I run apt, I get this error:

Updating package status cache...done
Checking system integrity...dependency error
You might want to run `apt-get -f install' to correct these.
Sorry, but the following packages are broken - this means they have unmet dependencies:
wget: Depends: libc6
libg++27-dev: Depends: libc5-dev
libgdbm1-dev: Depends: libc5-dev

Running dselect doesn't help, either. dpkg is broken -- every time I run it I get something like: apt is telling you that there is a problem with some packages. apt is just asking for a little help - you have to manually clear the problem before it is able to continue. I really need help -- what should I do? try doing what apt suggests. run 'apt-get -f install'. if that fails, then read on: it looks like your system is only partially upgraded from libc5 to libc6. try removing libg++27-dev and libgdbm1-dev with 'dpkg -r'. then you should be able to install wget, either with dpkg or with apt ('apt-get install wget'). if that works, you should now be able to do a dselect upgrade. craig -- craig sanders
Re: Cheapbytes mess-up debian [FW: Debian 2.0 CD's]
On Wed, 26 Aug 1998, Dale Scheetz wrote: The autoup.sh script is also broken on the Official CD, so CheapBytes didn't do anything different there either. They have also been very helpful in providing a fixed version of autoup.sh to their customers. would be nice if they had sent me a copy of their fixed version so i could fold their changes into my version. (of course, they're not obligated to. the script is public domain.) BTW, i have a new version, 0.32. versions 0.29 to 0.31 never got released - i had been waiting for user feedback before releasing it. anyway, this new version fixes a few bugs and I *hope* it can work even in cases where all the packages which have to be removed and upgraded are on Hold status. i didn't know about the --force-hold argument to dpkg until today. the new version is available from the usual sites, including http://debian.vicnet.net.au/autoup/ and http://taz.net.au/autoup/ and various mirror sites.

changelogs from 0.29-0.32:

v0.29: 1998-08-15 (Craig Sanders)
  - changed all directory references from 'frozen' to 'stable'
  - fixed PKGS_NET problem (should have been $PKGS_NETBASE and $PKGS_NETSTD)
  - changed RMFILE to /root/autoup.removed-$DATE
  - fixed search for bo dpkg (for buzz upgrades). dpkg 1.4.0.8 lives in
    .../debian/dists/stable/main/upgrade-i386 now.
    note: still doesn't work for ftp upgrade. TODO.
  - fixed $FTP_DIR_2
  - changed ^ii search pattern to ^.i when searching for installed -dev pkgs

v0.30: 1998-08-15 (Craig Sanders)
  - really changed ^ii search pattern to ^.i

v0.31: 1998-08-27 (Craig Sanders)
  - --force-hold added to $DPKG_ARGS and to the $DPKG --remove line

v0.32: 1998-08-27 (Craig Sanders)
  - changed the displayed packages list as noted by Rob Hilliard.

the diff between 0.28 and 0.32 is as follows:

---cut here---
--- autoup.sh	1998/07/23 09:46:14	0.28
+++ autoup.sh	1998/08/27 05:20:40	0.32
@@ -2,7 +2,7 @@
 # upgrade a libc5 (bo) machine to libc6 (hamm).
 #
-# $Id: autoup.sh,v 0.28 1998/07/23 09:46:14 root Exp $
+# $Id: autoup.sh,v 0.32 1998/08/27 05:20:40 root Exp $
 #
 # based on Scott Ellis' excellent Debian libc5 to libc6 Mini-HOWTO
 # document at http://www.gate.net/~storm/FAQ/libc5-libc6-Mini-HOWTO.html
@@ -46,14 +46,14 @@
 #DPKG="echo dpkg"
 #LDCONFIG="echo ldconfig"
 
-DPKG_ARGS="-iBE --force-overwrite"
+DPKG_ARGS="-iBE --force-overwrite --force-hold"
 
 DATE=$(date +%m%d-%T)
 ARCH=binary-$(dpkg --print-installation-architecture)
 
 FTP_SITE_1=ftp.debian.org
 FTP_DIR_1=debian/hamm/hamm/$ARCH
 FTP_SITE_2=ftp.infodrom.north.de
-FTP_DIR_2=pub/debian/hamm/hamm/$ARCH
+FTP_DIR_2=/pub/debian/hamm/hamm/$ARCH
 
 # package variables
 
@@ -93,8 +93,10 @@
 current directory: ldso, libc5, libc6, timezones, locales, ncurses3.0, ncurses3.4,
-libreadline2, libreadlineg2, bash, libg++272, dpkg, dpkg-dev,
-dpkg-ftp, dpkg-mountable, libgdbm1, libgdbmg1, perl-base, and perl.
+libreadline2, libreadlineg2, bash, libg++27, libg++272,
+libstdc++2.8, dpkg, dpkg-dev, slang0.99.34, slang0.99.38, libgdbm1,
+libgdbmg1, perl-base, perl, data-dumper, libnet-perl, dpkg-ftp,
+dpkg-mountable, netbase, netstd.
 
 If you are using a mirror, press 'm'.
 If you need to download the files via FTP, press 'f'.
@@ -113,10 +115,10 @@
 # ask where the mirror is (this could do with some error checking)
 echo
 echo "enter the full path to your local mirror of debian:"
-echo "e.g. /debian/dists/frozen/main/$ARCH/"
+echo "e.g. /debian/dists/stable/main/$ARCH/"
 echo
 
-TRY="/debian/dists/frozen/main/$ARCH ~ftp/debian/dists/frozen/main/$ARCH ../$ARCH"
+TRY="/debian/dists/stable/main/$ARCH ~ftp/debian/dists/stable/main/$ARCH ../$ARCH"
 
 for i in $TRY ; do
   if [ -d $i ] ; then
     DEFAULT=$i
@@ -248,7 +250,9 @@
 PKGS_PERLBASE=$( echo $PKGS_PERLBASE | sed -e $SEDSCRIPT )
 PKGS_PERL=$( echo $PKGS_PERL | sed -e $SEDSCRIPT )
 PKGS_MOREDPKG=$( echo $PKGS_MOREDPKG | sed -e $SEDSCRIPT )
-PKGS_NET=$( echo $PKGS_NET | sed -e $SEDSCRIPT )
+PKGS_NETBASE=$( echo $PKGS_NETBASE | sed -e $SEDSCRIPT )
+PKGS_NETSTD=$( echo $PKGS_NETSTD | sed -e $SEDSCRIPT )
+
 echo checking that all needed files are available...
 # sanity check that we can find the packages
@@ -295,10 +299,24 @@
 DPKG_VER=$(dpkg -s dpkg | grep Version: | awk '{print $2}')
 DPKG_MINOR=$(echo $DPKG_VER | awk -F. '{print $2}')
 
+# uncomment for testing
+#DPKG_MINOR=3
+
+# bo dpkg is now in /debian/dists/stable/main/upgrade-i386/dpkg_1.4.0.8.deb
+# a quick hack to s=binary-i386/base=upgrade-i386= should fix it.
+#
+# actually, that sucks and this whole section should be rewritten. it
+# doesn't work for ftp upgrades anyway...it assumes that bo dpkg is
+# available on a local disk. i'm tempted to just delete all this stuff.
+# maybe the best thing to do is still check for bo, but don't do
+# anything except tell the user to download and install dpkg 1.4.0.8
+# before continuing
Re: autoup.sh fails when upgrading bo -> hamm
On Wed, 22 Jul 1998, JonesMB wrote: I am trying to upgrade from bo to hamm so I can run the new bash and try wine. I run autoup.sh and it ftp'ed the files needed for the upgrade. It failed to get me the file libstdc++2.8_*.deb [...deleted...] autoup.sh is expecting to find it in libs/, but it's not there anymore. it has been moved to base/. it must have happened recently because i haven't had any other complaints about it yet. download it from base, install it with dpkg and then run autoup.sh again. The autoup.sh I have is v 0.27 1998/05/29. I got it from http://www.debian.org/2.0/autoup/ Is this too old a version to be using or what? nope. that's the latest version. i'll put out a new version soon which corrects this libstdc++ problem. if you want to be certain you have the latest version, check http://debian.vicnet.net.au/autoup or http://taz.net.au/autoup/ the vicnet site is preferred - and also contains a .tar.gz file containing all the files which autoup.sh needs. debian.vicnet.net.au is my workstation at work, and has a much better net connection than the taz site (which is my home network). craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
autoup.sh v0.28 released
v0.28: 1998-07-23 (Craig Sanders)
 - libstdc++2.8 has moved from libs/ to base/
 - libnet-perl has moved from interpreters/ to base/

as usual, it is available from http://debian.vicnet.net.au/autoup/ (primary site), ftp://debian.vicnet.net.au/autoup/ and http://taz.net.au/autoup/

autoup.tar.gz on the primary site has also been updated with all the latest versions of stuff from hamm. it weighs in at 9.1MB and includes:

-r--r--r-- root/root  450820 1998-07-17 07:55 bash_2.01.1-3.1.deb
-r--r--r-- root/root   13912 1997-05-08 00:00 data-dumper_2.07-1.deb
-r--r--r-- root/root  166586 1998-06-23 20:41 dpkg-dev_1.4.0.23.2.deb
-r--r--r-- root/root   13544 1998-04-28 09:46 dpkg-ftp_1.4.9.6.deb
-r--r--r-- root/root   21946 1998-05-12 11:04 dpkg-mountable_0.7.deb
-r--r--r-- root/root  341616 1998-06-23 20:41 dpkg_1.4.0.23.2.deb
-r--r--r-- root/root  181344 1998-05-22 08:56 ldso_1.9.9-1.deb
-r--r--r-- root/root  284268 1998-06-12 12:21 libc5_5.4.38-1.1.deb
-r--r--r-- root/root  580872 1998-07-17 08:26 libc6_2.0.7t-1.deb
-r--r--r-- root/root  254826 1997-10-07 13:55 libg++272_2.7.2.8-0.1.deb
-r--r--r-- root/root  235522 1998-06-08 10:42 libg++27_2.7.2.1-14.4.deb
-r--r--r-- root/root   17296 1998-05-25 12:29 libgdbm1_1.7.3-25.deb
-r--r--r-- root/root   17230 1998-05-25 12:29 libgdbmg1_1.7.3-25.deb
-r--r--r-- root/root   65308 1997-04-28 00:00 libnet-perl_1.0502-1.deb
-r--r--r-- root/root   69678 1998-07-17 07:55 libreadline2_2.1-10.1.deb
-r--r--r-- root/root   75864 1998-07-17 07:55 libreadlineg2_2.1-10.1.deb
-r--r--r-- root/root   94784 1998-07-02 08:10 libstdc++2.8_2.90.29-0.6.deb
-r--r--r-- root/root 1340204 1998-07-17 08:26 locales_2.0.7t-1.deb
-r--r--r-- root/root  111006 1998-04-16 11:09 ncurses3.0_1.9.9e-2.1.deb
-r--r--r-- root/root  124368 1998-07-17 09:51 ncurses3.4_1.9.9g-8.8.deb
-r--r--r-- root/root  350948 1998-07-17 07:56 netbase_3.11-1.deb
-r--r--r-- root/root  611710 1998-07-05 22:07 netstd_3.07-2.deb
-r--r--r-- root/root  282680 1998-06-08 10:47 perl-base_5.004.04-6.deb
-r--r--r-- root/root 3125884 1998-06-08 10:47 perl_5.004.04-6.deb
-r--r--r-- root/root   84354 1998-07-02 12:19 slang0.99.34_0.99.38-6.deb
-r--r--r-- root/root   83902 1998-07-02 12:19 slang0.99.38_0.99.38-6.deb
-r--r--r-- root/root  261268 1998-07-17 08:26 timezones_2.0.7t-1.deb

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Lan Tcp/ip Question
On Mon, 20 Jul 1998, Tomt wrote: At 10:44 PM 7/19/1998 -0700, you wrote: It may be useful for you to assign the NIC's address to something other than 0x300. A lot of different (very different even!) cards try to use 0x300 (sound cards, primarily). There's a sound card in the machine but it's sitting on 0x330. Also you may want to try pinging the machine's own address on the ethernet. See what that produces. Aside from that I can't help you much. Works. Both machines can ping themselves but not each other.

check that you don't have an irq conflict with the ethernet card. i've had enough irq conflicts with network cards for that to be the first thing i check when i get a system which can send but not receive packets.

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
RE: Firewallsetup
CC-ed back to debian-user.

On Fri, 10 Jul 1998 [EMAIL PROTECTED] wrote:

i know this is urgent for you, sorry to take so long to reply...have been busy.

btw, you would have been better off cc-ing your question to debian-user. i'd still get a copy and you might have got a quicker answer from someone else. I'm not the only person who can help you, there are lots of knowledgeable and helpful people on the mailing list. also, many people read debian-user to learn from watching the questions and answers, so it's better to have answers posted there.

We share the cisco router and the c-net with another company. I can't put all of the 192.12.120.0/24 net inside the fw (but I can subnet the c-net). I want something like this:

inet -- cisco (192.12.120.254???)
          |
         hub -- other company (192.12.120.0/25)
          |
          | eth0
         fw
          | eth1
          |
   our network (192.12.120.128/25)

Is this possible without changing anything in the cisco? What netmasks should I use on the fw? Please help, I'm getting more and more confused the more I read about this.

yes, this is possible, but you will have to make a few small changes to the cisco. you'll have to change the netmask on its ethernet interface to a /25, and you'll have to route the second /25 via the firewall's eth0 interface.

also, you'd be better off assigning 192.12.120.128/25 to the other company, and 192.12.120.0/25 to your company. this is because the cisco is .254, and thus is in the .128/25 subnet. i'd suggest:

external (unfirewalled) net:
    network:        192.12.120.128
    netmask:        255.255.255.128
    broadcast:      192.12.120.255
    cisco:          192.12.120.254
    firewall eth0:  192.12.120.253
    other hosts:    192.12.120.129 - 192.12.120.252

internal (firewalled) net:
    network:        192.12.120.0
    netmask:        255.255.255.128
    broadcast:      192.12.120.127
    firewall eth1:  192.12.120.1
    other hosts:    192.12.120.2 - 192.12.120.126

i note that you ask "What netmasks should I use on the fw?". That's not exactly the right question. the netmask you use must be used on all hosts on the network.
this will mean reconfiguring every host, router, ethernet printer, and hub (if your hubs have ip addresses for snmp monitoring). if you don't change the netmask on all the hosts/devices then they will have no way of knowing that the net is subnetted. they will expect to find the full 192.12.120.0/24 on the local ethernet, so they won't route packets to hosts in the other subnet via the cisco, they'll just try to send them directly - which won't work.

btw, here's a useful reference for you: http://ipprimer.2ndlevel.net/ it's a good summary/intro to IP networks. and another: http://www.internetnorth.com.au/keith/networking/subnet1.html a set of tables which can be very useful for subnetting. you can find more by going to altavista or somewhere and searching for "CIDR" and "subnet".

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
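The address plan above translates into firewall interface configuration along these lines (a sketch only, using the addresses suggested in this post and period net-tools commands; interface names assumed):

```shell
# firewall eth0 on the external /25, eth1 on the internal /25
ifconfig eth0 192.12.120.253 netmask 255.255.255.128 broadcast 192.12.120.255 up
ifconfig eth1 192.12.120.1   netmask 255.255.255.128 broadcast 192.12.120.127 up
route add default gw 192.12.120.254     # the cisco, on the external subnet
echo 1 > /proc/sys/net/ipv4/ip_forward  # let the box route between the two /25s
```

The same /25 netmask then has to be set on every other host, as described above.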
Re: Debian 2.0. Bind 8 works but causes dials for local domain names
On Sat, 11 Jul 1998, Steve Ball wrote: Requests for Internet addresses and names outside my domain are looked up on Internet DNS servers and correctly returned. Reverse lookups for addresses on my local domain are properly resolved and no Internet lookup is performed. Forward local dns lookups are returned correctly but there is a dns lookup on the internet that triggers a dialup, and any subsequent dns lookups also trigger internet lookups.

i don't know if this is the cause of the problem or not (it probably isn't), but you have an extra } in the definition for zone "local":

zone "local" {
        type master;
        file "local";
};
};

it could be confusing bind.

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
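For comparison, a stanza with balanced braces (assuming the zone really is named "local" and its data lives in a file called local) would look like:

```
zone "local" {
        type master;
        file "local";
};
```

Each `{` opened by the `zone` statement gets exactly one closing `};`; the stray second `};` in the quoted config closes nothing.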
Re: unsubscribe helper line
On Thu, 9 Jul 1998, Jeff Schreiber wrote: I'm with you. I think a larger subset of folks will screw this up. I'm sure it's an attempt to save a bit of bandwidth, but . . . Personally I think it's cool. i agree. just cut and paste the line into an xterm window or console. what could be simpler? i think that it would be better like this, though: echo unsubscribe | mail [EMAIL PROTECTED] it's simpler and easier to read (pipes don't confuse newbies anywhere near as much as redirection of stdin). also serves as an example to teach newbies something about pipes and the Unix Way Of Doing Things. :-) do all MTAs provide a /usr/sbin/sendmail clone? i think they do. if so, then it should call sendmail rather than mail to ensure that the From address is properly masqueraded by the MTA. so that makes it: echo unsubscribe | /usr/sbin/sendmail [EMAIL PROTECTED] craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: tail and grep
On Wed, 8 Jul 1998, Patrick Olson wrote: On Thu, 9 Jul 1998, Hamish Moffatt wrote: I think you need to somehow ensure that tail isn't used until that line is written in the log; -f will get it to wait, but will never get you any output in the dynamic.IP file. That makes good sense. I never thought about doing it that way (and don't have the foggiest idea how). Craig Sanders sent me what looks like some good advice but I haven't figured out how to implement it. I will take a moment to figure it out before asking any silly questions :) I did notice that I am using a script called 'ppp-on' instead of 'ip-up' so I have to wonder if that is part of the cause of my confusion.

ppp-on and ip-up are two different things. you use ppp-on to start your ppp connection. /etc/ppp/ip-up is started automatically by the ppp daemon (pppd) whenever a ppp link comes up. btw, pppd also runs /etc/ppp/ip-down whenever a ppp link goes down. pppd even passes useful arguments to the ip-up/ip-down scripts. i listed these in my last message.

common uses for the ip-up script are to: set up routes, ip masquerading, firewalling, ip accounting, login/user accounting, register for dynamic dns (e.g. a .ml.org domain), start sendmail/fetchmail/whatever, swap /etc/resolv.conf for one that works with your ISP, and so on. ip-down is generally used to turn off or undo the stuff done in ip-up...e.g. terminate accounting, shutdown sendmail, etc.

If I had a way to make the ppp-on script wait until the connection was established before going on past

exec /usr/sbin/pppd debug lock modem crtscts /dev/ttyS1 38400 \
    asyncmap 20A escape FF kdebug 0 $LOCAL_IP:$REMOTE_IP \
    noipdefault netmask $NETMASK defaultroute connect $DIALER_SCRIPT

to the rest of the script, I could just put tail in after the line above and it wouldn't be run until the connection was established (and the information was in /var/log/messages).
this is exactly why the ip-up is there...so you can run things immediately after the ppp link comes up. craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Pine 4.00 termcap(Pine in Debian?)
On Thu, 9 Jul 1998, Mark Mealman wrote: On Thu, 09 Jul 1998, Tim Buller wrote: I'm trying to compile Pine 4.00 on a hamm/i386 system, and it wants to Is there any truth to the rumor that Debian's dist won't include pine because of some restrictions in the license put out by Washington state? pine is distributed in hamm as two packages, the pristine original source code in pine-src and the debian patches in pine-diff. see below for the reason for this unusual distribution method. install them both with dpkg or dselect and follow the instructions to build your own pine, pico, and pilot packages. it takes about 10 minutes to compile on a reasonably fast machine. compiling it takes only two simple commands...one to extract the sources, and the second to build the packages. very easy. Washington's license doesn't seem any more restrictive pine's license is a lot more restrictive than most free software licenses. it does not allow distribution of modified binaries, therefore debian is not legally permitted to distribute a pre-compiled pine. than other public domain licenses. pine isn't public domain. most free software isn't public domain. the term public domain has a very specific meaning - i.e. that the work has no copyright. a very small percentage of free software is actually public domain, while most Free Software is copyright with a license (e.g. GPL, BSD, Artistic license) allowing use, modification, and distribution. craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Debian Package Manager Worthless Junk???
On Tue, 7 Jul 1998, Pete Harlan wrote: Bob Nielsen writes: (To be fair, I haven't used Red Hat since 4.2 and it may have improved since then, but they severely mismanaged the conversion to glibc.) He who lives in a glass house should not throw stones, methinks... Debian 1.3.1 is a year old. Six months ago 2.0 was announced as Near Completion, when it was nearer inception than completion.

what an odd opinion. i've been running hamm on my home machines and desktop box at work since bo became frozen and hamm became unstable. that's a good time to flag as hamm's inception date. It became good enough for me to trust on production servers around six months later in September last year. IMO, hamm was good enough for use by the general public since around Nov or Dec.

debian doesn't have the commercial pressures that RH has, so we can afford to be perfectionist about what we do. i'd rather have it done right than done hastily.

I'm not ragging on the Debian team, just saying lighten up on Red Hat a little. We're all on the same side, eh? They chose to risk leaping before looking, while Debian risked hesitating. Was the latter more prudent? Maybe. Are my Debian 1.3.1 systems prehistoric? Yes. Is that bad? Sometimes.

i don't think bob was attacking RH at all. he was just stating a truth - RH *did* mismanage the upgrade to glibc, as everyone who risked RH5.0 found out. They should not have released 5.0 in the state it was in.

if pre-historic software is a bigger concern to you than running pre-release software is, then you could have upgraded to hamm at any time in the last 6 to 9 months without facing any major problems. you certainly would have had a better, more stable glibc system than if you had tried RH5.

what did debian risk by taking the time to do it right? not a lot. a few impatient users may have chosen to install RH5 rather than wait for hamm or trial the pre-release version from the ftp site. big deal, like that really hurts debian a lot.
craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Hard lock-up crashes, need some clues!
On Mon, 6 Jul 1998, Shaleh wrote: What is running / not running at the time of the crash. The ^@ could indicate a daemon overflowing its buffer -- it could be a symptom or a cause. sounds more like a symptom to me. i've seen that lots of times after crashes - my guess is it's a result of fsck allocating the remainder of a block to the logfile or something like that. When I run Netscape and Enlightenment 13.3 I occasionally have this happen. Seems that NS does some things that eventually torque off E and X. netscape does weird shit. i've had it take down my X session a few times and occasionally (rarely) lock up the whole machine. what do you expect from commercial/closed-source software? :-( unfortunately, netscape is much better than any alternative - too bad it's so badly written. craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: tail and grep
On Tue, 7 Jul 1998, Patrick Olson wrote: when I try

tail -f /var/log/messages | grep "local IP"

it prints (with a real IP address instead of 123.123.123.123)

Jul 7 20:06:00 server2 pppd[587]: local IP address 123.123.123.123

on my console. That's exactly what it should do. But if I try to redirect it to a user's file (so he can see what his dynamic IP is) using

tail -f /var/log/messages | grep "local IP" > /home/pppusers/dynamic.IP

it does nothing but create a 0 byte file. Questions: 1. What am I doing wrong? 2. Is there a way I can put this in the background so I don't have to remain logged in as root?

i think you've got the wrong solution to the problem. in other words, there are better ways of doing what you want. why not do this in /etc/ppp/ip-up (or /etc/ppp/ip-up.d if you are running hamm)? add lines like these to /etc/ppp/ip-up:

DYNFILE=/tmp/dynamic.IP
# first, delete the file just in case some evil user has it symlinked to
# a system file (like /etc/passwd or /bin/bash):
rm -f $DYNFILE
echo $4 > $DYNFILE
chmod a+r $DYNFILE

if this is for a dialin user (and not for a local console user who you've given ppp dialout access to) then you probably need to find out what their home directory is and put the dynamic.IP file in there. try something like this instead:

USER=`w | grep $1 | awk '{print $1}'`
DYNFILE=/home/$USER/dynamic.IP
rm -f $DYNFILE
echo $4 > $DYNFILE
chown $USER $DYNFILE
chmod a+r $DYNFILE

(note: these sh script fragments are untested. use as a guideline only. don't expect them to work as is. RTFM and understand what it does before you trust any random code posted by a complete stranger on a mailing list)

the /etc/ppp/ip-up script is passed the following parameters from pppd when the connection is established. that is where the $1 and $4 above come from.
# This script is called with the following arguments:
#	Arg	Name				Example
#	$1	Interface name			ppp0
#	$2	The tty				ttyS1
#	$3	The link speed			38400
#	$4	Local IP number			12.34.56.78
#	$5	Peer IP number			12.34.56.99

there's also a 6th argument for recent (hamm only, i believe) versions of pppd. i have no idea what it's for.

#	$6	Optional ``ipparam'' value	foo

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Fw: Problem setting interrupt and address on network adaptor for NC2501-3 Accton Lanstation
On Wed, 8 Jul 1998 [EMAIL PROTECTED] wrote: First of all I would like to introduce myself to you as Riaad Isaacs. I am employed by POS INTERNATIONAL in CAPE TOWN SOUTH AFRICA which distributes your ACCTON Lanstation 586 MMX / PDA 2000 / NC2501-3. I would like to bring it to your attention that I was unable to change the existing network card settings from Base address 6200 and interrupt 11 to anything else.

you haven't really given much information about what the problem is, so i'm going to make a few guesses.

if it's on io address 0x6200 then it's probably a PCI card. if it's a PCI card, then the only reason i can think of why you'd want to change the io (or more likely, the IRQ) is because you have an old ISA card which can't be changed on that ioport/irq.

if this is the case, then the best thing to do is to reboot the machine, go into the bios, and set IRQ 11 to "Legacy ISA" rather than "PCI PNP". This will prevent the BIOS from allocating that IRQ to a PCI card. then reboot linux. you may (but probably won't) have to change your /etc/modules or /etc/conf.modules line telling linux the new IO address or IRQ of your network card. most PCI card drivers in linux auto-detect the io address and IRQ so you probably won't have to make any software changes. if this is not the case for your accton card (i've never heard of them before so have no idea what they are like), then you can type cat /proc/pci to get a listing of pci cards, and what IRQ they are using.

hope this helps.

I have been getting endless requests from our clients to solve this problem. PLEASE;PLEASE;PLEASE can you help THIS IS REALLY VERY VERY VERY URGENT Thanking you in anticipation.

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Firewallsetup
On Wed, 8 Jul 1998 [EMAIL PROTECTED] wrote: My goal is to setup a firewall to protect my subnet like this:

Internet
   |
Cisco router (192.12.120.254)
   |
Local net 192.12.120.0 netmask 255.255.255.0
   |
FIREWALL eth0 = 192.12.120.190, eth1 = 192.12.120.202
   |
Protected subnet 192.12.120.200 netmask 255.255.255.252

This worked fine when I used masquerading and a fake net (192.168.2.0) but not when I try to use real IP addresses and a subnet. This is the firewall setup:

(outside) eth0:
    IP        = 192.12.120.190
    Netmask   = 255.255.255.0
    Network   = 192.12.120.0
    Broadcast = 192.12.120.255
    Gateway   = 192.12.120.254

(inside) eth1:
    IP        = 192.12.120.202
    Netmask   = 255.255.255.252
    Network   = 192.12.120.200
    Broadcast = 192.12.120.203
    Gateway   = 192.12.120.190

you've got mismatched netmasks on the internal subnet and the external subnet. they won't be able to communicate with each other through the firewall/gateway box because all the machines on eth0 think that they have a full /24 (class C), and that 192.12.120.202/255.255.255.252 is on the local eth0 ethernet, not routed through the fw box.

i'm not sure if i'm explaining this very clearly. from the nature of the mistake you've made, i think you need to read up on tcp/ip and on building firewalls before building one. subnetting isn't that difficult but it's easy to make mistakes if you don't understand how it works.

unless you've got a good reason not to, stick with using private addresses (192.168.2.0) for your internal network. that makes building the firewall purely a routing and ipfw problem, and avoids the hassle of calculating netmasks. if necessary (e.g. for accounting purposes), you can even route between your external net and your internal 192.168.2.0 net, but then your internal network can be reached if hosts on your external net are compromised. security policies are always a tradeoff between convenience vs. security.

I have tried to turn on arp and promiscuous mode but that doesn't help. I'm able to ping both the Internet, localnet, and subnet from the firewall. I'm able to ping the firewall (both addresses) from a host on the subnet. Using tcpdump I see that when I ping a host from the subnet to the local net then the traffic is forwarded out but not back to the host on the local net. My ipfw config is set to accept all traffic.

yes, that sounds consistent with messing up the subnetting. it's not an ipfwadm or a routing problem, you have subnetted your IP space incorrectly.

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
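The mismatch is easy to see with plain shell arithmetic on the last octet of 192.12.120.202 (a standalone illustration, not part of the original post): the network address is the IP ANDed with the netmask, and the two netmasks in use give different answers.

```shell
# network address = IP AND netmask, shown for the last octet only
echo $(( 202 & 252 ))   # under 255.255.255.252 -> 200 (the .200/30 subnet)
echo $(( 202 & 0 ))     # under 255.255.255.0   -> 0   (the full .0/24 net)
# hosts configured with /24 therefore believe .202 is on their own segment,
# so they ARP for it directly instead of routing via the firewall.
```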
RE: Firewallsetup
On Wed, 8 Jul 1998 [EMAIL PROTECTED] wrote:

(outside) eth0:
    IP        = 192.12.120.190
    Netmask   = 255.255.255.0
    Network   = 192.12.120.0
    Broadcast = 192.12.120.255
    Gateway   = 192.12.120.254

(inside) eth1:
    IP        = 192.12.120.202
    Netmask   = 255.255.255.252
    Network   = 192.12.120.200
    Broadcast = 192.12.120.203
    Gateway   = 192.12.120.190

you've got mismatched netmasks on the internal subnet and the external subnet. they won't be able to communicate with each other through the firewall/gateway box because all the machines on eth0 think that they have a full /24 (class C), and that 192.12.120.202/255.255.255.252 is on the local eth0 ethernet, not routed through the fw box.

Thanx Craig. I do need (I think) to use real IP addresses because I need to have multiple web-servers (accessible from the Internet) inside the firewall that should be protected. I thought it was possible to tell my fw box to route all traffic between the two subnets. Is it possible to route eg 192.12.12.202 to a host on the private network eg 192.168.2.202?

you must have misunderstood what i said (not surprising, because i didn't explain it very well)

you *can* use 192.12.120 addresses on both sides of the firewall (i.e. internal and external), as long as they are subnetted properly. this generally means splitting the net into two or more equally sized subnets. for example... two subnets: .0-127 and .128-255, or four subnets: .0-63, .64-127, .128-191, and .192-255, or eight subnets: .0-31, .32-63, .64-95, ..., and .224-255

note it is possible to run more than one subnet on a single ethernet segment. for example, you could run .0-63, .64-.127, .128-.191 on eth0 and .192-.255 on eth1, as long as you always remember that eth0 actually had three subnets on it and not just one network. the three eth0 subnets would only be able to communicate with each other via a router (i.e. your firewall box)...they are completely separate networks even if they happen to be on the same cable segment!
what you can't do is just take a chunk out of the middle of a net, stick it on the other side of a firewall and expect that it will work.

(actually, if you're careful and know what you are doing you might be able to fake it by publishing arp entries for each of the hosts that 'belong' on eth0 but are actually physically located on eth1. possible, but tricky and complicated and easy to mess up. this is the sort of thing that evolves - mutates is more accurate - into an undocumented nightmare)

Other solutions how to protect just a part of my C-net?

one idea that occurs to me is that you could connect your firewall box directly to the cisco router (use a cross-over 10baseT cable or coax), and use 192.168.x addresses for that network segment. all of your hosts could then be on 192.12.120.0/24. use ipfwadm firewall rules to protect specific hosts, or protect them all (default policy deny) and use allow rules to unprotect certain hosts/ports. something like this:

inet --- cisco
           |
           |  192.168.1.0 (cross-over cable)
           |
          eth0
         linux
          eth1
           |
   192.12.120.0/24 segment (your hosts)

it would simplify things if your ISP could allocate you an IP address for the cisco's internet (isdn??) interface. your ISP would route your /24 net to your cisco, and your cisco would know to route it to the linux box. the linux box would apply firewall rules to filter out undesirable connections.

it would simplify things even further if you could replace the cisco with an ISDN card for your linux box. that's assuming your internet connection is ISDN, of course. if it's some other connection type it may be worth your while finding out whether linux supports it.

craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
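The "published arp entries" trick mentioned above would look roughly like this (a sketch only: the host address is from the thread, but the net-tools arp invocation is assumed and would need checking against arp(8) on the actual system):

```shell
# on the firewall: publish an ARP entry on eth0 for a host that is
# physically behind eth1, answering with eth0's own hardware address,
# then route the host's traffic inward via eth1
arp -i eth0 -Ds 192.12.120.202 eth0 pub
route add -host 192.12.120.202 dev eth1
```

One such pair is needed per relocated host, which is exactly why this approach degenerates into the "undocumented nightmare" described above.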
RE: Firewallsetup
On Wed, 8 Jul 1998 [EMAIL PROTECTED] wrote: I do need (I think) to use real IP addresses because I need to have multiple web-servers (accessible from the Internet) inside the firewall that should be protected. I thought it was possible to tell my fw box to route all trafic between the two subnets. route, no. redirect or masquerade or proxy traffic, yes. Is it possible to route eg 192.12.12.202 to a host on the private network eg 192.168.2.202? you could use redir or ipportfw or rinetd or similar programs to transparently redirect connections to a certain host:port to another machine. e.g. set up an ip_alias on the linux box for 192.12.120.202, and then configure one of the above programs to redirect/proxy/masquerade all port 80 (www) connections for that IP address to 192.168.2.202 this may be your simplest solution. all of the above mentioned programs are available as debian packages. read the documentation for all of them to see which best suits your needs. they are all in hamm. i think only redir was available for bo. (i haven't used any of them, i only know of their existence...not the details of how to configure them) craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
Re: Please help with IP Aliasing
On 1 Jul 1998, Andy Spiegl wrote: I am currently setting up a Mail and Webserver (hamm, 2.0.33). I have got a whole package of 256 IP addresses that I want to assign to this server. In the NET-3-HOWTO I read that I have to set it up like this:

# here I am trying to set up the IP-Aliasing for the whole
# subnetwork XXX.231.206.x
ifconfig eth0:1 XXX.231.206.1 netmask 255.255.255.0 up
route add -net XXX.231.206.0 netmask 255.255.255.0 eth0:1
# [...]
# and so on, until:
ifconfig eth0:254 XXX.231.206.254 netmask 255.255.255.0 up
route add -net XXX.231.206.254 netmask 255.255.255.255 eth0:254

firstly, replace all those ifconfig and route commands with something like this:

i=1
while [ $i -le 254 ] ; do
    ifconfig eth0:$i XXX.231.206.$i netmask 255.255.255.0
    route add -host XXX.231.206.$i eth0:$i
    i=$(( $i + 1 ))
done

secondly, the route command is optional. and if you do use it, you should use a host route, rather than a network route.

third: do you *really* need all those aliases configured right now? if not, then only configure the ones you actually need, *when* you need them.

fourth: for a virtual hosting system, it's not terribly difficult to set things up so that the configurations for virtual web, ftp, mail, dns, and ip_aliasing are all controlled from one file. e.g. make a file called /etc/virtual-hosts which contains the following info:

#IP-address	domain name	username
XXX.231.206.1	foo.com.au	foo
XXX.231.206.2	bar.com.au	bar

modify /etc/init.d/networks to use field 1 (cut or awk or perl can extract the info for you - e.g. awk '{print $1}' prints field 1) for configuring the ip aliases. write the script so that it ignores blank lines and comments (grep -v '^$\|^#' is a good start).
i=1
for j in $( grep -v '^$\|^#' /etc/virtual-hosts | awk '{print $1}' ) ; do
    ifconfig eth0:$i $j netmask 255.255.255.0
    route add -host $j eth0:$i
    i=$(( $i + 1 ))
done

then write scripts which generate config files for apache and proftpd using all three fields (username being used to derive the public_html and anon ftp dirs for the virtual host...subdirectories of ~username).

how you handle virtual mail depends on which mailer you use. if you use sendmail, then adding a line like

@domain		username

to /etc/virtusertable and then running makemap hash virtusertable < /etc/virtusertable will do the job.

zone files for bind can be auto-generated too, using the first two fields...most virtual hosts will be identical except for IP address and domain name. write this so that it only generates a zone file if one doesn't already exist.

finally, write a Makefile to tie it all together...so adding a new virtual host is as simple as editing /etc/virtual-hosts and typing make.

you *can* do all this in sh/awk/sed/cut but doing it in perl will be much easier, especially where you need to use more than one field from /etc/virtual-hosts at a time. doing that in perl is trivial. in sh it is difficult.

BTW, you can add as many extra fields to /etc/virtual-hosts as you need...e.g. you could add a type field which defines whether a particular virtual host is mail, web, ftp, or all three.

What I want seems to work this way, but I can't imagine that this is the right way to do it. And if I will ever get another subnetwork to add, how would I add it using the above method? I found that eth0:255 is the highest possible virtual network number. So I couldn't add any more? All you network-gurus: Please give me a hint or any pointer as to where I can find more info on that.

you can increase this limit by modifying the kernel sources. or start using 2.1 series kernels. alternatively, stick another ethernet card in the machine and start using eth1:0 - eth1:255 aliases. the limit is per interface.
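As a concrete, runnable sketch of the field-extraction step (the file contents here are hypothetical, with the post's XXX octets replaced by a made-up 10.x prefix):

```shell
# build a sample /etc/virtual-hosts in /tmp, then pull out field 1 (the
# IP addresses), skipping blank lines and comments -- the same
# grep | awk pipeline described above
cat > /tmp/virtual-hosts <<'EOF'
#IP-address	domain name	username
10.231.206.1	foo.com.au	foo

10.231.206.2	bar.com.au	bar
EOF
grep -v '^$\|^#' /tmp/virtual-hosts | awk '{print $1}'
# -> 10.231.206.1
#    10.231.206.2
```

Swapping `$1` for `$2` or `$3` feeds the same file into the zone-file or apache/proftpd generators.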
if you've got more than 255 virtual hosts then you probably want another machine to host them on anyway. don't try to make one machine do too much. craig -- craig sanders -- Unsubscribe? mail -s unsubscribe [EMAIL PROTECTED] /dev/null
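the generator scripts described above can be sketched roughly as follows. this is a minimal illustration, not the author's actual script - the VirtualHost output layout and the /tmp demo path are assumptions; the three-column /etc/virtual-hosts format is taken from the post:

```shell
#!/bin/sh
# sketch: emit apache VirtualHost stanzas from "IP domain username" lines,
# skipping blank lines and comments as suggested in the post.
gen_vhosts() {
    grep -v '^$\|^#' "$1" | while read ip domain user ; do
        printf '<VirtualHost %s>\n' "$ip"
        printf '    ServerName www.%s\n' "$domain"
        printf '    DocumentRoot /home/%s/public_html\n' "$user"
        printf '</VirtualHost>\n\n'
    done
}

# demo on a throwaway copy of the table format from the post
cat > /tmp/virtual-hosts.demo <<'EOF'
#IP-address    domain          username
10.0.0.1       foo.com.au      foo

10.0.0.2       bar.com.au      bar
EOF
gen_vhosts /tmp/virtual-hosts.demo
```

the same read loop can feed the proftpd, sendmail, and bind generators; only the printf body changes per service.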
ppp over ssh?
anyone know how to get ppp to run over an ssh session? i.e. i want to start pppd on one machine, and make it use ssh rather than chat to initiate the connection. i can get pppd to use "ssh -C -x -e none remote.host.name -l remoteuser" instead of the usual 'chat' program. after login, the remote host starts pppd...that side of things is working fine. the trouble is that pppd on this side is not attaching to the ssh connection. reading the docs, there doesn't appear to be any way of making it do so. should i be setting up a named pipe or a socket or something to do this? any ideas/clues would be appreciated. thanks, craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
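editor's note for later readers: pppd releases after this post added a `pty` option which solves exactly this problem - pppd attaches to the stdin/stdout of an arbitrary command instead of a serial device. a sketch only (hostnames and paths are placeholders, and option details vary by pppd version):

```
pppd updetach noauth passive \
    pty "ssh -e none -o BatchMode=yes remote.host.name \
         /usr/sbin/pppd notty noauth passive"
```

the `notty` option on the remote side tells that pppd to use its standard input/output rather than a controlling terminal, so no named pipe or socket is needed.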
Re: AFIO usage...
On Fri, 19 Jun 1998, Jay Barbee wrote: I was wondering how I would use AFIO to replace the way I use tar. Here is a sample:

  tar c -X /root/backup/exclude -f /mnt/scratch/linux/backup.tar /

What I do not know how to do is exclude several files which are listed in the file 'exclude', which looks like:

  ---
  /cdrom
  /home/public/old-system-backup/*
  cache*
  core
  ---

The TAR command works, but I cannot figure out how to use AFIO in the same manner.

here's a simple shell script i wrote before i found the afbackup program. note: i've updated it slightly to use the 'tempfile' command. i believe that this command is only available in hamm. modify as required if you're not using hamm.

play with this, it will do what you want. however, i strongly recommend that you investigate the afbackup package. it can do full and incremental backups - it's a complete backup system, whereas this script is just a quick-and-dirty hack to get the job done.

---cut here--- /usr/local/bin/afio.backup ---cut here---
#! /bin/sh
# by Craig Sanders. This script is hereby placed in the public domain.
# do whatever you want with it, i don't care :)

TAPE=/dev/st0
LIST=$(tempfile)

if [ "$1" = "--erase" ] ; then
    mt -f $TAPE erase
fi

# list of directories to exclude from the backup
EXPATHS="/proc /dev /cdrom /dos /mnt \
    /tmp /usr/tmp /var/tmp \
    /var/spool/news /var/spool/squid \
    /usr/src/linux"

# Convert into the form of a regex that find can use.
EXCLUDE=$( echo $EXPATHS | \
    sed -e 's,^,\\\(^,' -e 's, ,$\\\)\\\|\\\(^,g' -e 's,$,$\\\),' )

# now construct the list of files to backup...
find / -regex "$EXCLUDE" -prune -o -print | sort > $LIST

# and finally, do the backup!
cat $LIST | afio -o -c 512 -s 149m -v -z -L /tmp/afio.log -Z -G 9 $TAPE

# now clean up
rm -f $LIST
mt -f $TAPE offline
---cut here--- /usr/local/bin/afio.backup ---cut here---

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
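the sed invocation in the script above is dense; here is a standalone demo (path list shortened for brevity) of what it builds - an emacs-style alternation regex suitable for find's -regex test:

```shell
# build \(^path$\)\|\(^path$\)... from a space-separated path list,
# using the same sed expressions as the backup script above
EXPATHS="/proc /dev /cdrom"
EXCLUDE=$( echo $EXPATHS | \
    sed -e 's,^,\\\(^,' -e 's, ,$\\\)\\\|\\\(^,g' -e 's,$,$\\\),' )
printf '%s\n' "$EXCLUDE"
```

with the shortened list this prints `\(^/proc$\)\|\(^/dev$\)\|\(^/cdrom$\)`; with the full EXPATHS it expands to the expression the script passes to `find / -regex "$EXCLUDE" -prune`.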
Re: 1.3.0 to hamm
On Mon, 22 Jun 1998, Eugene Sevinian wrote: Though I had asked this before, I have not received exact answers until now. The problem is that I have version 1.3.0 installed and I would like to know: should I upgrade it to 1.3.1 and then use autoup.sh for shifting to hamm, or can I go directly to hamm?

you don't need to upgrade to 1.3.1 first. you can upgrade from 1.3.0 directly to hamm using autoup.sh or by following the howto. autoup.sh does everything that's in the howto (and more...it's more up-to-date) and is a lot easier :-)

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Please clarify...
On Thu, 18 Jun 1998, E.L. Meijer (Eric) wrote: The autoup.sh is meant to be used in the bo-hamm upgrade, AFAIK. Normal updates are easy, just point dselect/apt to a mirror, get updates and let it install. Just fire up and tap enter a few times. It is usually a good idea to upgrade dpkg first with dpkg, in case there have been enhancements of the package system.

for an upgrade from bo/rex/buzz to hamm, DO NOT attempt to upgrade with dpkg. Doing so will break bash, resulting in a non-working system...recovering from this situation is difficult for an expert and probably impossible for a novice. DO NOT TRY THIS AT HOME. Use the HOWTO, or use autoup.sh, which a) upgrades bash and libreadline and other stuff in the correct order, and b) upgrades dpkg to the latest version.

my preferred method for upgrading to hamm is:

1. download autoup.sh and the tarball from http://debian.vicnet.net.au/autoup
2. read autoup.sh to get an understanding of what it is about to do. also read Scott's HOWTO.
3. run autoup.sh
4. when autoup has run, THEN (and only then) is it safe to run dselect to complete the upgrade. Use the 'mountable' method if you have a CD or a local mirror - it works better than the old 'mounted' method. Alternatively, use apt (but you'll need to install apt and several other packages by hand first).

as mentioned yesterday, i have used this method to successfully (and painlessly) upgrade dozens of systems to hamm. it works.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Hamm Release?
On Thu, 18 Jun 1998, Earl Goodson wrote: Please answer a newbie's questions...why is this LSL sales slip reporting that 2.0 is an Official Release?

i have no idea why LSL are saying that. it probably means one of the following things:

1. they're lying
2. they are confused
3. they expect 2.0 will be released by the time they ship your CD
4. they will delay shipping your order until 2.0 has been released.

LSL seem pretty honourable so i don't think 1 is very likely. option 2 is possible, but also unlikely. write or call them and ask what the story is - they are the only ones who know for sure what they mean.

Did I get the wrong one, should I have ordered 1.3.1 instead, and will I be unable to get help for 2.0 here?

you'll still get help for it here.

  Product ID  Product Name                 Unit Price  Qty  Item Total
  L000-039    Official Debian Release 2.0  $3.95       1    $3.95

  Weight (lbs.): 0.05

-- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Please clarify...
On Tue, 16 Jun 1998, Steve Lamb wrote: Yes, but *there is no need for a re-install*! Debian has a great and superior upgrading mechanism, and your system will update cleanly through every version, even major version changes. "The success varied" and "mostly all users can upgrade easily...to hamm *without* reinstall" also states that there were failures and that there will be users who cannot upgrade w/o a reinstall. "There is no need for a re-install" is an absolute. In your own words you have conceded there will be cases where there will be a need for a reinstall.

while it is theoretically possible that a dselect/dpkg upgrade will fail, i haven't seen it happen yet in several years of using and upgrading debian. i've done dozens of bo to hamm upgrades (using autoup.sh) without a problem. i've done a handful of rex to hamm upgrades without a problem (also using autoup.sh). (i know that people have done buzz to hamm upgrades with autoup.sh too - but i haven't done any myself)

also, over the last few years i have done literally hundreds of debian upgrades on dozens of machines. i keep most of the machines i am responsible for in sync with the latest stuff in unstable - most machines get upgraded every week or two...some get upgraded every six months or so. in all of these upgrades i have not once run into any really serious problem - one which corrupted dpkg's package status info...the majority of problems encountered were minor, easily solved with a few minutes work (editing config files or resolving dependencies manually by installing/removing stuff with dpkg). occasionally an updated version of a package is extremely buggy and i have to revert to an earlier version.

the only time i have ever had to reinstall from scratch was the result of hardware failure. dpkg isn't proof against a dead hard disk.
so, to reiterate what i said earlier: in theory, you might occasionally need to re-install rather than upgrade. but in practice that will only be necessary if your hardware fails. invest in a tape backup system and a UPS.

in other words, empirical evidence supports the assertion that "there is no need for a re-install".

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Please clarify...
On Wed, 17 Jun 1998 [EMAIL PROTECTED] wrote: Can you give me or point me to 1.1 to 1.2 to 1.3, or 1.1 to 1.3, upgrade instructions? Thanks. imo, you'd be better off waiting for hamm (2.0) to be released - or order a pre-release hamm CD (there are several people who sell them...i think netgod - [EMAIL PROTECTED] - burns unstable CDs on request). hamm isn't released yet, but it is stable and high quality. it works. anyway, the autoup.sh script should be able to upgrade you from 1.1 to 2.0 without a problem...i know it was used to do that a few months ago, but i don't think a buzz upgrade has been tested since then (not many people are still running 1.1) craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
key repeat in X
anyone know how to turn repeating keys back on after some stupid program (like quake) has screwed it up? i don't want to logout and restart X (i shouldn't have to anyway: this is unix, not 'doze :)

after checking out the x11 quakeworld client, i find that my keys don't repeat if i hold them down...very irritating. interestingly, if i switch to a text VC and login then they repeat...but when i switch back to X they don't. so there must be some X setting to turn it on/off.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: key repeat in X
On Thu, 11 Jun 1998, David Keegel wrote: ] anyone know how to turn repeating keys back on after some stupid ] program (like quake) has screwed it up? Check out the xset program. "xset q" will query status (so you know what is going on without changing anything). "xset r on" might fix your key repeat problem.

thanks, that did it. i actually checked xset (i already use it to enable DPMS screen blanking) before posting my message. mustn't have looked hard enough, because even though i didn't see any mention of key-repeat when i looked at it before, now that i know it's there it's easy to see. selective blindness...too much haste :-(

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Accessing html forms...
On Wed, 3 Jun 1998, Chris wrote: Sorry, I think we've got confused. I know perl very well, and as Ralph said it has proved invaluable. However, what I want to do is ~send~ something to a html server via a POST operation.

what you want is LWP (the libwww-perl package). I have used this to do both GET and POST requests from within a perl script (e.g. i wrote a q&d hack to send the Subject: line of an email message sent to a certain address via a free WWW to SMS service). According to the manpage, LWP is capable of GET, PUT, POST, and HEAD methods. it can be used for many things, but it is ideal for writing perl web-bots. here's an example from 'man LWP':

  An Example

  This example shows how the user agent, a request and a response are
  represented in actual perl code:

    # Create a user agent object
    use LWP::UserAgent;
    $ua = new LWP::UserAgent;
    $ua->agent("AgentName/0.1 " . $ua->agent);

    # Create a request
    my $req = new HTTP::Request POST => 'http://www.perl.com/cgi-bin/BugGlimpse';
    $req->content_type('application/x-www-form-urlencoded');
    $req->content('match=www&errors=0');

    # Pass request to the user agent and get a response back
    my $res = $ua->request($req);

    # Check the outcome of the response
    if ($res->is_success) {
        print $res->content;
    } else {
        print "Bad luck this time\n";
    }

  The $ua is created once when the application starts up. New request
  objects are normally created for each request sent.

BTW, there are several man pages for LWP. Read/print them all :-)

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: How to build Debian Linux cluster?
On Thu, 7 May 1998, Alexander Kushnirenko wrote: component in the cluster within 2 hours. I have since found out listening to this list that the dpkg utility helps to make this chore a lot simpler, if you learn to use the command line interface (silly me). Could you please give more details about that. Imagine I need to update some package (perl for example) on 6 nodes simultaneously, what do I do?

i don't know if this has been answered yet or not, but try this:

1. have one of the machines mirror the debian archives, including the non-US crypto-related stuff (secure shell - ssh - is essential). set up /etc/exports so that all local machines can NFS mount them.

2. set up all machines so that they NFS mount the debian main archive as /debian, and the debian non-US archive as /debian-non-US.

3. install ssh on all machines, and set them up to allow one machine to have password-less root access to all of the others.

4. when you need to install/upgrade a package, write a little script like this:

  #! /bin/bash
  # list of hosts to execute commands on
  hosts='host1 host2 host3 host4 ...'
  # command(s) to run. can be a multiline command if needed
  cmd='dpkg -iBE /debian/path/to/package.deb'
  for i in $hosts ; do
      ssh $i "$cmd"
  done

if you need to do more complex things on each machine in turn, then start by writing a shell or perl script to do the job, then copy it to each machine (using scp) and execute it on each machine. e.g. if you have written a script called fix-stuff.sh which understands the command line options foo and bar, then a little wrapper script like the following would copy it to all machines and execute it:

  #! /bin/bash
  # list of hosts to execute script on
  hosts='host1 host2 host3 host4 ...'
  for i in $hosts ; do
      scp fix-stuff.sh $i:/tmp
      ssh $i "/tmp/fix-stuff.sh foo bar"
  done

these samples could easily be made generic so that they got $cmd or the name of the script to copy/exec from the command line.
craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
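the "make it generic" suggestion at the end of the post above can be sketched like this. the host names are placeholders, and the RSH variable is an added knob (not in the original scripts) so the loop can be previewed without ssh access to any hosts:

```shell
#!/bin/sh
# run the command given on the command line on every host in the list.
# RSH defaults to ssh; set RSH=echo to preview what would be executed.
run_on_all() {
    rsh="${RSH:-ssh}"
    hosts='host1 host2 host3'
    for i in $hosts ; do
        $rsh "$i" "$@"
    done
}

# dry run: print the remote invocations instead of executing them
RSH=echo run_on_all dpkg -iBE /debian/path/to/package.deb
```

real use would be e.g. `run_on_all 'dpkg -iBE /debian/path/to/package.deb'` with RSH left at its ssh default.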
autoup.sh 0.27 released
available from the usual sites: http://debian.vicnet.net.au/autoup/ ftp://debian.vicnet.net.au/autoup/ and http://taz.net.au/autoup/ remember that the vicnet site has the tarball and other stuff. The taz site doesn't. changelog: v0.26: 1998-05-29 (Craig Sanders) - slang has moved from libs to base. - netbase must be installed before netstd, not on same line. - ALLPKGS was defined twice. once at start of script, and once when SEDSCRIPT had been set. so, existence of netstd and netbase wasn't being checked. changed to just process ALLPKGS with SEDSCRIPT. - merged in functionality of make-tarfiles.sh so i don't have to update it every time autoup gets updated. This will make no difference to most users, it is only useful for people maintaining autoup mirror sites. v0.27: 1998-05-29 (Craig Sanders) - changed 'ln' to 'ln -s' and 'tar cfz' to 'tar chfz' so that the tarball can be created even if the local debian mirror and the autoup directory are on different filesystems. and here's the diff: ---cut here--- --- autoup.sh 1998/04/21 00:10:08 0.25 +++ autoup.sh 1998/05/29 01:50:24 0.27 @@ -2,11 +2,21 @@ # upgrade a libc5 (bo) machine to libc6 (hamm). # -# $Id: autoup.sh,v 0.25 1998/04/21 00:10:08 root Exp $ +# $Id: autoup.sh,v 0.27 1998/05/29 01:50:24 root Exp $ # # based on Scott Ellis' excellent Debian libc5 to libc6 Mini-HOWTO # document at http://www.gate.net/~storm/FAQ/libc5-libc6-Mini-HOWTO.html +# +# Command line options: +# +# --make-tarfiles create tarball of debian packages INSTEAD of +# doing the upgrade. This option is *only* useful +# for maintainers of autoup mirror sites. End +# users can safely ignore this option. +# +# + # Author: Craig Sanders [EMAIL PROTECTED] # and many others. see changelog for details. 
# @@ -57,19 +67,24 @@ PKGS_LIBGPP=oldlibs/libg++27_*.deb libs/libg++272_*.deb \ libs/libstdc++2.8_*.deb PKGS_DPKG=base/dpkg_*.deb utils/dpkg-dev_*.deb -PKGS_SLANG=oldlibs/slang0.99.34_*.deb libs/slang0.99.38_*.deb +PKGS_SLANG=oldlibs/slang0.99.34_*.deb base/slang0.99.38_*.deb PKGS_LIBGDBM=oldlibs/libgdbm1_*.deb base/libgdbmg1_*.deb PKGS_PERLBASE=base/perl-base_*.deb PKGS_PERL=interpreters/perl_*.deb PKGS_MOREDPKG=interpreters/data-dumper_*.deb interpreters/libnet-perl_*.deb \ base/dpkg-ftp_*.deb admin/dpkg-mountable_*.deb -PKGS_NET=net/netbase_*.deb net/netstd_*.deb +PKGS_NETBASE=net/netbase_*.deb +PKGS_NETSTD=net/netstd_*.deb ALLPKGS=$PKGS_LDSO $PKGS_LIBC5 $PKGS_LIBC6 $PKGS_NCURSES $PKGS_LIBRL $PKGS_LIBRLG $PKGS_BASH $PKGS_LIBGPP $PKGS_DPKG $PKGS_SLANG - $PKGS_LIBGDBM $PKGS_PERLBASE $PKGS_PERL $PKGS_MOREDPKG $PKGS_NET + $PKGS_LIBGDBM $PKGS_PERLBASE $PKGS_PERL $PKGS_MOREDPKG + $PKGS_NETBASE $PKGS_NETSTD -cat __EOF__ +if [ $1 == --make-tarfiles ] ; then +answer=m +else + cat __EOF__ This script will install the packages necessary to ensure a safe upgrade to hamm. @@ -85,9 +100,11 @@ If you need to download the files via FTP, press 'f'. __EOF__ -echo -n if you have the files in the current dir, press 'c': (m/f/c) + echo -n if you have the files in the current dir, press 'c': (m/f/c) + + read answer +fi -read answer case $answer in m|M) @@ -236,9 +253,7 @@ echo checking that all needed files are available... # sanity check that we can find the packages -ALLPKGS=$PKGS_LDSO $PKGS_LIBC5 $PKGS_LIBC6 $PKGS_NCURSES $PKGS_LIBRL - $PKGS_LIBRLG $PKGS_BASH $PKGS_LIBGPP $PKGS_DPKG $PKGS_LIBGDBM - $PKGS_PERLBASE $PKGS_PERL $PKGS_MOREDPKG +ALLPKGS=$( echo $ALLPKGS | sed -e $SEDSCRIPT ) for i in $ALLPKGS ; do echo -n $(basename $i) @@ -255,6 +270,18 @@ echo all needed files found. echo +# make the tarball if called with --make-tarfiles + +if [ $1 == --make-tarfiles ] ; then +mkdir debfiles +cd debfiles +ln -s $ALLPKGS . +tar chfz ../autoup.tar.gz * +cd .. 
+ exit 0 +fi + + # # libc5 # @@ -418,7 +445,8 @@ $DPKG $DPKG_ARGS $PKGS_MOREDPKG # and now netbase and netstd -$DPKG $DPKG_ARGS $PKGS_NET +$DPKG $DPKG_ARGS $PKGS_NETBASE +$DPKG $DPKG_ARGS $PKGS_NETSTD # paranoia says to run this at the end $DPKG --configure --pending ---cut here--- -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
MPPP with isdnutils
I'm having difficulty getting multi ppp running with the isdnutils package. i can get one channel working with no problems - the debian package for isdnutils is excellent...works out of the box. unfortunately, the documentation and faqs (including the stuff in the kernel source) are incomprehensible. they don't make any sense at all to me. I'd like to get a 2nd channel going. has anyone had any luck with MPPP on debian? btw, i'm running kernel 2.1.103, isdnutils 1:2.1.beta1-21, and have a NetJet ISDN card (Hisax type=20), and the latest 'unstable' debian. Craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Debian from the Stampede's POV
On Sat, 23 May 1998, Steve Lamb wrote: the same thing (while internally it does use .tgz and ar etc... I never said it was. I was pointing out that SLP could be.

i doubt it. tar doesn't need crap tacked onto the end of it.

The fact is that .tgz is great for archives (and backups... I use tar with my tape drive) but I (and many debian users) feel that dpkg makes a good packaging system and makes system administration a lot easier (rpm does too, even though most people here don't like to admit it :) ) Never said it wasn't. But what people who look at SLP and the fact that it is just a TGZ with information at the end are looking at is not just this system or that system, it is all systems as a whole.

IMO people who look at SLP and see that, just don't get the big picture. there is a lot more to a distribution than just compiling some binaries.

RPMs are nice, but outside Red Hat they're not fun. DEB, same thing. Unless you have the package manager that comes along with it, they never really get used.

wrong. they can be used by anyone with a brain who is willing to learn a simple command or two. ar and tar are on every unix, so deb packages are no problem. rpm2cpio is easily compiled on any unix, and cpio is standard, so rpms aren't much trouble either. this still doesn't get you anything which is worthwhile - in most cases it is too dangerous to install a redhat package on a debian system, or a debian package on stampede or slackware, or slackware onto debian. as i said, there's a lot more to a distribution than just compiling some binaries.

SLP, without the package manager, *CAN* be used by anyone who is used to tar.

so what? like, big deal. in other words, who cares? what use is that? debian users are going to use dpkg because they don't want cruft from a .tar.gz or .slp screwing up their package-managed system. redhat users are going to use rpm because they don't want cruft from a .tar.gz or .slp screwing up their package-managed system. ditto for caldera and suse users.
slackware users don't matter. in my experience, slackware users are either clueless newbies who will have trouble even with tar, or they are rabid do-it-yourselfers who wouldn't install someone else's pre-compiled binary even if they were paid to do it. stampede users matter even less - slp is their native package format, so the issue of foreign packages doesn't even arise.

so, given all that, what *use* is this much touted ability to easily install on another system? what good does it actually do?

btw, it is trivial to install a deb package on a non-debian system. it is a stupid thing to want to do (because .deb packages are designed for debian systems and may conflict with or overwrite crucial parts of your non-debian system...same as .rpm and .tar.gz and .slp packages are designed for their respective distributions). anyway, it's a stupid thing to want to do but it is easy. as simple as:

  cd /
  ar x PACKAGE.deb data.tar.gz
  tar xfz data.tar.gz

note the similarity in the command line arguments for ar and tar. from your other messages it seems as if you believe that 'ar' is some sort of weird, non-standard archiving format. it's not. it's been around for years. in fact, it was around long before tar. tar was based on ar, as a tape backup utility. ar == archive. tar == tape archive. in other words, any unix system will have ar on it.

Even if we all just used .tgz archives and SLP, this makes the question of it moot because yes, you don't need the extra stuff, you can just unpack it; but if you don't use SLP, then unpack it with .tar.gz...it is still possible that what you unpack will not integrate well with your system. Correct. But, again, my scope is beyond any one system.

i think you haven't spent much (if any) time at all thinking about the issues involved in managing multiple systems.

It isn't the fact that they are available but the fact that most people are unaware of their use.
You know, I've been using Linux for over two years and until this discussion I've never heard of ar? Until a discussion I had on the newsgroups about RPM a while back I was unaware of cpio. The whole time I have used tgz.

if you don't even know about these programs, then what makes you qualified to comment on them? having opinions is fine. but please try to make them *INFORMED* opinions before spreading them around to others. Quite often, this is as simple as just reading and listening and learning something before opening your mouth - i.e. learn-by-lurking.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: How can I make this done? Orig:Re: where's the 'man'?
On Sat, 23 May 1998, Jonah Kuo wrote: I have a win95 and two FreeBSD boxes (called Fa and Fb) in my office. Fa has a modem connecting to the Internet; Fb and win95 access the Internet through Fa. Since only pppd comes with the debian base system, all I can...

this is incorrect. the debian base disks included drivers for many (all??) ethernet cards, ppp, slip, and probably plip too. if you have a linux-supported network card in your notebook, you should be able to do a network install. this will be MUCH less trouble than getting ppp running.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Debian 2.0 FAQ
On Sun, 24 May 1998, Wolfgang Gernot Bauer wrote: I'm looking for the Debian 2.0 upgrade-FAQ. Does anyone know where it is?

if you are referring to the HOWTO, then you can get it from my autoup site: http://debian.vicnet.net.au/autoup/ or http://taz.net.au/autoup/

i don't know if Scott has updated this HOWTO lately or not...but there is a link to his site in the howto. in any case, you probably want to run the autoup.sh script rather than follow the HOWTO. autoup.sh does everything that's in the howto and more (and has been kept up to date).

Btw. Is upgrading better than reinstalling?

yes. that's one of the advantages of debian :-) you never have to reformat and reinstall ever again (barring hard disk failure, of course)

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: What's the storywith 2.0?
On Mon, 4 May 1998 [EMAIL PROTECTED] wrote: [snip] a lot of users (note, users not only developers) have done this already and are very happy with the results. I am one of them, except for the fact GV doesn't work :[ I have had no problems. The upgrade was easy, everything worked as it always did. And I'm not a developer nor super-linux-savvy. It's been less than a year since I got my debian 1.3.1 CD.

gv works for me. not that i use it often. i rarely need to view postscript or pdf files, but when i do, gv is my preferred tool.

[snip] the long time to debian 2.0 is actually a deviation from previous history - *caused* by the fact that we are switching to libc6. in the past, anyone could safely install a few 'unstable' packages on a 'stable' system. Just an ignorant question, how often do new libcs come out? What's the story with glibc (how is it different from libc6)?

not very often. hopefully it will be a LONG LONG time before we have to go through this again.

libc6 *is* glibc. two names for the same thing. way back in the dim dark ages of linux history, the linux libc forked off from gnu libc (due to delays in getting necessary linux-specific patches incorporated in the libc). with the release of glibc 2 (which is known in the linux world as libc6), there is a move to a unified libc again. this is a Good Thing.

Also, do the hamm install disks work yet, or is it better when installing from scratch to do a bo install and upgrade using the most excellent autoupgrade script?

no idea at the moment. i haven't had to build a machine for several weeks now and haven't yet tried the hamm disks. i've used my debian 1.3 cd to install the base system, quit out of dselect *before* installing anything, get the box on the network, ftp autoup, run autoup, and then run dselect to install hamm. works for me. YMMV.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: What's the storywith 2.0?
On Sun, 3 May 1998 [EMAIL PROTECTED] wrote: On Sun, 3 May 1998 [EMAIL PROTECTED] wrote: What is the real story with debian 2.0? i.e. when will it be released? I have a 1.2 system that I want to upgrade, but I need it to be dependable, so I am waiting for the final release. From what you just said, your question would better be worded: When will 2.x be reasonably stable for a non-experimental user? As a non-developer, let me give you my best estimate: Never.

this is garbage. hamm works. and it works a lot better than RH5 does. in fact, it was a lot more stable than RH5 even back in December when RH5 was released (IMO, hamm has been 'safe enough' for non-developers since around Nov last year - 90% of the work takes the first 90% of the time...the remaining 10% of the work takes the remaining 90% of the time :).

if you're impatient, upgrade via ftp or buy an unofficial 'hamm' CD; there are several people who burn them. use my autoup.sh script to do the first stage of the upgrade. there are a few packages which have to be upgraded in a precise order, otherwise bash could break. there won't be that much difference between hamm now and hamm when it finally gets released - most of the differences will only be relevant to first-time installations, not upgrades.

you can find the autoup.sh script and a .tar.gz file containing all the packages it needs at: http://debian.vicnet.net.au/autoup/ or ftp://debian.vicnet.net.au/autoup/

a lot of users (note, users not only developers) have done this already and are very happy with the results. yes, there are a few bugs. there will *always* be bugs. none are show-stoppers at the moment. even pre-release hamm is much better than *released* versions of other dists.

History will probably repeat itself and the project will go even further away from this planet. Rather than focus on building a 2.0.x release that kicks ass, they will concentrate on playing with new toys.
the long time to debian 2.0 is actually a deviation from previous history - *caused* by the fact that we are switching to libc6. in the past, anyone could safely install a few 'unstable' packages on a 'stable' system. this time around, that has *not* been possible: if you want one package from hamm then you have to do a complete upgrade to hamm. it's all or nothing. once we get hamm out the door, then we'll be back to where we used to be: upgrading individual packages from 'unstable' will be safe.

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: upgrade to libc6 script?
On Tue, Apr 21, 1998 at 09:57:08AM -0400, Stephen Carpenter wrote: The one you link is the old version. It is v.23 and I believe from what I read this morning .25 was released. According to the old version, the latest version should be at: http://www.taz.net.au/autoup/autoup/

that url was a typo, fixed in 0.24. it should be http://www.taz.net.au/autoup/

however, http://debian.vicnet.net.au/autoup is updated at exactly the same time as my home (taz) site, has lots more stuff (like copies of all the required .debs individually and in a .tar.gz archive), and has several Mbps of upstream bandwidth (which is infinitely better than my 64K ISDN connection to vicnet). personally, i think people would be crazy to use the taz site rather than the vicnet site. but hey! some people actually *like* things slower and with less features.

choose your protocol: http://debian.vicnet.net.au/autoup or ftp://debian.vicnet.net.au/autoup

craig -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
autoup.sh v0.24 released (was Re: autoup.sh bug)
On Mon, Apr 20, 1998 at 07:15:23PM +0100, Philip Hands wrote: I did a bo--hamm upgrade over the weekend using your autoup.sh: $Id: autoup.sh,v 0.23 1998/03/26 15:31:10 root Exp root $ and got this when it got to installing dpkg: dpkg: regarding .../base/dpkg_1.4.0.22.deb containing dpkg, pre-dependency problem: dpkg pre-depends on libstdc++2.8 libstdc++2.8 is not installed. dpkg: error processing /mirror/Debian/dists/frozen/main/binary-i386/base/dpkg_1.4.0.22.deb (--install): pre-dependency problem - not installing dpkg which suggests to me that the libstdc++ needs to be added to the list of things that are installed before dpkg. I've just released version 0.24 which fixes this (and a few minor problems too). v0.24: 1998-04-21 (Craig Sanders) - added libstdc++, libslang0.99.34 (libc5), libslang0.99.38 (libc6), netbase, and netstd to the list of packages to install. - changed 'unstable' to 'frozen' in various places. - [EMAIL PROTECTED] reported that the downloading the files to /var/lib/dpkg/methods/ftp messes up the ftp method somehow. changed TRY to /tmp/autoup. -- craig sanders -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: autoup.sh v0.25 released
BTW, i've also made some changes to my http://debian.vicnet.net.au/autoup site. 1. updated autoup.tar.gz to have the latest versions of all the needed packages. 2. made it accessible as ftp://debian.vicnet.net.au/autoup (some people requested this) 3. made a debfiles/ directory which contains all the individual packages in autoup.tar.gz, for people who want to download them one at a time. if anyone mirrors this then they should probably just exclude the large binaries from their mirror and run the make-tarfiles.sh script locally; that saves mirroring 14mb of stuff which they should already have in their local mirror. craig -- craig sanders
Re: autoup.sh v0.25 released
I've just released version 0.24 which fixes this (and a few minor problems too). v0.24: 1998-04-21 (Craig Sanders) - added libstdc++, libslang0.99.34 (libc5), libslang0.99.38 (libc6), netbase, and netstd to the list of packages to install. - changed 'unstable' to 'frozen' in various places. - [EMAIL PROTECTED] reported that downloading the files to /var/lib/dpkg/methods/ftp messes up the ftp method somehow. changed TRY to /tmp/autoup. make that 0.25. i forgot to disable the debugging stuff before releasing it. v0.25: 1998-04-21 (Craig Sanders) - remembered to disable debugging stuff so that the script actually does something. craig -- craig sanders
/etc/hosts and using make for system admin tasks (was Re: Reverse DNS lookup at telnet)
On Sun, 12 Apr 1998, Scott D. Killen wrote: I run a server with Debian 1.3.1 installed. This machine is set up as an internet gateway to a 3 bit subnet. Diald is installed for automatic dialup internet connections. My machine runs a caching name server that the machines on the subnet use as a nameserver. The problem is that when I telnet from a machine on the subnet, the server does a reverse lookup of the connecting machine's IP address, but it can't answer its own request so the Internet link goes up. This makes telnet connections very slow... especially if the dialup connection doesn't work. How can I solve this problem? I want to either stop doing reverse lookups when answering telnet requests, or, ideally, I want to set up bind so it can answer reverse lookups for addresses on my subnet. the simplest way is to just list the machines in your 3 bit subnet in /etc/hosts, and make sure that /etc/host.conf has "order hosts,bind". reverse lookups for any connections from IP addresses listed in /etc/hosts are then resolved immediately. resolving other names/IPs is unaffected. remember to keep /etc/hosts up to date if any of the machines on your subnet change. note that this only helps for connections to your gateway machine. if there are other unix boxes on your lan which do reverse lookups for each connection then you will need to copy this hosts file to them too. use scp or rdist or rsync or something to do this. it could also be handy to have a Makefile in /etc so that you only have to type 'make' to do the copy. e.g. here's a simple /etc/Makefile which does this and a few other useful things. make is a very useful system administration tool. you can use it to automate the production of any file(s) from any other file(s), or even, as the stamp-hosts example below shows, execute certain commands only if certain file(s) have changed since the last time it was run.

---cut here---/etc/Makefile---cut here---
#!/usr/bin/make -f

# default action
all:
	cd /etc && $(MAKE) targets

targets: stamp-hosts aliases.db virtusertable.db mailertable.db

stamp-hosts: hosts
	scp hosts machine1:/etc/hosts
	scp hosts machine2:/etc/hosts
	scp hosts machine3:/etc/hosts
	touch /etc/stamp-hosts

aliases.db: aliases
	newaliases

virtusertable.db: virtusertable
	makemap hash virtusertable < virtusertable

mailertable.db: mailertable
	makemap hash mailertable < mailertable
---cut here---/etc/Makefile---cut here---

(btw, remember that the indented lines in the Makefile are indented with a TAB character, not spaces!) craig -- craig sanders
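a quick way to see make's change-detection in action is a throwaway test. this is just a sketch of the stamp-hosts idea above: it runs in a scratch directory instead of /etc, and an echo stands in for the real scp commands (the hosts entry is made up for illustration).

```shell
# demonstrate make's change-detection (the stamp-hosts idea above)
# in a scratch directory; @echo stands in for the real scp commands
dir=$(mktemp -d); cd "$dir"
printf '192.168.1.1 gw\n' > hosts
printf 'stamp-hosts: hosts\n\t@echo copying hosts\n\t@touch stamp-hosts\n' > Makefile
make    # hosts is newer than the stamp, so the "copy" runs
make    # second run: the stamp is newer than hosts, so nothing happens
```

touching hosts (i.e. editing it) makes the next 'make' run the copy again; that's all the stamp file is for.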
autoup v0.22 released (finally)
this release merges code and ideas from rob and david and some stuff from myself. it's available from the usual sites: http://debian.vicnet.net.au/autoup/ and http://taz.net.au/autoup/ the vicnet site is, as usual, the recommended place to get it from. it's on a much bigger pipe than my little 56k modem link (which is connected via vicnet anyway, since that's where i work :). i don't know how long it takes for other autoup sites to be updated... maybe they should mirror the vicnet site with wget. changes in this version: v0.21.1.3: 1998-03-11 (Robert D. Hilliard [EMAIL PROTECTED]) - Robert fixed the ftp stuff, including the PKGS_LIBC5 problem. (rob's version was never released AFAIK, but it's what i based v0.22 on.) v0.22: 1998-03-26 (Craig Sanders) there were still problems with ncftp. probably caused by the fact that i have ncftp 3.0.0 beta 9 installed. anyway, i decided that futzing around with ncftp was a waste of time when plain old ftp was good enough to do the job. i rewrote the ftp stuff like so: - removed the ncftp stuff - rewrote the ftp function so that it first creates an ftp script file which gets fed into ftp -i -n - got rid of the ftpftp function definition and $FTP_FUNC stuff and shuffled code around so that all the ftp stuff is inside the f) case. - deleted some cruft which didn't seem to be in use (e.g. stuff for lftp and wget, and the $ERROR_FILES variable) also: - Rob H wrote a good readme for autoup. i'll be including it on the autoup home page from now on. http://debian.vicnet.net.au/autoup/ is the fastest and most up-to-date site. http://taz.net.au/autoup/ is also always up-to-date but much slower. craig -- craig sanders
Re: Autoup script is dangerous?
On Wed, 25 Mar 1998, Janos Bujtar wrote: I am going to upgrade from libc5 to libc6 with the autoup script on a live machine with a bo system (sendmail, apache, etc). Is the upgrade this way safe? yes, it works well. unfortunately, the ftp functions don't work, so you'll have to download the required packages yourself. (one of these days i'll get around to fixing it. somebody emailed me a while ago and said they had a fix but i don't think they sent me a patch.) anyway, the best way to do that is to download both autoup.sh and autoup.tar.gz (a 7mb file containing all the required .debs) from: http://csanders.vicnet.net.au/autoup/ you'll also find a copy of scott's mini-howto there. read it before running the script. in fact, read the script too before running it. make a subdirectory (e.g. /tmp/autoup), copy the .sh script into it, and untar autoup.tar.gz into it. then run autoup.sh and tell it to use the current directory. once the script has finished, you'll need to complete the upgrade with dselect. use whichever method you like (ftp or mountable are recommended), but make sure you set it up to use: dists/unstable/main dists/unstable/contrib dists/unstable/non-free craig (author of autoup.sh) -- craig sanders
Re: is this the end of debian?
On Thu, 19 Mar 1998, DAVID B. TEAGUE wrote: I don't think one person is vital, but with many who don't want to market Debian. ... We _are_ going to miss Bruce. yes. but we *will* survive without him. btw, there is no one who doesn't want to market debian. there are many who don't want to become obsessed by marketing at the expense of technical superiority. it's a matter of keeping it in perspective, not a binary opposition. Face it, Red Hat has a bunch of Marketroids who are really good at getting the name out. (Even if they do have the reputation of pushing a product with broken packages out the door (before it is ready.)) We could use a couple of marketroids ourselves who push a fairly slickly packaged product - frozen at some point. yes, we need a marketing team. one which works with the developers and markets what we produce. not one which tries to give unreasonable orders to the developers. any marketing person who can't market a cool, superior product like the one we have should bow their head in shame and get a job more suited to their talents. flipping burgers perhaps. The profits (and I strongly believe that there could be some significant funds generated like this) would be plowed back into funding developers. no, debian doesn't need profits. we're doing this for fun, not money, remember? anyway, there isn't enough money to pay 300 developers what they're worth. i'll work on debian for free, but i won't do it for $5/hour. any donations received should be used to pay for debian expenses, and the remainder used to support other free software projects. craig -- craig sanders
Re: I am leaving Debian
On Fri, 20 Mar 1998, Ian Perry wrote: Your statement that free software to the masses really isn't compatible with Debian is disturbing. i think he said that it is incompatible with his vision. I have spent a considerable amount of time over the last four months investigating and trialling Debian, over slackware and redhat, as we are in the near future about to install some serious hardware running under Linux. Debian, so far, is ahead. Can you please clarify your statement, as it appears to point to Debian going commercial, which will lead to all sorts of nasties. debian is NOT going commercial. our commitment to free software and the free software community is as strong as ever. i hope that answers your question :-) craig PS: as you noted, we are way ahead in terms of quality and we intend to stay that way. we probably need some sort of evangelism/marketing team to spread the word about debian. this will probably be a good opportunity for users who want to contribute something back to debian but aren't confident about their coding skills... not everyone is a programmer, but everyone has some valuable skill they can contribute if they have the time or inclination. -- craig sanders
Re: routing daemons for debian
On Sat, 14 Mar 1998, George Bonser wrote: I am in need of a routing daemon capable of ospf. I notice that debian does not include gated. Is there an alternative package capable of this that I am overlooking? you can find a (hamm/libc6) debian package of gated at http://debs.fuller.edu, christoph's unofficial debian archive. unfortunately, due to the Merit gated licensing problems it will probably never make it into debian main, but i'm glad it's been packaged, even unofficially. btw, it works very nicely. re: your sig... Just be thankful that Microsoft does not manufacture pharmaceuticals. why? do you think MS are the only evil technologically-incompetent mega-corporation pushing crappy, inferior product on an unsuspecting populace? the pharmaceuticals industry makes MS look like a mere schoolyard bully. craig -- craig sanders
Re: [off-topic] not only a server, what hardware?
On Wed, 4 Mar 1998, Marcus Brinkmann wrote: I have managed to promote Debian in a small but expanding firm (currently a dozen systems). They have old Novell Net and Samba at the moment, but they like Linux and want to change somehow. They allow me to give them a wishlist for a Debian System, and they'll buy it, and now I really need some help here. TASKS: File Server, Print Server, Internet Connection per ISDN (so this would work as a Gateway & Firewall) (light load), File Backup, If you can do it, i would suggest that you put the gateway/firewall on a separate box. scrounge up an old 386 or 486 (running debian, of course) if you have to. It's not a performance issue - a well configured debian box can easily handle all of those tasks - it's a security issue. the fewer services running on your firewall, the less likely it is that a newly discovered security hole can be exploited. something like this ought to do it:

        ^  (ISDN line to the internet)
        |
        v
    +-------+        +--------+
    |  386  |        | Server |   (other machines)
    +-------+        +--------+
        |                |               |
        +----------------+---------------+-- eth0 - 192.168.1.0/24
                                             (internal, firewalled LAN)

one box *can* do the lot, but it greatly complicates the firewall and other security configuration. and probably to be used as a Workstation, too (login, probably with X Windows) see comments below about mixing WS & Server functionality. If you could take a quick look at the following lists and comment on it, I would be very grateful. Pentium = 166, probably not so important RAM = 64, probably more (96?, 128?) more memory is good. much more important for fileserver performance than a few extra MHz of processor speed. SCSI discs, 2 or more, each 2-4 GB. Is buslogic available in Germany? Other good brands? Mainboard ASUS Normal architecture or PS/2 (is PS/2 well supported?) PCI. What is a good Graphic card (they'll need a good one) is Matrox Millenium well supported? Other brands? anything that works. an S3 Trio-64 is good value for money... cheap and adequate for most needs.
remember that this machine is primarily a server, not a workstation. mixing those two functions is OK if the user is the system admin and knows what they're doing (and how to avoid harming system performance/stability)... however, you can't trust a normal user to know that they really shouldn't be playing quake or real-video on the company file-server. What is a good backup device? DDS-2 or DDS-3 tape. don't bother with flimsy toys like ftape units. What is a good network card (they will switch to a faster network soon, at the moment they use NE2000 compatible cards, but I recall something with 100Mbps or so). PCI NE-2000 clones work well in my experience. they're not the fastest card around, but they're dirt cheap and easy to set up. Other things (as CD-ROM, Monitor, etc) I do not expect problems with. Should I? i've had problems with 24x CD-ROM drives. didn't bother figuring out why, i just swapped it with a W95 user for an old 8x cd-rom. craig -- craig sanders
Re: [off-topic] not only a server, what hardware?
On Wed, 4 Mar 1998, Marcus Brinkmann wrote: I have managed to promote Debian in a small but expanding firm (currently a dozen systems). They have old Novell Net and Samba at the moment, but they like Linux and want to change somehow. They allow me to give them a wishlist for a Debian System, and they'll buy it, and now I really need some help here. TASKS: File Server, Print Server, Internet Connection per ISDN (so this would work as a Gateway & Firewall) (light load), File Backup, and probably to be used as a Workstation, too (login, probably with X Windows) btw, you'll almost certainly want to run squid on this network. this means that if you split the gateway/firewall functions onto a separate machine you will need to either: 1. make the firewall powerful enough to run squid. this basically means at least a 486-66 with lots of memory - 32mb or 64mb minimum. the more you want it to cache, the more memory it needs. CPU speed isn't that big an issue with squid for small networks - memory and disk speed are. 2. run squid on the main server, give it an extra 32 or 64mb or so, and use IP masquerading on the gateway... you would probably have to run ipmasq anyway. craig -- craig sanders
Re: Modifying Routing Tables on the fly
On Tue, 3 Mar 1998, Ian Perry wrote:

---cut here---
#!/bin/sh
USER=$( who | grep ttyS1 | awk '{printf $1}')
case $USER in
	fulltest) /sbin/route add 192.168.1.1 eth0;;
esac
---cut here---

glad to hear you're figuring it out :-) Again, Many thanks what you've got there will probably work for you, but unless i'm reading it wrong, it will allow ANY logged in user to follow that route while fulltest is logged in. IMO, a better way to do it would be to have the route there permanently (e.g. set up the route in /etc/init.d/network as normal, and firewall it), and use ipfwadm to selectively enable/disable access to the 192.168.1.0/24 network. as a VERY ROUGH example (modify to suit your requirements) in /etc/init.d/network:

---cut here---
# allow localhost [127.0.0.1] and the machine's IP address (eth0
# interface) to access the 192.168.1.0/24 network
/sbin/ipfwadm -I -a accept -P any -S 127.0.0.1 -D 192.168.1.0/24
/sbin/ipfwadm -I -a accept -P any -S $IPADDR -D 192.168.1.0/24
---cut here---

in /etc/ppp/ip-up:

---cut here---
case $USER in
	fulltest)
		# first delete the deny rule
		ipfwadm -I -d deny -P any -S $5 -W $1 -D 192.168.1.0/24
		# then add the accept rule
		ipfwadm -I -a accept -P any -S $5 -W $1 -D 192.168.1.0/24
		;;
	*)
		# first delete the accept rule (if any)
		ipfwadm -I -d accept -P any -S $5 -W $1 -D 192.168.1.0/24
		# then add the deny rule
		ipfwadm -I -a deny -P any -S $5 -W $1 -D 192.168.1.0/24
		;;
esac
---cut here---

and in /etc/ppp/ip-down:

---cut here---
case $USER in
	fulltest)
		ipfwadm -I -d accept -P any -S $5 -W $1 -D 192.168.1.0/24
		ipfwadm -I -a deny -P any -S $5 -W $1 -D 192.168.1.0/24
		;;
esac
---cut here---

note, these code snippets are just the bare bones of the idea. you'll need to adapt them to suit your needs. btw, it is possible (likely) that you don't actually need to delete the rules - i think that they may go away automatically when the ppp interface goes away (i.e. when the user disconnects).
try it and see... if true, then it will simplify the scripting considerably - you probably won't even need to use /etc/ppp/ip-down at all. also note that all of this *should* work, but i haven't tested it or even done it myself. the purpose of this message is not to give you a magic spell that solves your problem but to illustrate a method which you can use to solve it yourself. play with it and find out. enjoy! craig -- craig sanders
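the delete-old-rule / add-new-rule pairs above can be factored into one small helper. like the snippets above this is an untested sketch, not a drop-in solution: RUN=echo makes it a dry run that just prints the ipfwadm commands (set RUN= and run as root to execute them for real), and the interface/IP in the example call are the made-up ones from this thread.

```shell
# helper wrapping the delete-then-add ipfwadm pairs shown above.
# RUN=echo = dry run (prints the commands); set RUN= to run them.
RUN=echo
set_access () {    # usage: set_access accept|deny <iface> <peer-ip>
    other=deny
    if [ "$1" = deny ]; then other=accept; fi
    # remove the opposite rule first, then install the requested one
    $RUN ipfwadm -I -d $other -P any -S "$3" -W "$2" -D 192.168.1.0/24
    $RUN ipfwadm -I -a "$1"   -P any -S "$3" -W "$2" -D 192.168.1.0/24
}
set_access accept ppp0 12.45.67.89   # from ip-up: set_access accept $1 $5
```

in ip-up the fulltest) branch becomes one line, and the *) branch is just set_access deny "$1" "$5".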
Re: Modifying Routing Tables on the fly
On Mon, 2 Mar 1998, Ian Perry wrote: I am trying to modify a route table dependent on which user logs in through a dial-up connection. viz: route add 192.168.1.1 eth0 I have already got route add -net 192.168.0.0 netmask 255.255.0.0 lo to stop other users getting to the local network (other than what they are supposed to) this is what the /etc/ppp/ip-up script is for. e.g.

---cut here---
#!/bin/sh
#
# $Id: ip-up,v 1.1 1997/12/16 11:37:26 phil Exp $
#
# This script is run by the pppd after the link is established.
# It should be used to add routes, set IP address, run the mailq
# etc.
#
# This script is called with the following arguments:
#	Arg	Name			Example
#	$1	Interface name		ppp0
#	$2	The tty			ttyS1
#	$3	The link speed		38400
#	$4	Local IP number		12.34.56.78
#	$5	Peer IP number		12.34.56.99

case $5 in
	192.168.0.1)
		route add ..blah...
		;;
	192.168.0.2)
		ipfwadm -I ..
		;;
	192.168.0.3)
		blah blah
		blah blah line 2
		blah line 3
		;;
esac
---cut here---

this example executes the route add command if (and only if) the remote IP address is 192.168.0.1. it also demonstrates a special ipfwadm (firewall/packet filter) rule for 192.168.0.2. e.g. say you have a service running on one of your machines which your users have to pay extra to get access to... actually, you'd probably do this based on user name rather than IP address - you could use $2 (the tty) to look up the user name. you'd use /etc/ppp/ip-down to delete the ipfwadm rule when the interface died. the third case shows that multiple script lines can be executed for any case - ;; is used to end the case. I have set up the user's login shell to run the file to add the route and ip-down to remove the route. this won't work. I get the error message: SIOCADDRT : Operation not permitted. I gather this is because the user is not root. yep. Is there a way to safely change the routing table dependent on who logs in? Any help would be appreciated.
/etc/ppp/ip-up is executed whenever a ppp interface goes up, and /etc/ppp/ip-down is executed whenever a ppp interface goes down. These files are often shell scripts, but they don't have to be - write them in perl or C or whatever you like. the debian ppp package comes with a sample script (similar to the example above) which doesn't do anything. craig -- craig sanders
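to see the case-on-$5 dispatch in isolation, here is a minimal sketch you can run anywhere: echo stands in for the real route/ipfwadm commands, and the IP addresses are the made-up ones from the example above.

```shell
# sketch of /etc/ppp/ip-up dispatching on the peer IP ($5);
# echo stands in for the real route/ipfwadm commands
ip_up () {
    case $5 in
        192.168.0.1) echo "route add for $5" ;;
        192.168.0.2) echo "ipfwadm rule for $5" ;;
        *)           echo "no special action for $5" ;;
    esac
}
# pppd passes: iface tty speed local-ip peer-ip
ip_up ppp0 ttyS1 38400 12.34.56.78 192.168.0.1   # -> route add for 192.168.0.1
```

inside a shell function, $1..$5 are the function's own arguments, so the function body reads exactly like the real ip-up script.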
Re: Modifying Routing Tables on the fly
On Mon, 2 Mar 1998, Ian Perry wrote: Sorry, Maybe I did not explain it well enough. The remote IP stays the same for all users logging in (there is only one dial-in port) The route table has to change according to the user, not by the remote IP. The only means I have of Identifying which user is logging in is the Login Name. If it were a different IP then, not a problem, I have done it on other nodes. The modem dialin line gets IP 12.45.67.89 This never changes, and any one of half a dozen people can use it. This routes out onto node 192.168.1.127 on eth0 Only one user is permitted to get to machine 192.168.1.1 Can ip-up identify a user?... not directly. you must have missed the bit in my reply where i (very briefly) discussed doing that. here it is again: it also demonstrates a special ipfwadm (firewall/packet filter) rule for 192.168.0.2. e.g. say you have a service running on one of your machines which your users have to pay extra to get access to... actually, you'd probably do this based on user name rather than IP address - you could use $2 (the tty) to look up the user name. you'd use /etc/ppp/ip-down to delete the ipfwadm rule when the interface died. the idea is to use the tty (in $2) to identify the username. something like: USER=$( w | grep $2 | awk '{print $1}' ) will probably work. test it to see if it really does work in all cases. adapt as necessary. once you've got the user name, you can do whatever you need... e.g: case $USER in fred) do this ;; joe) do that ;; esac or can you specify a different ip-up for each user? no, there's one /etc/ppp/ip-up script. you can use if/then/else or case statements (or equivalent if you use another language) to decide what to do. craig -- craig sanders
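the tty-to-username pipeline above can be tried without a live ppp line by feeding it fake w output; the usernames and ttys below are made up, and a real ip-up would of course pipe the live w output through grep $2 instead.

```shell
# the tty -> username lookup from above, exercised on fake "w" output
# so it can be tested anywhere (real ip-up: USER=$( w | grep $2 | awk '{print $1}' ))
fake_w='fred     ttyS1    -    9:05am  0.00s  bash
joe      ttyS2    -    9:07am  0.01s  bash'
tty=ttyS1
USER=$(printf '%s\n' "$fake_w" | grep "$tty" | awk '{print $1}')
echo "$USER"    # -> fred
```

note the caveat from the message still applies: test it on real w output too, e.g. a username that happens to contain the tty string would also match the grep.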
Re: root access and dselect | ftp
On Wed, 25 Feb 1998, David Stern wrote: Running an ftp client as root seems to be an exception to the rule about not running as root. actually, that rule isn't a general prohibition against doing anything as root. it is advice about only running as root for system maintenance tasks. upgrading the system using dselect certainly qualifies as system maintenance. the idea is that by running as a non-privileged user you can minimise the risk of problems, and also minimise the severity of any problems which do occur. problems include buggy software, user mistakes, and malicious code (e.g. trojan horse programs or viruses). e.g. if you accidentally type rm -rf / as root you blow away the whole system. if you do it as a normal user the worst you can do is erase your own home directory... and in most cases you will suffer no damage at all, because you will probably have noticed your mistake and hit Ctrl-C before rm gets to your home dir. another problem which you avoid by not running as root except when necessary is the risk of trojans or viruses. malicious programs like these can't affect your system if they don't have the permissions required to modify files. craig -- craig sanders
Re: DEBIAN or REDHAT ?
On Thu, 26 Feb 1998, Ossama Othman wrote: I am currently using hamm and am very happy with it. I will be installing a new Linux system soon but everyone is telling me to install RedHat 5. Has anyone any opinions on this? I tend to be partial to Debian, especially with GNOME being integrated into hamm. However, I've never used Redhat. i'd say stick with what you already know and like. RH isn't as nicely integrated a system as debian is, and RH5 is, from all reports i've read, quite buggy - even when compared to the hamm pre-release. they rushed it out the door before it was ready. I can't see that switching to RH5 would gain you anything. There's nothing in RH5 that isn't in hamm. in fact, hamm has a lot more packages available for it and is (almost) pure libc6, whereas RH5 is a mixture of libc5 and libc6 packages. debian isn't perfect (nothing is :), but if you're used to debian and like the way it works then you will probably find RH to be clumsy, frustrating and annoying. BTW, redhat are committed to gnome too - because RH are commercial they can't even distribute a working KDE (because of the Qt license problems) so they don't really have any choice. They're putting a lot of energy into supporting the gnome project. Also, I've been advised that RedHat puts out security fixes the next day after a CERT advisory is released. How is Debian when it comes to security and other patches? RH doesn't always come out with a fix the next day. Neither does debian. Both RH & Debian tend to be very prompt with security fixes - we both see good security as being vital. From what i've seen on the security lists, sometimes RH beats debian with a released patch, sometimes debian beats RH. it works out about even. It's not really a race, anyway - all linux dists that i know of share their security patches around. When I was using bo, patches weren't released very often. Is this an indication that RedHat is more buggy, or is it an indication that Debian is more stable?
my guess is that it's a combination of things: 1. you probably weren't looking in the right place. as someone else pointed out, the fix for ssh was released within a few days of the bug being discovered. ssh is crypto and therefore a dangerous munition... it can't be exported from the US so you won't find it on any of debian's US ftp servers. You can only find it in the free world - non-us.debian.org or mirrors. 2. security fixes are announced on debian's web page. Look for the security link on http://www.debian.org/ 3. bo is an anomaly. or more precisely, the upgrade to hamm is an anomaly because it has taken so long and is such a big change. In the past we have been able to advise users to just upgrade individual packages to the version in 'unstable'. We haven't been able to do that this time around because the stable release (bo) and unstable (hamm) are based on different versions of libc. Upgrading any individual package to the version in hamm requires upgrading the entire system to hamm. The procedure for doing this upgrade is quite well documented now (and there's even a script to do it automatically) but it's still a lot of work just to get one package upgraded. Fortunately, once hamm is released (code freeze is scheduled for mid-late march!), users will be able to easily upgrade any individual package to the 'unstable' version, so we'll be back to normal. For example, when the CERT advisory for SSH-AGENT was released over a week ago, one of the OSes that responded to the CERT advisory was RedHat. Much to my dismay, Debian wasn't one of the OSes that was mentioned on the CERT list. I ended up compiling SSH on my own. sometimes they mention debian, sometimes they don't. ditto for redhat and slackware and other distributions. ditto for other unixes too. quite often, security problems on other unixes or other distributions aren't a problem on debian - either because we already fixed it or because the problem is only exploitable in specific environments.
craig -- craig sanders
Re: Debian Linux and Non-Free Packages.
On Tue, 24 Feb 1998 [EMAIL PROTECTED] wrote: if you want wide distribution and adoption of your software in the free software community, then use a free/open source toolkit. if you don't care about that then use motif or some other proprietary toolkit. The thing here is that no free toolkit meets my need. so help to make one that does meet your needs. or adapt your needs so that they can be achieved with the free software tools that are available. or both. if you do want lots of people to use your software, and lesstif or another toolkit isn't good enough for your needs, then you should join the development team of one of the projects and contribute code until it is good enough. Great proposition. Unfortunately, both lesstif and gtk developers flamed me heavily while I was lurking on their lists :) May be it's my problem though... could be. it's hard to imagine that a lurker would get flamed. people usually only get flamed for saying something annoying. Anyway, if we recall what we started from, it is the GPL licensing which obstructs my development... that's one way of looking at it. a more accurate view is that you choose to develop software which is incompatible with DFSG licenses. It is your right to develop software however you choose, but you shouldn't complain when your own choices (made deliberately and with fore-knowledge of the licensing problems that you would encounter) prevent you from using GPL-ed software. If you want to use that software, you have to abide by the licensing restrictions, just as anyone who wishes to use your software has to abide by your license. You win some, you lose some. it's up to you to weigh the pros and cons and make your own decision. The time may come when lesstif is mature enough so that my software can be safely recompiled and become completely free. I still cannot reuse GPL'd code *now*. yes. that is one of the results of your choices.
If you like, I can send you a pilot prototype so that you can compare its look with the one of, say, gzilla. no thanks, i'm not terribly interested in software which i can't modify if i need to. (i don't have motif, so i couldn't recompile it even if it were free.) Do you use Netscape? :) yes. netscape has enough utility value to offset the non-freeness. as i said, open source is only ONE of my criteria. a very important one, but not the only one. fortunately, netscape is being released as an open source project next month so this anomaly will be gone. i also have staroffice 3.1 installed. i don't use it. the only reason i have it installed is so that i can convert MS word documents that people occasionally send me, no matter how many times i tell them to send plain text. Unfortunately StarOffice 3 can't understand Word documents from office 97 so it's useless... it'll be one of the first things that get removed next time i run out of disk space. maybe StarOffice 4 can do it - i'll try installing it next time i need to convert a Word doc. craig -- craig sanders
Re: Debian Linux and Non-Free Packages.
On Tue, 24 Feb 1998, Alex Yukhimets wrote: Great proposition. Unfortunately, both lesstif and gtk developers flamed me heavily while I was lurking on their lists :) May be it's my problem though... could be. it's hard to imagine that a lurker would get flamed. people usually only get flamed for saying something annoying. Oh, yeah. I was saying very annoying things. Like the one that gtk people broke the X naming conventions. And that the goal of lesstif is theoretically unachievable in the way it is stated on their homepage, due to the nature of X... it may have had something to do with the *way* you said it. people tend to respond negatively to things that they perceive as an attack, whether intentional or not. that's something to think about, anyway. as i said, open source is only ONE of my criteria. a very important one, but not the only one. fortunately, netscape is being released as an open source project next month so this anomaly will be gone. Yes, but I wouldn't get that excited about it. Remember, Netscape uses Motif (and as any other decent Motif product, not only Motif, but some other add-on *commercial* widgets). which is why porting netscape to gtk is at (or near) the top of the open source netscape wishlist. craig -- craig sanders