If I run this:
find /path/to/files/ -type f -mtime -2 -name '*.xml.gz'
I get the expected results: files modified less than two days ago.
But, if I run it like this, with the -print0 action placed first:
find /path/to/files/ -print0 -type f -mtime -2 -name '*.xml.gz'
I get older files included as well.
Order of operations: find evaluates its expression left to right, and
-print0 is an action, so it must come after the tests:
find /path/to/files/ -type f -mtime -2 -name '*.xml.gz' -print0
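To see it for yourself: -print0 fires as soon as find reaches it, before any later test has filtered anything, so it must come last. A quick local demonstration (GNU find and touch assumed; the temp directory is throwaway):

```shell
dir=$(mktemp -d)
touch "$dir/new.xml.gz"                  # modified just now
touch -d '3 days ago' "$dir/old.xml.gz"  # too old for -mtime -2
# Actions go after tests; quoting the glob keeps the shell from expanding it.
found=$(find "$dir" -type f -mtime -2 -name '*.xml.gz' -print0 | tr '\0' '\n')
echo "$found"
```

Only new.xml.gz is printed; move -print0 to the front and both files appear.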
Thanks!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
I have a string, 2012_10_16; let's call this $YESTERDAY
How can I rsync a file tree from a remote machine to the local one,
including *only* filenames that contain the matching string? I've
read the man page and googled around but can't seem to get the syntax
right. I either end up syncing all
Suppose you have server A and server B. Server B is running 60
seconds too fast, while server A is accurate. Is there a way to
gradually move server B's time back into sync with server A, without
making a drastic, immediate change to the clock? In other words, we
would like to 'smear' the
This is already how ntpd works. When you first start the service
(usually upon reboot), it will use 'ntpdate' to do a hard set of the
clock, then ntpd picks up and adjusts the clock back and forth to keep
it correct.
My understanding was that ntpd will use slewing for adjustments of
less
What I'm trying to avoid is abruptly resetting the clock from 12:06 to
12:05 all at once. Instead we want to slowly turn the clock back that
one minute, but spread the changes across several hours or days.
I think the -x option may be our solution; I R'd the FM and it says:
...If the -x
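On CentOS the flag can be made persistent through the init script's options file; a sketch, assuming the stock /etc/sysconfig/ntpd layout (your existing OPTIONS line may differ):

```
# /etc/sysconfig/ntpd
# -x forces slewing: the clock is nudged gradually instead of stepped
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```

Then restart ntpd for the option to take effect.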
This snippet of code pulls an array of hostnames from some log files.
It has to parse around 3GB of log files, so I'm keen on making it as
efficient as possible. Can you think of any way to optimize this to
run faster?
HOSTS=()
for host in $(grep -h -o '[-.0-9a-z][-.0-9a-z]*.com' "${TMPDIR}"/* |
*sigh*
awk is not cut. What you want is
awk '{if (/[-\.0-9a-z][-\.0-9a-z]*.com/) { print $9;}}' | sort -u
No grep needed; awk looks for what you want *first* this way.
Thanks, Mark. This is cleaner code but it benchmarked slower than awk
then grep.
real    3m35.550s
user    2m7.186s
sys
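For what it's worth, one frequently cited speedup on multi-gigabyte grep jobs (an assumption about this workload, not something benchmarked in the thread) is pinning the locale to C so grep skips multibyte handling:

```shell
dir=$(mktemp -d)
printf 'x host1.com y\nhost2.com z\nhost1.com\n' > "$dir/a.log"
# LC_ALL=C disables multibyte locale processing, often a large win on big logs
hosts=$(LC_ALL=C grep -h -o '[-.0-9a-z][-.0-9a-z]*\.com' "$dir"/* | sort -u)
echo "$hosts"
```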
I ended up using this construct in my code; this one fetches out
servers that are having issues checking in with puppet:
awk '{if (/Could not find default node or by name with/) { print
Anyone know how to get statistics on bonded interfaces? I have a
system that does not use eth0-3, rather we have bond0, bond1, bond2.
The members of each bond are not eth0-3, rather they are eth6, eth7,
etc. I didn't see anything in the man page about forcing sar to
collect data on specific
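sar -n DEV should report whatever interfaces the kernel exposes, bonds included; the same counters can also be read directly from sysfs. A sketch (lo stands in for bond0 here so it runs anywhere):

```shell
iface=lo    # substitute bond0, bond1, ... on the bonded host
rx=$(cat /sys/class/net/"$iface"/statistics/rx_bytes)
tx=$(cat /sys/class/net/"$iface"/statistics/tx_bytes)
echo "$iface rx_bytes=$rx tx_bytes=$tx"
```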
Anyone have a script or utility to convert an RTF file to ANSI? The
main idea here is to preserve the color codes that are specified in
the RTF file, so they can be displayed easily in a terminal window.
This is kind of odd.
[scarolan@host:~]$ cat loremipsum.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec quis
ipsum sed elit laoreet malesuada. Quisque rhoncus dui vitae eros
euismod fermentum sollicitudin sem scelerisque. Nulla facilisi.
Maecenas mollis pulvinar euismod. Duis
2011/7/20 Lamar Owen lo...@pari.edu:
On Wednesday, July 20, 2011 03:23:58 PM Sean Carolan wrote:
[snip]
Where did all the letter n's go?
I can't duplicate the problem here on a CentOS 5.6 box. What locale are you
set to? Here's what I get (note that a copy from the e-mail you sent
[scarolan@server:~]$ echo $myvar
Lorem ipsum dolor sit amet, co sectetur adipisci g elit.
lots of letter !
Weird huh?
Ok, I'm a bonehead; I had this in my bash history:
IFS='\n'
That seems to have been the cause of the missing n's. Now the next
question would be, how can I include the \n
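For anyone hitting the same thing: IFS='\n' sets IFS to the two characters backslash and n, so every unquoted expansion splits on the letter n; the ANSI-C quoted form $'\n' gives a real newline. A small demonstration:

```shell
myvar="banana"
IFS='\n'                  # two characters: backslash and the letter n
mangled=$(echo $myvar)    # unquoted expansion splits on every n -> "ba a a"
IFS=$'\n'                 # ANSI-C quoting: a real newline
intact=$(echo $myvar)     # -> "banana"
printf 'mangled: %s\nintact:  %s\n' "$mangled" "$intact"
unset IFS                 # back to default splitting
```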
(No, I don't advocate perl for everything, but knowing more about the
problem can
help in determining a suitable solution.)
You're right, I gave up and used python instead. The basic idea here
was to gather together a long list of hostnames by grepping through a
few hundred files, check the
I am working on a sandbox machine that will allow users to play around
with building virtual machines, then blow them all away each night
with a cron job. I wrote a small script that uses the virsh command
to destroy the VMs, then remove the storage. For some reason the vm
name still shows up in
Did you try:
virsh undefine domain-id
where domain-id is your vm name
Perfect, thanks Earl! Here's the script in case anyone else might
find it useful. Please post any improvements if you can see a way to
improve it.
#!/bin/bash
# Removes all KVM virtual machines from this host
# First
Can anyone point out reasons why it might be a bad idea to put this
sort of line in your /etc/hosts file, eg, pointing the FQDN at the
loopback address?
127.0.0.1   hostname.domain.com hostname localhost localhost.localdomain
First, if your host is actually communicating with any kind of ip-based
network, it is quite certain, that 127.0.0.1 simply isn't his IP
address. And, at least for me, that's a fairly good reason.
Indeed. It does seem like a bad idea to have a single host using
loopback, while the rest of the
(Make sure you pick .dummy so as not to interfere with any other DNS.)
In theory you could leave off .dummy, but then you risk hostname being
completed with the search domain in resolv.conf, which creates the
problems already mentioned with putting hostname.domain.com in
/etc/hosts. (I have
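Put together, a hosts file that avoids the problem might look like this (the 192.168.1.10 address is a placeholder for the machine's real IP):

```
# /etc/hosts
127.0.0.1      localhost localhost.localdomain
192.168.1.10   hostname.domain.com hostname
```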
The remote host's $TERM variable is in fact xterm. When I connect to
the screen session the $TERM variable is 'screen'.
Are you running screen locally or remotely?
Remotely. My work machine is a laptop, which is not powered on all
the time. Hence I use a remote box as a jumping-off point,
In this case, you might want to conditionally assign some reasonable
value on failure. Say:
tput -T $TERM init >/dev/null 2>&1 || export TERM=xterm
'tset -q' is another test which can be used.
I really like gnu screen and use it everyday but there's one thing
that is a bit inconvenient, and that's the odd line wrapping and
terminal size issues that seem to pop up. The problem crops up when I
type or paste a really long command, and then go back and try to edit
it; the text starts to
You wouldn't by any chance be using PuTTY to access the session? If
so, you may need to play around with the terminal settings including
the scroll type so that it displays correctly. I don't recall the
specifics but a similar thing happened to me.
Actually, no I'm using gnome-terminal on
The subject just about says it all - I'm wondering if there is a way
to do a completely hands-off installation, including the reboot at the
end, without Press any key to continue?
Use the 'reboot' option in your kickstart.
Isn't this the default anyway? I will try to specify it explicitly
and see how it works...
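For reference, the relevant kickstart directive is just the bare command; a minimal fragment (kickstart syntax per the RHEL/CentOS docs):

```
# in the command section of ks.cfg
reboot
```

Without it the installer stops at the final "press any key" screen.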
On Sun, Oct 31, 2010 at 6:07 AM, Sean Carolan scaro...@gmail.com wrote:
Use the 'reboot' option in your kickstart.
Isn't this the default anyway? I will try to specify it explicitly
and see how it works...
Looks like that did the trick, thanks Markus
Maybe someone can help me sort this out. I want to block outbound
mail from my network based upon the recipient address. Internal
servers should still be allowed to send emails, but not to a few
specific addresses. I've tried creating some rules in
/etc/mail/access but to no avail. Is it
Prefix the left-hand side with To: in the sendmail access file:
http://www.feep.net/sendmail/tutorial/anti-spam/access_db.html
'The left hand side of each entry can optionally be prefixed
with one of the tags To:, From:, or Connect:.'
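So an access file blocking specific outbound recipients might look like this (the addresses are placeholders):

```
# /etc/mail/access
To:blocked.user@example.com    REJECT
To:another@example.net         REJECT
```

Rebuild the map afterwards with makemap hash /etc/mail/access.db < /etc/mail/access.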
Yes, I have tried this. I have entries like this in my access file:
One silly thing (but needs to be asked):
Did you rebuild access.db after editing access?
Yes, the rebuild command is built into my init script. I just double
checked it.
I'm getting better results having changed the setting to REJECT
instead of DISCARD. I will investigate a bit further when
I'm not sure how much 64-bit support the kernel expects so there might be some
complications going that direction, but you can certainly install a 64-bit
system and run the 32-bit versions of the apps and have both versions of most
libraries available.
To bring some closure to this thread, I
I have a large (1.5TB) partition with millions of files on it. e2fsck has
been running nearly 12 hours and is still on 'Checking directory structure'.
Any tips for speeding this along?
Yep, same answer here, I had RHEL4.8 on a 2.6 TB MSA, and you just leave it
going over the weekend.
I kind of figured as much; we're letting ours run during the week so
that hopefully the partition will be ready for weekend backup jobs.
Thanks for the feedback.
On Tue, Aug 31, 2010 at 8:49 AM, Brent L. Bates blba...@vigyan.com wrote:
Use the XFS file system and never have to worry about fsck again. You'll
have a fast, more reliable, and more robust file system with over a decade and
exabytes of use under its belt that you will never have to wait
To extend his comment: There is a bug in e2fsck for filesystems with
many hardlinks. It could take *weeks* or longer, if it finishes at all,
to run on a large filesystem with lots of hardlinks.
http://www.mail-archive.com/scientific-linux-us...@listserv.fnal.gov/msg02180.html
Awesome. This
According to the release notes this bug has been fixed in version 1.40:
http://e2fsprogs.sourceforge.net/e2fsprogs-release.html#1.40
E2fsprogs 1.40 (June 29, 2007)
There was a floating point precision error which could cause e2fsck to
loop forever on really big filesystems with a large inode
I'm configuring some monitoring for a particular java/tomcat
application. We have noticed the occasional 'Cannot allocate memory'
error. When this occurs apache still seems to return a '200 OK'
status code. Anyone know how to configure this so that when java has
an error, apache will also return
Change to clamd (use clamdscan). Yes, clamscan needs quite a bit of RAM.
Kai
Thank you Kai, our performance looks a lot better now.
We have a perl cgi script that accepts uploaded files and runs
clamscan on them. While observing the system performance I noticed
that each clamscan process consumes up to 250MB of RAM. Is this
normal for ClamAV? This seems like an enormous amount of RAM, for
simply scanning one file for
I'm monitoring some CentOS 5 servers running Sun Java. We have set things
up so 2048 MB of RAM are available for the base operating system, taking
into account the -Xmx and permgen settings. What we're seeing is the swap
space getting used up, and not released. Is this normal behavior?
I think Xms/Xmx is java's heap space for program object storage. It doesn't
take
into account the space needed for the JVM itself. Top should show you the
actual memory usage - along with any other programs that might be using a lot.
One of our java developers indicated that the heap space
I'm pretty sure that's not true. Permgen is just part of the heap space and
none of that accounts for the executing part of the JVM. In any case, you
probably want to allow some free memory to be used for filesystem cache.
I'll read up on this some more. I'm not a java expert.
Are there
What am I doing wrong here? I need to be able to write to /var/cvs.
This used to work before I moved these groups into an LDAP directory
instead of /etc/group:
[scaro...@watcher:/var/cvs]$ touch test.txt
touch: cannot touch `test.txt': Permission denied
[scaro...@watcher:/var/cvs]$ ls -ld
What is the output of 'ls -l /var/cvs/test.txt' ?
Marko
No, it doesn't exist. Oddly I have another user called cfmaster who
can write files in there just fine:
[cfmas...@watcher cvs]$ pwd
/var/cvs
[cfmas...@watcher cvs]$ touch test.txt
[cfmas...@watcher cvs]$ id cfmaster
uid=5101(cfmaster)
having a group with the same name in both /etc/group and LDAP groups
would be the surest path to insanity. Likewise, for /etc/passwd and LDAP
users.
I just needed to log out and back in again. Thanks for all your help!
On some systems, a reboot may be required to access disk from a SAN device.
This turned out to be a zoning issue. Although I had properly created the
zone, I had to add it to our Prod configuration to make it live. Once
that was done, the virtual tape library was recognized right away:
kernel:
On some systems, a reboot may be required to access disk from a SAN device.
At least this issue exists on my Hitachi AMS SAN system.
Yes, we've tried a few reboots. I'll bet the testing on this d2d
device did not get as thorough QA on Linux as it did on Windows. I'll
post the solution here if HP is
Did you check the output of /proc/scsi/scsi?
Yea, it's empty.
I would do a SCSI rescan using
echo "- - -" > /sys/class/scsi_host/hostX/scan
Tried this and also:
echo 1 > /sys/class/fc_host/host0/issue_lip
Still, nothing is seen by the host. We have also tried changing the
port settings
Maybe one of you has experienced something like this before.
I have a host running CentOS5.3, x86_64 version with the standard
qla2xxx driver. Both ports are recognized and show output in dmesg
but they never find my storage device:
qla2xxx :07:00.1: LIP reset occured (f700).
qla2xxx
I believe you will need:
syslogd -a /home/username01/dev/log -a /home/username02/dev/log
-a /home/username03/dev/log -a /home/username04/dev/log - or
something like this. I don't know the syntax for multiple -a options...
This seems very impractical, both from a security standpoint and the
fact
Maybe one of you can help. We have set up a CentOS server so that
each user who logs in via sftp will be jailed in their home directory.
Here's the relevant sshd_config:
# override default of no subsystems
Subsystem sftp internal-sftp -f LOCAL2 -l INFO
Match Group sftponly
I solved a similar issue with jail and syslog adding a -a
/home/jail/dev/log parameter to syslog startup.
In our environment the chroot jail is /home/username. Does this mean
we need a /home/username/dev/log for each and every user? If the
daemon is chroot'd to /home/username wouldn't this
I have a large group of Linux servers that I inherited from a previous
administrator. Unfortunately there is no single sign-on configured so
each server has its own local accounts with local authentication.
Normally I use ssh keys and a handy shell script to change passwords
on all these
If your script change passwords via ssh and usermod, why not at
the same time do a chage -d number username?
Thank you, I may end up doing it this way at least until we can
configure AD or LDAP authentication.
# Turn off SACK
net.ipv4.tcp_sack = 0
and execute sysctl -p to apply it. You can also use sysctl -w
net.ipv4.tcp_sack=0 to turn it off temporarily. Our file transfers worked
just fine after the change.
I realize there are differences our situation and yours and this might not
work in
I'm not sure what would cause that, but I'd use rsync over ssh instead of sftp
anyway - and use the -P option to permit restarting.
If it were up to me, we'd take that route. The software the client is
using is WinSCP which does have a restart feature, however it's not
working for us. I'm
Tell him to switch WinSCP to SCP mode.
Kai
Tried that, it still fails the same way. Here's the short list of
what I've tried to troubleshoot this:
Used SCP via the gui and command line
Used SFTP via the gui and command line
Ran yum update to bring all packages up to date
Tried stock CentOS
Just an idea or thought on it. You never said what the file size was, did
you? My thought is: isn't there a file size limitation on transfers to
and from the server? I thought there was. Check your vsftpd.conf, or
whatever FTP server you're running, for a size limitation. Maybe some
Load balancer... is that set up to maintain connections, or will it, like
IBM's
WebSeal, go to whichever server is next/least used in the middle of a
connection?
It's set to use least connection but there is only one server behind
the virtual IP at the moment.
I'm reasonably sure at this
I have an SSH server that was set up for a client, and every time we
try to upload large files via SFTP or scp, the transfers speed quickly
slows to zero and gives a - stalled - status message, then
disconnects. Here is an example:
sftp> put iTunesSetup.exe iTunesSetup.exe
Uploading
On Mon, Dec 21, 2009 at 7:06 PM, 唐建伟 myh...@gmail.com wrote:
I met the same as you, but it was always due to a bad network connection.
I should probably provide some more information, the server is a VMware
guest running CentOS 5.3. It's using the vmxnet driver for the eth0
connection. IPv6 is
I have an odd situation here, maybe one of you can help. We have a
script that runs via a cron job. Its purpose is to decrypt
PGP-encrypted files in a certain directory. I have tried the command
two different ways, both fail with the same error message:
gpg --decrypt $file
On Mon, Oct 19, 2009 at 2:41 PM, Spiro Harvey sp...@knossos.net.nz wrote:
Is the cron job running as a different user? eg; are you running gpg as
a non-privileged user and the cronjob as root?
The cronjob script runs from /etc/crontab. Let me try root's personal
crontab instead.
Typically this type of problem is caused by environment variables
that are set in a login shell, but are missing or different than
those set for jobs running under cron.
You nailed it, Bill. Running the cron from root's personal crontab
worked fine. Must have been environment variable
While having hard limits makes it safer, wouldn't it be better to control the
memory usage of the script instead of setting limits that would trigger an
out of memory...?
How would you control the memory usage of the script if it's run by
the root user?
But what if the program's memory use is dependent on lots of factors
which are not easily predictable.
And you want to avoid bringing the whole system to its knees while swapping
and killing arbitrary other programs while one program is consuming all
of RAM and swap.
In that case it's
I have a perl script which runs from a cron job. How would you limit
the amount of RAM that this script is allowed to consume? Is there a
ulimit setting that will accomplish this? If so does ulimit have to
be run each time the script is run, or is there a way to set it
permanently?
If you run it as a regular user, then maybe you can check out
/etc/security/limits.conf
Currently the script runs as the root user. I may be able to change
this, but wanted to find out whether there was some other way first.
Would it be possible to use a ulimit command within the perl script
First, install the perl module BSD::Resource
yum install perl-BSD-Resource
Then use it in your program like:
#!/usr/bin/perl
use BSD::Resource;
setrlimit(RLIMIT_VMEM, 1_000_000, 1_000_000);
# rest of the program that is limited to 1MByte now
Thanks, Paul. I knew I'd find an
/dev/sdb1 976760032 97808 976662224 1% /mnt/usbdrive
I am thinking of having three partitions instead of just one whole big 1 TB
thing, and then format all three partitions in ext3. I tried doing
fdisk, but cylinders are always confusing for me. Is there any GUI
tool that
I have a server that is undergoing some patching soon and would like
to make note of any files that have changed after the patching is
complete. Can you recommend a tool that uses md5sum snapshots to do a
quick before and after test, showing anything that's changed on a
particular file system?
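Dedicated tools exist for this (AIDE, tripwire), but a quick before/after pass can be sketched in plain shell; the directory and output paths below are placeholders:

```shell
# snapshot DIR OUTFILE: md5 of every regular file, sorted for stable diffs
snapshot() {
  find "$1" -xdev -type f -print0 | xargs -0 md5sum | sort -k 2 > "$2"
}
work=$(mktemp -d)
echo v1 > "$work/conf"
snapshot "$work" /tmp/before.$$
echo v2 > "$work/conf"           # simulate the patch changing a file
snapshot "$work" /tmp/after.$$
diff /tmp/before.$$ /tmp/after.$$ || true   # changed files show as paired lines
```

For a real run, snapshot the filesystem root of interest before and after patching and diff the two files.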
You are missing the point, imho. I think the real issue, for me anyway,
is that Amazon is actively discouraging what is essentially a community,
in spite of the fact that they and many of their users rely on the
community to get things done, both work and play.
Indeed. The entire
So, unless they are happy to come back and start talking to us again I
highly recommend everyone not bother using EC2.
- KB
I had the exact same experience when trying to get a sales rep to talk
to me about hosting an application for my company. We need to know
that someone will be there to
I've been waiting for iPhone OS 3.0 before trying SSH (for Bluetooth
keyboard support:
http://www.flickr.com/photos/56083...@n00/3335201114/ -- but hopefully
with a fold-up keyboard)
Oh, an iPhone with a Bluetooth keyboard would be perfect. One of the
main reasons I've stayed away from the
On Mon, May 4, 2009 at 1:41 PM, ja...@aers.ca wrote:
Touchterm is nice as it can be configured to launch screen (provided
your host has it installed) on connect so that if you switch away from
ssh on your iphone you don't have to start completely over when you
switch back.
Yes, a sucky
I'm up for a cell phone contract renewal and am considering upgrading
my handset. I looked at some devices at my local ATT store but
nothing really jumped out at me. I'm particularly interested in a
cell phone that has a reliable ssh client, with ssh-agent and public
key authentication
I use ConnectBot (http://code.google.com/p/connectbot/) on Android (I
have a T-Mobile G1). I absolutely recommend it. I have used it several
times in emergency situations.
Looks cool, if I wasn't stuck with ATT I would consider getting a G1.
Perhaps Samsung will come out with their Android
Last time I looked at it, I described the installation process as
only slightly less complicated than building a Saturn-V rocket out of
1960's era TV parts.
You were not kidding - I somehow managed to get netdisco installed
using the CentOS installer script but there were several points where
I'll repeat my recommendation for OpenNMS. Getting started is as easy
as 'yum install' (almost...). And it can do about anything you'd want
in a monitoring system - including matching up those switch ports with
the connected devices.
Les, at first I didn't heed your advice because I figured
Back to my first email message when I thought you were already using
OpenNMS... You have to uncomment the Linkd service in
etc/service-configuration.xml, then restart opennms and give it some
time to probe. Then it should show from the 'View Node Link Detailed
Info' at the top left of a
I have a Cisco 6509 switch that I'm monitoring with SNMP from a
CentOS5 machine. SNMP polls are the only access I have to this
device, we are not allowed to log on via telnet.
How can I find out which port on the switch a particular server is
connected to? I was hoping that this is somehow
We have a six- or seven- year old cisco 3750 which is running an IOS
which doesn't have the newer MIB; for this switch, we must explicitly
query the MIB-II Bridge for each VLAN. I would hope that newer
releases of IOS wouldn't have this limitation.
This is exactly what I was missing. Thank
My notes: http://wiki.xdroop.com/space/snmp/Switching+Tables
Hi Dave, so using the example from your site above I tested a mac
address against one of our switches:
[scaro...@host:~]$ snmpwalk -v1 -c pub...@200 10.100.3.6
.1.3.6.1.2.1.17.4.3 | grep `hexmac2decoid 00:B0:D0:E1:BF:52`
None of our data center machines are
able to connect so perhaps this is a firewall or NAT issue? Anyway
here is the very un-descriptive error message:
SSL_connect: error::lib(0):func(0):reason(0)
Closing control socket
`ls' at 0 [Delaying before reconnect: 18]
Further
I am unable to find any documentation about this error message,
perhaps one of you has experienced this as well. We have an FTP
server that is configured to accept FTP transactions over SSL. The
server is working fine, as I am able to log in with lftp from my test
linux machine in the office.
I like Gnu screen, but the choice of CTRL-A as the command sequence is
extremely unfortunate. Like many other bash users, I use CTRL-A to
get back to the beginning of the line (emacs editing mode).
How do you all get around this problem? Also, I'm wondering if there
is an easy way to get mouse
Also, I'm wondering if there
is an easy way to get mouse scrolling to work when reviewing terminal
history in screen. It's a pain in the arse to CTRL-A then ESC to be
able to scroll back.
If anyone else is looking for mouse wheel scrolling in GNU screen,
here's the solution I found. I added
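One commonly cited ~/.screenrc setting for this (not necessarily the poster's exact fix) is to disable the alternate screen buffer so the terminal's own scrollback, and therefore the wheel, keeps working:

```
# ~/.screenrc
termcapinfo xterm* ti@:te@
```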
Anyone know if this is possible with GNU screen?
I would like to have a macro or keyboard shortcut whereby the
following actions are performed:
1. Open new screen window (CTRL-A C)
2. ssh to some $host
3. Rename current screen as $host (CTRL-A A $host)
I can see that typing screen while
On Mon, Feb 23, 2009 at 11:53 AM, Don Harper d...@duckland.org wrote:
Under bash, I have a function defined like so:
function ss () {
screen -t "$1" ssh "$@"
}
Then, I simply type:
ss hostname
Nice, this is helpful. I used ssc instead because there appears to
be a built-in ss command.
What do you use to keep your environment files like .bashrc,
.bash_profile, etc. synchronized across all your servers?
You can use snmp and cacti to monitor some of the tomcat information.
You simply need to add a few configuration modifications.
See http://java.sun.com/j2se/1.5.0/docs/guide/management/SNMP.html
Thank you all for the replies. We already use Nagios so I'm hoping
for a nagios-friendly
What do you use for monitoring your Apache Tomcat servers? I have used
jconsole to manually connect and look at the statistics. I'm wondering if
there are any standard tools for watching the health of the java process.
Anyone have a function or script for uploading files from a web
browser with a bash script? I know this is possible to do with Perl,
I'm wondering if the same is possible using only bash.
I think he wants a shell script that can process file-upload forms
displayed in browsers.
AFAIK, the general rule is: don't do that (CGI programming with shell
scripts).
Use something else (PHP as CGI, if you don't want mod_php).
Good to know, thanks for the info. I
I'm a bit baffled by this problem. Maybe there's a sendmail guru out there
who can help me out here. We have some end-users who need to receive
system-generated mail that originates from a java-based application on our
network. The java app sends the mail through our sendmail cluster, which
#1 - turn your sendmail logging/debugging setting up as high as
it will go for just long enough to capture some of these events.
(then turn it back to its previous setting)
#2 - try using script and then telnet to capture an SMTP session
(Done by hand) with the MTA at the receiving end.
Is there an easy way to configure sendmail to only send mail to
addresses in one particular domain?
If it is 'your' domain, configure the sender(s) to use the intended
receiving server as the SMART_HOST but don't give it RELAY permissions in
the receiving access file. That way it can attempt to send to other
addresses but only ones local to the receiving machine will be accepted.
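In sendmail.mc terms (the relay name is a placeholder), the sender-side knob looks like:

```
dnl route all outbound mail through the designated relay
define(`SMART_HOST', `relay.example.com')dnl
```

Rebuild sendmail.cf from the .mc file and restart sendmail for it to take effect.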
Thanks,