re: Fixing vinum after removing a HDD

2004-12-16 Thread orville weyrich
 Date: Thu, 16 Dec 2004 21:56:29 -0800 (PST)
 From: orville weyrich [EMAIL PROTECTED]
 Subject: re: Fixing vinum after removing a HDD
 To: [EMAIL PROTECTED]
 
 On Wed, 2004-02-11 at 17:02, Greg 'groggy' Lehey wrote:
 
 >>   resetconfig
 >>   The resetconfig command completely obliterates the vinum
 >>   configuration on a system.  Use this command only when you want
 >>   to completely delete the configuration.
 >
 > I'm completely baffled that people want to use this command so much.
 > The correct sequence is to remove the drive, replace it with something
 > else with a Vinum partition, and then start the defective objects.
 >
 > It depends on the volume structures as to whether the data can still
 > be recovered.
 
 PLEASE PLEASE PLEASE elaborate on what volume structures CAN be
 recovered after resetconfig and what structures cannot.
 
 I am trying to deal with *invalid* drives in dumpconfig and not
 getting any answers.
 
 Thanks
 
 
 
   
   
 






Re: Has anybody EVER successfully recovered VINUM?

2004-12-10 Thread orville weyrich
See below
--- Greg 'groggy' Lehey [EMAIL PROTECTED] wrote:

> > create original.config.file
>
> > Yes, that's basically it.
>
> > I tried that already.  It did not have the desired effect.  It
> > created TWO MORE plexes, that did not have disk space to back them
> > up (because the drives were fully allocated by the original
> > configuration).
>
> Correct.  This isn't the way to do it.  I've just posted the correct
> URL: http://www.vinumvm.org/vinum/replacing-drive.html

OK -- I have gotten to the point where I have a valid plex (I did it
by installing a large, cheap IDE drive in the system and putting an
entirely new plex on it).

But when I try to mount the volume read/write, mount complains that it
needs to be fscked, and fsck refuses.  I CAN however mount the file
system read-only and see the expected top-level directories (liberty
mysql orville reform tam), but I cannot view the contents inside the
directories.

(See selected transcript below.)

So is my data toast, or is there a way to fsck /dev/vinum/raid?

SELECTED TRANSCRIPT:

S raid.p3.s0            State: up       PO:        0  B Size:       2151 MB
S raid.p3.s1            State: up       PO:      512 kB Size:       2151 MB
S raid.p3.s2            State: up       PO:     1024 kB Size:       2151 MB
S raid.p3.s3            State: up       PO:     1536 kB Size:       2151 MB
S raid.p3.s4            State: up       PO:     2048 kB Size:       2151 MB
S raid.p3.s5            State: up       PO:     2560 kB Size:       2151 MB
S raid.p3.s6            State: up       PO:     3072 kB Size:       2151 MB
S raid.p3.s7            State: up       PO:     3584 kB Size:       2151 MB
S raid.p3.s8            State: up       PO:     4096 kB Size:       2151 MB
S raid.p3.s9            State: up       PO:     4608 kB Size:       2151 MB
vinum -> quit
bashful# mount /dev/vinum/raid /raid
WARNING: R/W mount of /raid denied.  Filesystem is not
clean - run fsck
mount: /dev/vinum/raid: Operation not permitted
bashful# fsck /dev/vinum/raid
** /dev/vinum/raid
BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH
THOSE IN FIRST ALTERNATE
/dev/vinum/raid: NOT LABELED AS A BSD FILE SYSTEM
(unused)  
bashful# mount -r /dev/vinum/raid /raid
bashful# ls /raid
liberty mysql   orville reform  tam
bashful# ls /raid/liberty
ls: /raid/liberty: Bad file descriptor
bashful# ls /raid/reform
/raid/reform
bashful# ls /raid/reform/*
ls: No match.
bashful# ls /raid/orville
ls: /raid/orville: Bad file descriptor
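
[Not something from the thread, but one standard avenue when fsck
rejects the primary superblock is to point it at an alternate with -b;
32 is the customary first alternate for UFS1.  Given that the plex
contents may themselves be inconsistent here, a cautious first pass
would be read-only:

bashful# fsck -n -b 32 /dev/vinum/raid

If that run looks sane, the same command without -n would attempt the
actual repair.]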






Re: Has anybody EVER successfully recovered VINUM?

2004-12-09 Thread orville weyrich
See below
--- Toomas Aas [EMAIL PROTECTED] wrote:

> I once had a problem with *invalid* vinum drive which was solved
> following this advice from Greg Lehey:
>
> http://makeashorterlink.com/?V29425BF9

So the procedure that worked for you was???

resetconfig
create original.config.file

I have seen allusions to this procedure, and was planning to try it as
a last resort (last because of the dire warnings that are associated
with resetconfig).  Nowhere, however, have I seen the above two lines
explicitly juxtaposed, so I have been reluctant to try it.

Before I try the above, I had an idea last night that I want to try: I
will buy a large, cheap IDE drive (big enough to hold an entire plex)
and ADD it to my configuration (with all subdisks on the IDE drive).
At this point, I expect to be able to sync all subdisks of the IDE
plex against one or the other of the existing plexes to obtain a
completely functional plex, at which point my RAID should come alive
for me to back it up.

Does this plan seem workable?
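
[To make that concrete, a config fragment for such a plan might look
roughly like this.  A sketch only: the drive name ide0, the device
/dev/ad1s1e and the subdisk lengths are assumptions; the 512 kB stripe
size matches the PO column in the listings in this thread:

drive ide0 device /dev/ad1s1e
plex name raid.p4 org striped 512k volume raid
  sd name raid.p4.s0 drive ide0 len 2151m
  sd name raid.p4.s1 drive ide0 len 2151m

(and eight more subdisks of the same length), read in and synced with
something like

vinum -> create ide-plex.conf
vinum -> start raid.p4

where start on the new plex should kick off a revive from a surviving
plex.]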






Re: Has anybody EVER successfully recovered VINUM?

2004-12-09 Thread orville weyrich
See below
--- Greg 'groggy' Lehey [EMAIL PROTECTED] wrote:

> On Thursday,  9 December 2004 at  7:06:35 -0800,
>> --- Toomas Aas [EMAIL PROTECTED] wrote:
>>> I once had a problem with *invalid* vinum drive
>>
>> So the procedure that worked for you was???
>>
>> resetconfig
>
> This doesn't sound like what you want to do.  Which part of
> http://www.vinumvm.org/vinum/replacing-drive.html is a problem for you?

The problem is that I have two subdisks, raid.p0.s9 and raid.p1.s4,
that originally were associated with the drive ahc0t15.  But when I
try to add a new drive with the name ahc0t15, it does not associate
itself with the two subdisks.  Trying to start the subdisks gives the
error:

vinum -> start raid.p0.s9
Can't start raid.p0.s9: Drive is down (5)

Dumpconfig shows that the drive associated with the two subdisks is
*invalid*:

sd name raid.p0.s9 drive *invalid* plex raid.p0 len 4402583s driveoffset 265s state crashed plexoffset 0s

but also shows the unused drive:

Drive ahc0t15:  Device /dev/da9s1e
        Created on bashful.weyrich.com at Wed Dec  8 17:24:07 2004
        Config last updated Wed Dec  8 17:37:27 2004
        Size: 4512230400 bytes (4303 MB)


The procedure replacing-drive.html did work in another
case.  It doesn't seem to work here.  

Is there some other way to tell vinum that drive
ahc0t15 should be associated with the subdisks
raid.p0.s9 and raid.p1.s4?






Re: Has anybody EVER successfully recovered VINUM?

2004-12-09 Thread orville weyrich
see below
--- Toomas Aas [EMAIL PROTECTED] wrote:

> orville weyrich wrote:
>
>> So the procedure that worked for you was???
>>
>> resetconfig
>
> NO!!! At no point in the procedure did I do resetconfig.
>
>> create original.config.file
>
> Yes, that's basically it.

I tried that already.  It did not have the desired effect.  It created
TWO MORE plexes, which did not have disk space to back them up
(because the drives were fully allocated by the original
configuration).

It took me a while to figure out how to undo that (I had to stop
plexes p0 and p1 before trying to stop and remove the new plexes p2
and p3).





Re: Has anybody EVER successfully recovered VINUM?

2004-12-08 Thread orville weyrich
See below
--- Toomas Aas [EMAIL PROTECTED] wrote:

> orville weyrich wrote:
>
>> In any event, the example shown in the link lacks context.  Is the
>> failed drive named data1?  It is automagically replaced by a new
>> drive named data3?
>
> In all cases I've replaced a failed drive, I have named the new drive
> the same as the old drive.


THANK YOU THANK YOU THANK YOU

That solved HALF my problem (i.e., it allowed me to revive one of my
two down disks).  This also shows that vinum CAN handle a fault in
both plexes at the same time.

BUT BUT BUT

I still have another problem -- the same trick does not work for the
OTHER down disk.

vinum -> dumpconfig shows:

sd name raid.p0.s9 drive *invalid* plex raid.p0 len 4402583s driveoffset 265s state crashed plexoffset 0s

Note the *invalid* where the name of my disk should be.

Apparently VINUM has forgotten that raid.p0.s9 should be associated
with the disk named ahc0t15, so creating a new disk named ahc0t15 does
not associate with the necessary subdisk.  (*invalid* does not work as
a disk name.)

How do I deal with THIS issue?
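
[A cheap precaution worth taking before experimenting further -- a
suggestion, not part of the original exchange: vinum's printconfig
writes out whatever configuration it still believes in, in a format
that create can read back, giving a reference copy of the intact
objects:

vinum -> printconfig /tmp/raid.conf]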







Has anybody EVER successfully recovered VINUM?

2004-12-07 Thread orville weyrich
I have been trying to figure out how to get VINUM to recognize a new
disk after a disk failure, and no luck at all.

I cannot find instructions in the official documentation, nor in the
FreeBSD Diary.

Lots of places tell how to build a VINUM system.  Nobody ever talks
about how to recover from a disk failure.

Can someone PLEASE help me recover?  I have already
posted complete information to this list, with no
answer.  I will give a short version now and provide
more info if requested.

I am running FreeBSD 4.10.  My current vinum list
includes the following:

S raid.p0.s2            State: up       PO:     1024 kB Size:       2149 MB
S raid.p0.s3            State: up       PO:     1536 kB Size:       2149 MB
S raid.p0.s4            State: up       PO:     2048 kB Size:       2149 MB
S raid.p0.s5            State: up       PO:     2560 kB Size:       2149 MB
S raid.p0.s6            State: up       PO:     3072 kB Size:       2149 MB
S raid.p0.s7            State: up       PO:     3584 kB Size:       2149 MB
S raid.p0.s8            State: up       PO:     4096 kB Size:       2149 MB
S raid.p0.s9            State: crashed  PO:        0  B Size:       2150 MB
S raid.p1.s0            State: up       PO:        0  B Size:       2151 MB
S raid.p1.s1            State: up       PO:      512 kB Size:       2151 MB
S raid.p1.s2            State: up       PO:     1024 kB Size:       2151 MB
S raid.p1.s3            State: up       PO:     1536 kB Size:       2151 MB
S raid.p1.s4            State: obsolete (detached)      Size:       2150 MB
S raid.p1.s5            State: reborn   PO:     2560 kB Size:       2151 MB
S raid.p1.s6            State: up       PO:     3072 kB Size:       2151 MB
S raid.p1.s7            State: up       PO:     3584 kB Size:       2151 MB
S raid.p1.s8            State: up       PO:     4096 kB Size:       2151 MB
S raid.p1.s9            State: up       PO:     4608 kB Size:       2151 MB
S raid.p2.s0            State: stale    PO:     2304 MB Size:       2150 MB


The above represents a total of 10 drives, in a striped RAID
configuration (half of each disk in each plex).

Subdisks p0.s0 and p1.s5 are on one failed disk; subdisks p0.s9 and
p1.s4 are on a second failed disk.

Subdisks raid.p2.s0 and raid.p2.s1 are on the replacement disk that I
was trying to install.

I tried detaching the replacement subdisk and the failed subdisk, and
then reattaching the replacement in the failed position, and made
things worse.


Can somebody PLEASE tell me two things:

(1) What sequence of steps SHOULD I have taken to replace the disks?
(I promise I will test it and document it for all to see.)

(2) How can I recover NOW?  (I seem to recall reading somewhere that
it is actually possible to reset the config and recreate it with the
same config file as used originally without destroying any data, and
then judiciously use setstate to mark the valid subdisks as up and the
invalid ones as obsolete.  But this is a drastic step that I don't
want to take without some guidance.)
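
[For the record, the drastic sequence described in (2) would look
roughly like this -- a reconstruction from the description above, not
a tested recipe; resetconfig destroys the on-disk configuration, so
the create file must be exactly the original one:

vinum -> resetconfig
vinum -> create original.config.file
vinum -> setstate up raid.p0.s1        (one per known-good subdisk)
vinum -> setstate obsolete raid.p1.s4  (one per stale subdisk)]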

Please (grovel grovel) help me!

Thanks

orville.



VINUM Disaster Recovery

2004-12-05 Thread orville weyrich
I have a 10 disk VINUM configuration and two of the
disks are trashed.  In theory there is still enough
redundant information to get things working again
without data loss.

Vinum has detected a configuration error (duh -- two disks are toast,
plus in recovery I accidentally created two more plexes) and has taken
it upon itself to stop configuration updates to prevent any further
corruption (thanks! :-).

At this point I have looked at
http://www.vinumvm.org/vinum/how-to-debug.html and
have run a command like the following:

( dd if=/dev/da9s1e skip=8 count=50 | tr -d '\000-\011\200-\377' ; echo ) > da9s1e.log

on all 10 disks to obtain a file containing each disk's on-disk
configuration.  As hoped, eight of the disks show an output similar to
the attached file da1s1e.log (differing only, as expected, in the
first line).

See the attached log file for a sample output.

PLEASE HELP CONFIRM MY PLAN (FOLLOWING) FOR PROCEEDING -- I DO NOT
WANT TO DO ANYTHING DISASTROUS.

My thought is that I need to turn on updates, then delete the two
unwanted plexes raid.p2 and raid.p3 (which were accidentally created),
detach the corrupt subdisks, and then hopefully VINUM will forget
about the two disks that are toast (or do I somehow have to tell VINUM
to forget the disks?).

My plan is as follows:

First, selectively start vinum:

vinum -> read /dev/da1s1e /dev/da2s1e /dev/da3s1e /dev/da4s1e /dev/da5s1e /dev/da6s1e /dev/da7s1e /dev/da8s1e

Second, enable configuration updates:

vinum -> setdaemon 0

Third, save the configuration:

vinum -> saveconfig

Fourth, stop and remove the two unwanted plexes and all attached
subdisks:

vinum -> stop -f raid.p3
vinum -> stop -f raid.p2
vinum -> rm -r raid.p3
vinum -> rm -r raid.p2

Fifth, stop and detach the corrupted subdisks:

vinum -> stop -f raid.p0.s0
vinum -> stop -f raid.p0.s9
vinum -> stop -f raid.p1.s4
vinum -> stop -f raid.p1.s5

vinum -> detach raid.p0.s0
vinum -> detach raid.p0.s9
vinum -> detach raid.p1.s4
vinum -> detach raid.p1.s5


At this point I expect to have a functional volume
that can be mounted and backed up, prior to the next
step of reinstalling the crashed disks, creating new
subdisks, attaching them to the plexes, and
resynching. 
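
[For that later reinstallation step, a hedged sketch of what the
replacement might look like for one of the crashed disks -- the device
name /dev/da0s1e is an assumption, lengths follow the dumpconfig
excerpts elsewhere in this thread, syntax per vinum(8), and the old
detached subdisks may first need to be removed with rm if they still
claim these names:

drive ahc0t02 device /dev/da0s1e
sd name raid.p0.s0 drive ahc0t02 plex raid.p0 plexoffset 0s len 4402583s
sd name raid.p1.s5 drive ahc0t02 plex raid.p1 plexoffset 2560k len 4402583s

read in and revived with

vinum -> create newdisk.conf
vinum -> start raid.p0.s0
vinum -> start raid.p1.s5]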

PLEASE CONFIRM MY APPROACH OR TELL ME WHERE I AM
WRONG!

Thanks

orville









[Attachment: da1s1e.log]


Sendmail mungs sender address

2004-12-04 Thread orville weyrich
I was running a FreeBSD 4.3 system.  It crashed and I
built up a new FreeBSD 4.10 system.  But I cannot get
Sendmail to work the way it did before.  

The relevant part of my network includes two hosts, call them FIRE and
BASHFUL.  FIRE is my firewall machine.  It runs FreeBSD 4.3 and has
not changed.  It relays mail from my internal network to the world and
back.  BASHFUL is behind the firewall, is my new 4.10 machine, and is
configured to use FIRE as its smart host.  FIRE is known to the
universe as FIRE.MYDOMAIN.COM.

I want mail sent from BASHFUL to carry a sender address of
[EMAIL PROTECTED]  Instead it carries a sender address of
[EMAIL PROTECTED] (if sent from pine on BASHFUL) or [EMAIL PROTECTED]
(if sent from a command-line invocation of sendmail itself).  All
changes to sendmail.cf and pine configuration seem to be ignored.

What has changed that my outgoing mail is broken?  My
incoming mail works fine.

Thanks

Orville Weyrich





Problems with VINUM recovery

2004-12-04 Thread orville weyrich
I was trying to test my ability to recover from a disk
crash and managed to toast my system disk.  Therefore,
I do not have any relevant vinum history file or
messages file.  

What happened is that I purposely blew away the drive
named ahc0t15, and was trying to recover it.  

In error, I ran the create command A SECOND TIME with the exact same
configuration file as the first time, and it created two additional
plexes, which of course couldn't fit onto the physical disks, except
for raid.p2.s0, raid.p2.s9, raid.p3.s5 and raid.p3.s5.

I decided that it wasn't working properly, and was in the process of
zapping my entire vinum volume to recreate it and try again (I had
decided that my plex pattern was not the best), and had zapped the
drive ahc0t02, when I accidentally blew away the system disk, crashing
my system.

The good news is that most of the important data I
care about on the system disk was copied to the vinum
volume just to put some data on the volume -- but now
I really WANT that data on the vinum volume, because
it is the most recent backup.

As it stands now, I want to:

(1) delete raid.p2 and raid.p3

(2) rebuild drive ahc0t02 to receive the revived raid.p0.s0 and
raid.p1.s5

(3) rebuild drive ahc0t15 to receive the revived raid.p0.s9 and
raid.p1.s4

(4) revive raid.p0.s0 from the valid raid.p1.s0

(5) revive raid.p0.s9 from the valid raid.p1.s9

(6) revive raid.p1.s4 from the valid raid.p0.s4

(7) revive raid.p1.s5 from the valid raid.p0.s5

I think (hope!) all of this is possible.
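
[In vinum command terms, steps (1) through (7) might reduce to
something like the following sketch -- untested, and the device paths
behind drives.conf are assumptions:

vinum -> stop -f raid.p3
vinum -> stop -f raid.p2
vinum -> rm -r raid.p3
vinum -> rm -r raid.p2
vinum -> create drives.conf    (a file defining drives ahc0t02 and
                                ahc0t15 on their replacement devices)
vinum -> start raid.p0.s0
vinum -> start raid.p0.s9
vinum -> start raid.p1.s4
vinum -> start raid.p1.s5

where start on each crashed subdisk should trigger the revive from the
surviving plex.]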

My vinum volume was created on a vanilla FreeBSD 4.3
system.  The system has now been reloaded with a
FreeBSD 4.10 system in order to produce the vinum list
output attached (let me know if you have trouble
reading the file as attached).

Trying to rm raid.p2.s8 gives the message:

Can't remove raid.p2.s8: Device busy (16)
*** Warning: configuration updates are disabled. ***

I am afraid to reenable configuration updates until I
am sure I know what I am doing.

Since I messed up in my previous attempts, and I am
now between a rock and a hard place, I need some
guidance regarding how to recover -- the failed drill
has suddenly become real :-(

I have looked at the document
http://www.vinumvm.org/vinum/replacing-drive.html and
am not sure how to apply it to the above scenario.

Please please help ... please

orville weyrich

VINUM LISTING
===

8 drives:
D ahc0t03               State: up       Device /dev/da1s1e      Avail: 2152/4303 MB (50%)
D ahc0t04               State: up       Device /dev/da2s1e      Avail: 2152/4303 MB (50%)
D ahc0t09               State: up       Device /dev/da3s1e      Avail: 2152/4303 MB (50%)
D ahc0t10               State: up       Device /dev/da4s1e      Avail: 2152/4303 MB (50%)
D ahc0t11               State: up       Device /dev/da5s1e      Avail: 2152/4303 MB (50%)
D ahc0t12               State: up       Device /dev/da6s1e      Avail: 2152/4303 MB (50%)
D ahc0t13               State: up       Device /dev/da7s1e      Avail: 2152/4303 MB (50%)
D ahc0t14               State: up       Device /dev/da8s1e      Avail: 2152/4303 MB (50%)
D ahc0t02               State: referenced       Device          Avail: 0/0 MB
D *invalid*             State: referenced       Device          Avail: 0/0 MB
D ahc0t15               State: referenced       Device          Avail: 0/0 MB

1 volumes:
V raid                  State: up       Plexes:       4 Size:         21 GB

4 plexes:
P raid.p0 S             State: corrupt  Subdisks:    10 Size:         21 GB
P raid.p1 S             State: corrupt  Subdisks:    10 Size:         21 GB
P raid.p2 S             State: faulty   Subdisks:    10 Size:         21 GB
P raid.p3 S             State: corrupt  Subdisks:    10 Size:         21 GB

40 subdisks:
S raid.p0.s0            State: crashed  PO:        0  B Size:       2151 MB
S raid.p0.s1            State: up       PO:      512 kB Size:       2151 MB
S raid.p0.s2            State: up       PO:     1024 kB Size:       2151 MB
S raid.p0.s3            State: up       PO:     1536 kB Size:       2151 MB
S raid.p0.s4            State: up       PO:     2048 kB Size:       2151 MB
S raid.p0.s5            State: up       PO:     2560 kB Size:       2151 MB
S raid.p0.s6            State: up       PO:     3072 kB Size:       2151 MB
S raid.p0.s7            State: up       PO:     3584 kB Size:       2151 MB
S raid.p0.s8            State: up       PO:     4096 kB Size:       2151 MB
S raid.p0.s9            State: crashed  PO:     4608 kB Size:       2151 MB
S raid.p1.s0            State: up       PO:        0  B Size:       2151 MB
S raid.p1.s1            State: up       PO:      512 kB Size:       2151 MB
S raid.p1.s2            State: up       PO:     1024 kB Size:       2151 MB
S raid.p1.s3            State: up       PO:     1536 kB Size:       2151 MB
S raid.p1.s4            State: obsolete PO:     2048 kB Size:       2151 MB
S raid.p1.s5            State: crashed  PO:     2560 kB Size:       2151 MB
S raid.p1.s6            State: up

FDISK Woes

2004-12-04 Thread orville weyrich
I did a stupid thing and zapped my FreeBSD 4.3 system disk (/dev/ad0)
with FDISK and Label (in /stand/sysinstall) instead of zapping
/dev/da0 as I intended.  Needless to say, my system got sick.

I have built a new FreeBSD 4.10 system on another disk and have the
old (zapped) system disk installed in the system (but not mounted).

Is there a way to reconstruct the disk label?  (I need to find the
locations of the / (ad0s1a), /usr (ad0s1f), and /var (ad0s1e)
partitions, and then write a suitable disk label without destroying
the data on the disk.)
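
[One possible avenue -- a suggestion, not something tried in this
thread: the sysutils/scan_ffs port scans a raw device for UFS
superblocks and prints the offsets and sizes of the filesystems it
finds, and those numbers can then be written back by hand.  Assuming
the zapped disk now shows up as ad1 in the new system:

bashful# scan_ffs /dev/ad1s1    (prints one guess per filesystem found)
bashful# disklabel -e ad1s1     (re-enter the a, e and f partitions
                                 from those numbers)]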

I feel so dumb doing this, but it seems strange that
FDISK would allow me to zap the label of a mounted
disk.

Anyway, please help me save my system!

TIA

orville.




Re: Sendmail mungs sender address SOLVED

2004-12-04 Thread orville weyrich
Thanks to those who replied.  I solved the problem by building a new
sendmail.cf file with the following added features not found in the
stock freebsd.cf:

define(`SMART_HOST',`FIRE')
FEATURE(accept_unresolvable_domains)
MASQUERADE_AS(`MYDOMAIN.COM')

and tweaking my /etc/hosts file on BASHFUL.
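
[For anyone reconstructing this, those lines belong in an m4 master
config (.mc) file from which sendmail.cf is generated.  A minimal
sketch for a stock 4.10 box; the include path, the OSTYPE value and
masquerade_envelope are assumptions, the last needed only if envelope
senders must be rewritten as well as headers:

divert(0)dnl
include(`/usr/share/sendmail/cf/m4/cf.m4')dnl
VERSIONID(`bashful.mc')dnl
OSTYPE(freebsd4)dnl
define(`SMART_HOST', `FIRE')dnl
FEATURE(accept_unresolvable_domains)dnl
MASQUERADE_AS(`MYDOMAIN.COM')dnl
FEATURE(masquerade_envelope)dnl
MAILER(local)dnl
MAILER(smtp)dnl

built and installed with something like

bashful# cd /etc/mail && make && make install && make restart]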



