Re: Problem with dump over SSH: Operation timed out

2007-08-13 Thread Nikos Vassiliadis
On Saturday 11 August 2007 02:28, Kenny Dail wrote:
> > Thank you for those suggestions, it's appreciated. However, I get the
> > same results after setting those values on both the server and the
> > client. SCP starts at full speed, but at 20% of the 200 MB file it starts
> > to stall. All ICMP traffic was open on both firewalls at that time.
>
> I had something similar to this happen to me once when I traded out a
> low-end Linksys router for an enterprise-grade one. Large transfers were OK
> with the low-end router, but died horribly with the "good" router. It
> was a FreeBSD 4.11 server at the time, and in the end it turned out that
> the increase in bandwidth was directly related to the stall; putting QoS
> on the traffic to bring it back down to the previous speeds made the
> stalling go away. I never did find out if it was a crappy NIC, crappy disk
> drives, or crappy configuration on the server.

I would try throttling too. I have seen ADSL modem/routers dropping
high-traffic connections as well. You said you have a cable modem, which
does a much simpler job than an ADSL modem/router, but I wouldn't trust it
anyway... As you said, you did manage to get the dump to your computer at
home; assuming that you have less bandwidth at home, the high-traffic
situation between the two offices could be the problem.

Give ipfw & dummynet a try; it should be very easy to throttle your
connection.
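
For example, a rough sketch (the 2 Mbit/s cap, the rule numbers and the
bge0 interface are only placeholders to adjust for your setup):

  # Load dummynet (it should pull in ipfw as well) and immediately add a
  # pass-all rule; the stock ipfw module defaults to deny-all, so keep this
  # on ONE command line, or do it from the console, to avoid locking
  # yourself out of the ssh session.
  kldload dummynet && ipfw add 65000 allow ip from any to any

  # Cap outgoing ssh/scp traffic at roughly 2 Mbit/s.
  ipfw pipe 1 config bw 2Mbit/s
  ipfw add 100 pipe 1 tcp from any to any 22 out via bge0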

HTH

Nikos



Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Kenny Dail
> Thank you for those suggestions, it's appreciated. However, I get the same
> results after setting those values on both the server and the client. SCP
> starts at full speed, but at 20% of the 200 MB file it starts to stall. All
> ICMP traffic was open on both firewalls at that time.

I had something similar to this happen to me once when I traded out a
low-end Linksys router for an enterprise-grade one. Large transfers were OK
with the low-end router, but died horribly with the "good" router. It
was a FreeBSD 4.11 server at the time, and in the end it turned out that the
increase in bandwidth was directly related to the stall; putting QoS on the
traffic to bring it back down to the previous speeds made the stalling go
away. I never did find out if it was a crappy NIC, crappy disk drives, or
crappy configuration on the server.
-- 
Kenny Dail 



Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Bram Schoenmakers
On Friday 10 August 2007, Nikos Vassiliadis wrote:

Hi,

Some more info:

> But, if a router in the path
> is filtering all ICMP traffic then the problem will remain.

No, most probably not. I live a few hundred metres from the office, with the
same type of internet connection and the same provider. The traceroute from
the client to the office/my house is identical up to the last-but-one hop.
And I just succeeded in dumping it to my own computer (running Gentoo Linux,
I think with the same modem, and a pretty default router in between).

So either the cable modem or the server (running IPF) at the office is the 
culprit then.

Kind regards,

-- 
Bram Schoenmakers

What is mind? No matter. What is matter? Never mind.
(Punch, 1855)


Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Bram Schoenmakers
On Friday 10 August 2007, you wrote:

Hi,

> This really looks like broken PMTU discovery.
>
> Is this still the case? After the firewall changes you did?
> Things may be different now. But, if a router in the path
> is filtering all ICMP traffic then the problem will remain.
>
> Try this on both hosts:
> sysctl net.inet.tcp.path_mtu_discovery=0
> sysctl net.inet.tcp.mssdflt=1000  (just to be safe;
> the default on 4.x is 1400, which can be too big,
> while on 6.x it is just 512)
>
> Please, just use scp to do your testing.
> Once you rule out the possibility of a problematic
> network, you can add the (gzip|bzip2) & dump parts.
>
> Nikos

Thank you for those suggestions, it's appreciated. However, I get the same
results after setting those values on both the server and the client. SCP
starts at full speed, but at 20% of the 200 MB file it starts to stall. All
ICMP traffic was open on both firewalls at that time.

Hmmm

Kind regards,

-- 
Bram Schoenmakers

What is mind? No matter. What is matter? Never mind.
(Punch, 1855)


Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Alex Zbyslaw

Bram Schoenmakers wrote:

> > If you can write (and compress if short of disk space) the dump
> > locally and try an scp to your remote host as Nikos is suggesting,
> > that will narrow down the problem a bit.  Any other large file will
> > do: doesn't have to be a dump.
>
> As I wrote in my initial mail:
>
> ==
> * Downloading the very same big file over SCP causes problems too, below
> some SCP debug output. The connection drops quickly after it gained a
> reasonable download speed.

Sorry, didn't pick up the thread until late in the day.  If this is the 
case, gzip over bzip2 is not likely to be the answer, nor is any SSH 
keepalive option (though they'd be easy to try *just in case*).


Other than Nikos' PMTU suggestions I don't have much to offer.

You could try another ethernet card, if you have one, to see if that
makes any difference.  Or you can do some creative monitoring with tcpdump
to see what traffic is being sent (try to exclude the actual ssh transfer).
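
For instance, something along these lines (bge0 is just the client
interface named elsewhere in this thread; swap in whichever side you are
watching) would show ICMP and TCP resets around the moment of the stall
without capturing the bulk ssh payload itself:

  # Watch ICMP (e.g. "fragmentation needed") and TCP RSTs on the client
  # interface while the scp runs.
  tcpdump -ni bge0 'icmp or (tcp[tcpflags] & tcp-rst != 0)'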


Finally, can you try the scp with *no* firewall in place?  This is straw clutching.

Presumably *some* data does actually arrive at the other end?

--Alex



Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Nikos Vassiliadis
On Thursday 09 August 2007 19:51, Bram Schoenmakers wrote:
> On Thursday 09 August 2007, Alex Zbyslaw wrote:
>
> Hello,
>
> > Bram Schoenmakers wrote:
> > ># /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
> > >[EMAIL PROTECTED] \
> > >dd of=/backup/webserver/root.0.bz2
> >
> > bzip2 is darned slow and not always much better than gzip -9.  It
> > might be that ssh is just timing out in some way (I've seen that but
> > not with ethernet dumps specifically).  Can you try the test using
> > gzip -9 instead of bzip?  If that works, then look for ssh options
> > that affect timeouts, keepalives etc.  In particular,
> > ServerAliveInterval 60 in a .ssh/config stopped xterm windows dying on
> > me to certain hosts.  YMMV :-(
> >
> > If you have the disk space then you could try without any compression
> > at all; or try doing the compression remotely:
> >
> > >/sbin/dump -0 -a -C 64 -L -h 0 -f - / | \
> > > /usr/local/bin/ssh [EMAIL PROTECTED] \
> > > "gzip -9 > /backup/webserver/root.0.gz"
> >
> > Otherwise:
> >
> > Nikos Vassiliadis wrote:
> > >1) Can you dump the file locally?
> > >
> > >2) Is scp working?
> >
> > If you can write (and compress if short of disk space) the dump
> > locally and try an scp to your remote host as Nikos is suggesting,
> > that will narrow down the problem a bit.  Any other large file will
> > do: doesn't have to be a dump.
>
> As I wrote in my initial mail:
>
> ==
> * Downloading the very same big file over SCP causes problems too, below
> some SCP debug output. The connection drops quickly after it gained a
> reasonable download speed.

This really looks like broken PMTU discovery.

Is this still the case? After the firewall changes you did?
Things may be different now. But, if a router in the path
is filtering all ICMP traffic then the problem will remain.

Try this on both hosts:
sysctl net.inet.tcp.path_mtu_discovery=0
sysctl net.inet.tcp.mssdflt=1000  (just to be safe;
the default on 4.x is 1400, which can be too big,
while on 6.x it is just 512)

Please, just use scp to do your testing.
Once you rule out the possibility of a problematic
network, you can add the (gzip|bzip2) & dump parts.
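
For example (the user, host and paths are placeholders), something like
this keeps dump and bzip2 out of the picture entirely:

  # Create a 200 MB throwaway file and copy it with verbose output, so only
  # the network path and ssh themselves are being exercised.
  dd if=/dev/zero of=/tmp/zeroes bs=1024k count=200
  scp -v /tmp/zeroes [EMAIL PROTECTED]:/tmp/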

Nikos


Re: Problem with dump over SSH: Operation timed out

2007-08-10 Thread Nikos Vassiliadis
On Thursday 09 August 2007 17:39, Alex Zbyslaw wrote:
> Nikos Vassiliadis wrote:
> >Keep in mind that dump(8) uses UFS2 snapshots. I don't know
> >the current status, but in the past, snapshots were not working
> >that well.
>
> This statement is far too general and IMHO does a disservice to those
> who worked on snapshots.

The last thing that I want is to be disrespectful to people
making FreeBSD happen. So, I apologize.

Nikos


Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Bram Schoenmakers
On Thursday 09 August 2007, Alex Zbyslaw wrote:

Hello,

> Bram Schoenmakers wrote:
> ># /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
> >[EMAIL PROTECTED] \
> >dd of=/backup/webserver/root.0.bz2
>
> bzip2 is darned slow and not always much better than gzip -9.  It might
> be that ssh is just timing out in some way (I've seen that but not with
> ethernet dumps specifically).  Can you try the test using gzip -9
> instead of bzip?  If that works, then look for ssh options that affect
> timeouts, keepalives etc.  In particular, ServerAliveInterval 60 in a
> .ssh/config stopped xterm windows dying on me to certain hosts.  YMMV :-(
>
> If you have the disk space then you could try without any compression at
> all; or try doing the compression remotely:
>
>/sbin/dump -0 -a -C 64 -L -h 0 -f - / | \
> /usr/local/bin/ssh [EMAIL PROTECTED] \
> "gzip -9 > /backup/webserver/root.0.gz"
>
> Otherwise:
>
> Nikos Vassiliadis wrote:
> >1) Can you dump the file locally?
> >
> >2) Is scp working?
>
> If you can write (and compress if short of disk space) the dump locally and
> try an scp to your remote host as Nikos is suggesting, that will narrow
> down the problem a bit.  Any other large file will do: doesn't have to be a
> dump.

As I wrote in my initial mail:

==
* Downloading the very same big file over SCP causes problems too, below some 
SCP debug output. The connection drops quickly after it gained a reasonable 
download speed.

Read from remote host office.example.com: Connection reset by peer
debug1: Transferred: stdin 0, stdout 0, stderr 77 bytes in 103.3 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.7
debug1: Exit status -1
lost connection
==

That was just a file generated with 'dd if=/dev/zero of=zeroes bs=1024k
count=200'. So no, SCP doesn't work.

I haven't tried gzip -9 yet, although it looks more like a workaround than a
solution to the real problem.

Kind regards,

-- 
Bram Schoenmakers

You can contact me directly on Jabber with [EMAIL PROTECTED]


Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Alex Zbyslaw

Nikos Vassiliadis wrote:

> Keep in mind that dump(8) uses UFS2 snapshots. I don't know
> the current status, but in the past, snapshots were not working
> that well.

This statement is far too general and IMHO does a disservice to those
who worked on snapshots.

There were (and maybe even are, but I haven't seen a problem report in
ages) issues with large numbers of snapshots or with large (active?)
filesystems, but in that case *dump would never have started* as the
snapshot wouldn't have completed.


I'm still running 5.4 which is pretty "in the past" and have no issue 
with dump -L sending the files over the ethernet either compressing 
locally or remotely.  (Well, I do, but only with one ethernet driver and 
it's either a driver or a hardware fault and nothing to do with dump or 
snapshots).


Other 5.4 systems I run use snapshots on a daily basis for other 
purposes and again have no problems.


Bram Schoenmakers wrote:

> # /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
> [EMAIL PROTECTED] \
>     dd of=/backup/webserver/root.0.bz2


bzip2 is darned slow and not always much better than gzip -9.  It might 
be that ssh is just timing out in some way (I've seen that but not with 
ethernet dumps specifically).  Can you try the test using gzip -9 
instead of bzip?  If that works, then look for ssh options that affect 
timeouts, keepalives etc.  In particular, ServerAliveInterval 60 in a 
.ssh/config stopped xterm windows dying on me to certain hosts.  YMMV :-(
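
For reference, a minimal ~/.ssh/config sketch on the dumping host (the
host name is the one used in this thread; the numbers are just a starting
point):

  Host office.example.com
      ServerAliveInterval 60    # send an application-level keepalive every 60s
      ServerAliveCountMax 3     # give up after 3 unanswered keepalives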


If you have the disk space then you could try without any compression at 
all; or try doing the compression remotely:


   /sbin/dump -0 -a -C 64 -L -h 0 -f - / | \
   /usr/local/bin/ssh [EMAIL PROTECTED] \
   "gzip -9 > /backup/webserver/root.0.gz"


Otherwise:

Nikos Vassiliadis wrote:

> 1) Can you dump the file locally?
>
> 2) Is scp working?


If you can write (and compress if short of disk space) the dump locally and try 
an scp to your remote host as Nikos is suggesting, that will narrow down the 
problem a bit.  Any other large file will do: doesn't have to be a dump.

--Alex




Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Nikos Vassiliadis
On Thursday 09 August 2007 16:43, Bram Schoenmakers wrote:
> On Thursday 09 August 2007, you wrote:
> > Try using a much lower MTU, something like 1400 or perhaps lower,
> > just for testing. You should configure this, on both client and
> > server.
> >
> > I'm not familiar enough with ipf to give the exact rule, but I would allow
> > ALL ICMP traffic, at least for testing purposes. I think this is
> > correct:
> > pass out quick proto icmp from any to any
> > pass in quick proto icmp from any to any
> >
> > somewhere above the "block in log quick on re0 all" rule.
> >
> > Hope this helps a bit
> >
> > Nikos
>
> Thank you for your answer.
>
> I have added the 'pass in for icmp' rule to the firewall (pass out did
> already exist). There was a noticeable improvement; the /usr dump came
> much further than before. But at about 80% there was the timeout again.

Strange. Is it possible that the filesystem is corrupted, so
dump cannot continue and quits?

Keep in mind that dump(8) uses UFS2 snapshots. I don't know
the current status, but in the past, snapshots were not working
that well.

1) Can you dump the file locally?

2) Is scp working?

>
> I tried lowering the MTU value at the server side, but nearly all other
> network traffic stopped working, so that is not the way to go.

Of course, this could be a problem. The MTU must be the same
across the ethernet segment. And obviously your upstream
router is administered by your ISP.

Nikos


Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Bram Schoenmakers
On Thursday 09 August 2007, you wrote:

> Try using a much lower MTU, something like 1400 or perhaps lower,
> just for testing. You should configure this, on both client and server.
>
> I'm not familiar enough with ipf to give the exact rule, but I would allow
> ALL ICMP traffic, at least for testing purposes. I think this is
> correct:
> pass out quick proto icmp from any to any
> pass in quick proto icmp from any to any
>
> somewhere above the "block in log quick on re0 all" rule.
>
> Hope this helps a bit
>
> Nikos

Thank you for your answer.

I have added the 'pass in for icmp' rule to the firewall (pass out did already 
exist). There was a noticeable improvement; the /usr dump came much further
than before. But at about 80% there was the timeout again.

I tried lowering the MTU value at the server side, but nearly all other 
network traffic stopped working, so that is not the way to go.

Kind regards,

-- 
Bram Schoenmakers

What is mind? No matter. What is matter? Never mind.
(Punch, 1855)


Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Jerry McAllister
On Thu, Aug 09, 2007 at 10:25:41AM +0200, Bram Schoenmakers wrote:

> Dear list,
> 
> There is a problem with performing a dump from our webserver at the data 
> centre to a backup machine at the office. Every time we try to perform a dump,
> the SSH tunnel dies:
> 
> # /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh 
> [EMAIL PROTECTED] \
> dd of=/backup/webserver/root.0.bz2
> 
>   DUMP: Date of this level 0 dump: Wed Aug  8 20:58:51 2007
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping snapshot of /dev/da0s1a (/) to standard output
>   DUMP: mapping (Pass I) [regular files]
>   DUMP: mapping (Pass II) [directories]
>   DUMP: estimated 60746 tape blocks.
>   DUMP: dumping (Pass III) [directories]
>   DUMP: dumping (Pass IV) [regular files]
> Read from remote host office.example.com: Operation timed out
>   DUMP: Broken pipe
>   DUMP: The ENTIRE dump is aborted.

Note: I have been getting something that looks very similar when I
try to dump a large file system - actually not all that large, just
about 30 GB - over the net to a different machine.  The one with the
tape is running a quite old FreeBSD - around 4.9 I think - and can't
be upgraded at the moment.  The one I am attempting to dump is on 6.1 - 
which I want to move to 6.2, but have been stalling because I haven't 
been able to get a good dump.

I don't have anything to add to Bram's facts here, just that a
timeout like this is happening on another system too.

jerry


> 
> Here are some facts about the situation:
> 
> * The client (where the dump takes place) runs FreeBSD 6.2-RELEASE-p4
> * The server (at the office) runs FreeBSD 6.1-RELEASE
> * Both hosts have ipf installed
> * Some IPF rules from the client:
> 
>   pass out quick on bge0 proto tcp from any to any flags S keep state
>   pass out quick on bge0 proto udp from any to any keep state
>   pass out quick on bge0 proto icmp from any to any keep state
>   pass out quick on bge0 proto gre from any to any keep state
>   pass out quick on bge0 proto esp from any to any keep state
>   pass out quick on bge0 proto ah from any to any keep state
> 
>   block out quick on bge0 all
>   pass in quick on bge0 proto tcp from any to any port = 22 flags S keep 
> state
> 
>   block return-rst in log quick on bge0 proto tcp from any to any
>   block in quick on bge0 proto tcp all flags S
>   block return-icmp-as-dest(port-unr) in log quick on bge0 proto udp from 
> any 
> to any
>   block in log quick on bge0 all
> 
> * Some IPF rules from the server:
> 
>   pass out quick on re0 proto tcp from any to any flags S keep state
>   pass out quick on re0 proto udp from any to any keep state
>   pass out quick on re0 proto icmp from any to any keep state
>   pass out quick on re0 proto gre from any to any keep state
>   pass out quick on re0 proto esp from any to any keep state
>   pass out quick on re0 proto ah from any to any keep state
>   
>   pass out quick on re0 proto tcp from any to any port = 22 keep state
> 
>   pass in quick on re0 proto tcp from any to any port = 22 flags S keep 
> state
> 
>   block return-rst in quick on re0 proto tcp all flags S
>   block in quick on re0 proto tcp all flags S
>   block return-icmp-as-dest(port-unr) in log quick on re0 proto udp from 
> any to 
> any
>   block in log quick on re0 all
> 
> * I've tried with TCPKeepAlive off
> * Setting ClientAlive{Interval,CountMax} on the server did not improve things.
> * Setting ServerAlive{Interval,CountMax} on the client didn't help either,
> although I got a different error:
> 
>   DUMP: Date of this level 0 dump: Wed Aug  8 21:05:26 2007
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping snapshot of /dev/da0s1f (/usr) to standard output
>   DUMP: mapping (Pass I) [regular files]
>   DUMP: mapping (Pass II) [directories]
>   DUMP: estimated 429177 tape blocks.
>   DUMP: dumping (Pass III) [directories]
>   DUMP: dumping (Pass IV) [regular files]
> Received disconnect from xxx.xxx.xxx.xxx: 2: Timeout, your session not 
> responding.
>   DUMP: Broken pipe
>   DUMP: The ENTIRE dump is aborted.
> 
> * A dump from the client machine to another server works fine. The receiving 
> host has an internet connection similar to the office's (cable).
> * Another webserver of ours, running FreeBSD 4.10-RELEASE in another data
> centre, can dump fine to the office with the same construction.
> This webserver uses IPFW.
> * Uploading a big file (200M) over SFTP to the 6.2 webserver causes no 
> problems.
> * Downloading the very same big file over SCP causes problems too, below some 
> SCP debug output. The connection drops quickly after it gained a reasonable 
> download speed.
> 
>   Read from remote host office.example.com: Connection reset by peer
>   debug1: Transferred: stdin 0, stdout 0, stderr 77 bytes in 103.3 seconds
>   debug1: Bytes per second: stdin 0.

Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Nikos Vassiliadis
On Thursday 09 August 2007 11:25, Bram Schoenmakers wrote:
> Dear list,
>
> There is a problem with performing a dump from our webserver at the data
> centre to a backup machine at the office. Every time we try to perform a
> dump, the SSH tunnel dies:
>
> # /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
> [EMAIL PROTECTED] \
> dd of=/backup/webserver/root.0.bz2
>
>   DUMP: Date of this level 0 dump: Wed Aug  8 20:58:51 2007
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping snapshot of /dev/da0s1a (/) to standard output
>   DUMP: mapping (Pass I) [regular files]
>   DUMP: mapping (Pass II) [directories]
>   DUMP: estimated 60746 tape blocks.
>   DUMP: dumping (Pass III) [directories]
>   DUMP: dumping (Pass IV) [regular files]
> Read from remote host office.example.com: Operation timed out
>   DUMP: Broken pipe
>   DUMP: The ENTIRE dump is aborted.
>
> Here are some facts about the situation:
>
> * The client (where the dump takes place) runs FreeBSD 6.2-RELEASE-p4
> * The server (at the office) runs FreeBSD 6.1-RELEASE
> * Both hosts have ipf installed
> * Some IPF rules from the client:
>
>   pass out quick on bge0 proto tcp from any to any flags S keep state
>   pass out quick on bge0 proto udp from any to any keep state
>   pass out quick on bge0 proto icmp from any to any keep state
>   pass out quick on bge0 proto gre from any to any keep state
>   pass out quick on bge0 proto esp from any to any keep state
>   pass out quick on bge0 proto ah from any to any keep state
>
>   block out quick on bge0 all
>   pass in quick on bge0 proto tcp from any to any port = 22 flags S keep
> state
>
>   block return-rst in log quick on bge0 proto tcp from any to any
>   block in quick on bge0 proto tcp all flags S
>   block return-icmp-as-dest(port-unr) in log quick on bge0 proto udp from
> any to any
>   block in log quick on bge0 all
>
> * Some IPF rules from the server:
>
>   pass out quick on re0 proto tcp from any to any flags S keep state
>   pass out quick on re0 proto udp from any to any keep state
>   pass out quick on re0 proto icmp from any to any keep state
>   pass out quick on re0 proto gre from any to any keep state
>   pass out quick on re0 proto esp from any to any keep state
>   pass out quick on re0 proto ah from any to any keep state
>
>   pass out quick on re0 proto tcp from any to any port = 22 keep state
>
>   pass in quick on re0 proto tcp from any to any port = 22 flags S keep
> state
>
>   block return-rst in quick on re0 proto tcp all flags S
>   block in quick on re0 proto tcp all flags S
>   block return-icmp-as-dest(port-unr) in log quick on re0 proto udp from
> any to any
>   block in log quick on re0 all
>

These rules deny incoming ICMP in general, so Path MTU discovery will
be broken.

[snip]
> * Maybe the MTU value was the cause, but setting it to 1472 on both
> sides didn't improve the situation either.

Try using a much lower MTU, something like 1400 or perhaps lower,
just for testing. You should configure this on both client and server.
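
For a quick, reversible test (re0/bge0 and the value are just the names
used earlier in this thread):

  # Temporarily lower the interface MTU; run it again with the default
  # (usually 1500) to revert.
  ifconfig re0 mtu 1400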

I'm not familiar enough with ipf to give the exact rule, but I would allow
ALL ICMP traffic, at least for testing purposes. I think this is
correct:
pass out quick proto icmp from any to any
pass in quick proto icmp from any to any

somewhere above the "block in log quick on re0 all" rule.

Hope this helps a bit

Nikos





Re: Problem with dump over SSH: Operation timed out

2007-08-09 Thread Peter Boosten

Bram Schoenmakers wrote:
> Dear list,
> 
> There is a problem with performing a dump from our webserver at the data 
> centre to a backup machine at the office. Every time we try to perform a dump,
> the SSH tunnel dies:
> 
[snip]
> * The client (where the dump takes place) runs FreeBSD 6.2-RELEASE-p4
> * The server (at the office) runs FreeBSD 6.1-RELEASE

Last week I did a dump from 6.2 to 6.2 _over the internet_ without any
problems, so IMHO it's not OS related.

Peter
--
http://www.boosten.org