Re: [Veritas-bu] Any gotchas with 10 GB Ethernet?
Actually it is the T5220 that does and the T5240 that does not. The T5220 has the 10GbE circuitry on the CPU. On the T5240 that was displaced by the circuitry the CPUs use to talk to each other, so the 10GbE is on another chip. If you hunt around for the architecture white papers, there are block diagrams. We have a number of T5220s with the XAUI cards and optical 10GbE SFP+. They go along briskly, especially with Jumbo Frames.

William D L Brown

-----Original Message-----
From: veritas-bu-boun...@mailman.eng.auburn.edu On Behalf Of Larry
Sent: 20 February 2010 02:14
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] Any gotchas with 10 GB Ethernet?

---
This e-mail was sent by GlaxoSmithKline Services Unlimited (registered in England and Wales No. 1047315), which is a member of the GlaxoSmithKline group of companies. The registered address of GlaxoSmithKline Services Unlimited is 980 Great West Road, Brentford, Middlesex TW8 9GS.
---

___
Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] Any gotchas with 10 GB Ethernet?
Oh yeah, and if you haven't bought the servers yet, Sun has some with dedicated busses for 10Gb NICs. Haven't played with them yet, but it's definitely worth considering. I want to say the 5240 has them while the 5220 doesn't.

+--
|This was sent by la...@hawkeyes.org via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
[Veritas-bu] Any gotchas with 10 GB Ethernet?
I don't know if this has changed since 5.1, but at 5.1 and earlier you would need to use /usr/openv/netbackup/NET_BUFFER_SZ (yes, SZ) and put a value greater than 64KB in there. It really doesn't matter much what you use; unless you're going over an LFN, 1048576 will work in most cases. It's not like making it too big is going to hurt anything. And, contrary to an old rumor, it is not, nor has it ever been, necessary to set this value to the same thing on client and server.

Most people won't see any effect at all from doing this (unless you're running really old NBU and doing duplication) until they get to about 4x trunked Gb or higher.
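The touch file described above can be sketched like this. The path and the 1048576 value come from the message; the scratch-directory fallback is only an assumption so the sketch is safe to run outside an actual media server:

```shell
# Create the NetBackup network buffer touch file described above.
# On a real media server this lives in /usr/openv/netbackup; the
# fallback to a temporary directory is only so the sketch can be
# run anywhere without touching a real install.
NB_DIR="${NB_DIR:-/usr/openv/netbackup}"
[ -d "$NB_DIR" ] && [ -w "$NB_DIR" ] || NB_DIR=$(mktemp -d)

# Any value greater than 64KB works; 1MB (1048576) suits most cases.
printf '1048576\n' > "$NB_DIR/NET_BUFFER_SZ"
echo "NET_BUFFER_SZ set to $(cat "$NB_DIR/NET_BUFFER_SZ")"
```

NetBackup reads the file when the next backup job starts, so no daemon restart should be needed for this particular touch file.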
Re: [Veritas-bu] Any gotchas with 10 GB Ethernet?
I haven't followed the whole thread, so excuse me if my message isn't relevant, but I read in a performance tuning guide that you should set your NET_BUFFER_SZ to 4 times the size of your SIZE_DATA_BUFFERS. Also, it is not mandatory but it is recommended to match the client and media server network buffers.

The feature is very useful for high-latency LAN/WAN links, where you will definitely see a difference. I have also used and tested it with jumbo frames, which yielded significantly improved results (although jumbo frames on their own will make a big difference).

-----Original Message-----
From: veritas-bu-boun...@mailman.eng.auburn.edu On Behalf Of Larry
Sent: 20 February 2010 02:10
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] Any gotchas with 10 GB Ethernet?
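The 4x rule above can be sketched as follows. The 262144 (256KB) data-buffer size is an example value of my own, not one from this thread, and the file paths shown in the comments are the usual NetBackup locations rather than anything the guide mandates:

```shell
# Sketch of the tuning-guide rule above: NET_BUFFER_SZ = 4 * SIZE_DATA_BUFFERS.
# On a real media server the files would typically be written to:
#   /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
#   /usr/openv/netbackup/NET_BUFFER_SZ
# 262144 (256KB) is an example data-buffer size, not a value from this thread.
SIZE_DATA_BUFFERS=262144
NET_BUFFER_SZ=$((SIZE_DATA_BUFFERS * 4))

echo "SIZE_DATA_BUFFERS=$SIZE_DATA_BUFFERS NET_BUFFER_SZ=$NET_BUFFER_SZ"
```

With a 256KB data buffer this yields a 1MB network buffer, which matches the 1048576 value suggested earlier in the thread.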
[Veritas-bu] Any gotchas with 10 GB Ethernet?
Hello All,

We are in the process of implementing 2 new Solaris media servers with 10 GB Ethernet. Are there any gotchas on the OS side or the NetBackup side I should be aware of? Any buffer settings we need to tweak, or new touch files, etc.?

Thanks in advance.

Tom Tschida
Boston Scientific

___
Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
Re: [Veritas-bu] Any gotchas with 10 GB Ethernet?
This is what we use, though it is not special to 10GbE; I've no doubt many people will have their own schemes. It's from a script that checks settings against our design:

NETWORK TUNING PARAMETERS

TCP parameters:
  tcp_wscale_always:      current: 1        recommended: 1
  tcp_tstamp_if_wscale:   current: 1        recommended: 1
  tcp_xmit_hiwat:         current: 1048576  recommended: 1048576
  tcp_recv_hiwat:         current: 1048576  recommended: 1048576
  tcp_cwnd_max:           current: 2097152  recommended: 2097152
  tcp_max_buf:            current: 4194304  recommended: 4194304
  tcp_time_wait_interval: current: 6        recommended: 6
  tcp_conn_req_max_q:     current: 8192     recommended: 8192

Sendpipes/Recvpipes:
  route x.x.x.x recvpipe: current: 1048576  recommended: 1048576
  route x.x.x.x sendpipe: current: 1048576  recommended: 1048576

I'd say the hiwat settings are the most important; tcp_cwnd_max and tcp_max_buf must be raised to allow for that. We are also implementing Jumbo Frames, which in testing gave about a 60-80% increase in throughput using iperf. It is slightly harder to test with NetBackup, as more moving parts, like disks, are involved; for many clients we cannot set it as we don't have a separate backup LAN, and some applications don't want to try it.

William D L Brown

-----Original Message-----
From: veritas-bu-boun...@mailman.eng.auburn.edu On Behalf Of Tschida, Tom (STP)
Sent: 01 February 2010 22:29
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] Any gotchas with 10 GB Ethernet?
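On Solaris 10, settings like those in the check-script output above would typically be applied with ndd(1M). This is a sketch under that assumption, using the "recommended" values from the list; it must run as root, the settings do not persist across reboot unless also placed in a startup script, and on Solaris 11 and later ipadm would be used instead:

```shell
# Sketch: applying the recommended TCP settings above with Solaris ndd(1M).
# Run as root on the media server; non-persistent until added to an init script.

# Raise the ceilings first so the per-connection values below are accepted.
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_cwnd_max 2097152

# Window scaling and timestamps, needed for windows above 64KB.
ndd -set /dev/tcp tcp_wscale_always 1
ndd -set /dev/tcp tcp_tstamp_if_wscale 1

# 1MB default send/receive windows.
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576

# Deeper listen queue for busy media servers.
ndd -set /dev/tcp tcp_conn_req_max_q 8192

# tcp_time_wait_interval is omitted here: the value "6" in the list above
# looks truncated in the archive, so I won't guess at the intended setting.

# Per-route send/receive pipes, as in the Sendpipes/Recvpipes section
# (x.x.x.x is the placeholder destination from the list above).
route change x.x.x.x -recvpipe 1048576 -sendpipe 1048576
```

These are global defaults, so they affect all TCP connections on the host, not just NetBackup traffic; that is usually acceptable on a dedicated media server.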