On 12/20/2015 08:36 AM, Mimiko wrote:
> zpool create -f -m none -o ashift=12 zfspool raidz2
> wwn-0x5xxx . (all disks).

On 12/21/2015 11:20 PM, Mimiko wrote:
> One [RAID] channel is connected to 8 x 2TB, the other channel to 8 x 1TB.
...
> zfs list
> NAME USED AVAIL
[...]

What is the part number of your cables?

JBOD for software raid [on Wheezy].

I thought the 'zpool create ...' invocation showed all 16 drives in one
raidz2 pool (?). This is not [...]
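
For reference, 'zpool status' shows how the drives were actually grouped
into vdevs. A single raidz2 of all 16 drives would look roughly like this
(a sketch; output trimmed, device names illustrative):

  zpool status zfspool
    pool: zfspool
   state: ONLINE
  config:
          NAME            STATE     READ WRITE CKSUM
          zfspool         ONLINE       0     0     0
            raidz2-0      ONLINE       0     0     0
              wwn-0x5...  ONLINE       0     0     0
              ...         (16 drives in total)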

On 21.12.2015 09:58, David Christensen wrote:
> On 12/20/2015 08:36 AM, Mimiko wrote:
> > The HDDs are connected through a SuperMicro SAS RAID AOC-USASLP-L8i
> > (PCI-E).
> > http://www.supermicro.com.tr/AOC-USASLP-L8i.cfm.htm
>
> So, one RAID card, eight SAS channels, and one 2 TB and one 1 TB drive
> per channel?
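
To see which disks actually sit behind the controller, listing the
persistent device names is usually enough (a sketch; names will differ):

  ls -l /dev/disk/by-id/ | grep wwn-
  # each wwn-0x5... symlink points at the kernel device (sda, sdb, ...)
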
On 12/19/2015 07:24 PM, David Christensen wrote:
> I ran iperf ... ~950 MB/s.
Oops -- make that 950 Mbps.
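
(For reference: 950 Mbit/s / 8 = ~119 MB/s, i.e. close to GbE wire speed,
while 400 Mbit/s is only ~50 MB/s.)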

On 12/20/2015 09:13 AM, Mimiko wrote:
> I've tested using iperf from the file server to the Windows server and
> got 400 Mbit/s.

Something is wrong.

The other [...]

On 12/21/2015 12:24 AM, David Christensen wrote:
Download a live CD image, burn two discs/USB flash drives, and try again:
https://www.debian.org/CD/live/
debian-live-8.2.0-amd64-standard.iso
I just realized that the live disc might not have iperf...
Perhaps Knoppix: [...]

On 12/20/2015 08:36 AM, Mimiko wrote:
> If by LUKS you mean Linux Unified Key Setup, then I don't use [it].

Okay.

> I've tested using iperf from the file server to the Windows server and
> got 400 Mbit/s. The other Linux server got 600 Mbit/s. Also, I've tested
> iperf between the Windows server and another SuperMicro of the same type
> with Hyper-V and got around 650 Mbit/s.
> I've tested samba speed from Windows to the other Linux [...]
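
A typical way to run such a test with iperf 2.x (the host name is
illustrative; -P opens parallel streams, which matters when measuring a
round-robin bond):

  # on the receiver:
  iperf -s
  # on the sender:
  iperf -c fileserver -t 30 -P 4
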
On 20.12.2015 02:36, Frank Pikelner wrote:
> There appear to be driver issues discussed in other threads with
> respect to the Intel driver and slow throughput due to interrupts and
> CPU offloading. You may want to review driver parameters and look at
> trying a few changes.

Yes, I've read about that.
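
The usual knobs on the igb driver are interrupt coalescing and the
offloads, both reachable through ethtool (a sketch; the values are
illustrative, not recommendations):

  ethtool -c eth0               # show current interrupt coalescing
  ethtool -C eth0 rx-usecs 100  # coalesce RX interrupts
  ethtool -k eth0               # show offload settings
  ethtool -K eth0 gro off       # e.g. toggle generic receive offload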

Hello.

After reviewing the test results, I've modified smb.conf: I added
'max protocol = SMB2' and removed 'SO_RCVBUF=8192 SO_SNDBUF=8192' from
the socket options. The read speed from this server increased to 40 MB/s;
the write speed to this server increased to 30 MB/s.
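
For anyone following along, the relevant smb.conf fragment would look
roughly like this (a sketch for the Samba 3.6.x shipped with Wheezy):

  [global]
     # SMB2 negotiates much better throughput than the old NT1 default:
     max protocol = SMB2
     # no fixed SO_RCVBUF/SO_SNDBUF -- let the kernel autotune buffers:
     socket options = TCP_NODELAY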

Hello.

I've bonded the two onboard Intel 82576 gigabit NICs on a SuperMicro
server for load balancing (round-robin). It is working, but the transfer
rate is about 10-20 MB/s, while on the same type of server the same
configuration under Windows gives around 100 MB/s.

cat /proc/net/bonding/bond0
[...]
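
For context, a round-robin bond on Wheezy is normally set up with the
ifenslave package, roughly like this (a sketch; the address and NIC
names are examples):

  # /etc/network/interfaces
  auto bond0
  iface bond0 inet static
      address 192.0.2.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode balance-rr
      bond-miimon 100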

On 18.12.2015 16:32, Michael Beck wrote:
> Any lost packets?

ifconfig
bond0     Link encap:Ethernet  HWaddr
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:654308767 errors:0 dropped:5238 overruns:0 frame:0
          TX packets:761897714 errors:0 dropped:0 overruns:0 [...]
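
The dropped counter is easier to watch over time with iproute2 or the
per-NIC statistics (interface names are examples):

  ip -s link show bond0            # bond-level RX/TX and drop counters
  ethtool -S eth0 | grep -i drop   # NIC-level drop/miss counters (igb)
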
On Fri, Dec 18, 2015 at 07:44:58PM +0200, Mimiko wrote:
> iperf -c ip
>
> Client connecting to ip, TCP port 5001
> TCP window size: 23.5 KByte (default)
>
> [ 3] local ip port [...]