Hi Volker...

The enclosure is the 5-bay Orico 9558U3 from PBtech.

At the NUC end (lspci):

00:14.0 USB controller: Intel Corporation 8 Series USB xHCI HC (rev 04)
00:1d.0 USB controller: Intel Corporation 8 Series USB EHCI #1 (rev 04)

At the enclosure end (lsusb):

Bus 002 Device 002: ID 152d:0539 JMicron Technology Corp. / JMicron USA Technology Corp. JMS539 SuperSpeed SATA II 3.0G Bridge
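
Since you mention throughput pain: it's worth checking whether the
bridge ends up on the uas driver or plain usb-storage, and whether the
resets you describe are landing in the kernel log.  Standard commands,
nothing box-specific:

root@nuc:~# lsusb -t
root@nuc:~# lspci -k -s 00:14.0
root@nuc:~# dmesg | grep -iE 'xhci|uas|reset'

The first shows which storage driver each USB device landed on, the
second which kernel driver has the xHCI controller, and the third
catches any lockups/resets being logged.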

I had a conversation with a friend who was doing a similar thing with
good results, but I put it on the back burner because primary storage
on USB just sounds wrong!

Then I had a customer job that needed 10TB+ of slack space, so I went
shopping on the basis that I could bill some of it out and/or Trademe
the lot if it didn't pan out.  (I already had the NUC.)

I originally used 5x 3TB Seagate Barracuda drives and got OK
performance on read, but writes saw-toothed down as low as 10MB/s on
sustained transfers, so I flicked them and put in 5x 4TB WD Reds.  (The
friend who suggested this used 4TB WD Reds as well.)

Disk read performance to one of the LVM volumes, through the code path
LVM -> mdadm -> USB:

root@nuc:~# hdparm -Tt /dev/orico/files
/dev/orico/files:
 Timing cached reads:   13002 MB in  2.00 seconds = 6505.64 MB/sec
 Timing buffered disk reads: 470 MB in  3.00 seconds = 156.55 MB/sec
root@nuc:~#

More than enough to make one client on a 1Gb/s network happy.
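
For reference, the stack underneath that was put together roughly like
this.  From memory, so treat the device names, RAID level, LV size and
filesystem as indicative rather than gospel; only the orico/files
naming is the real thing:

root@nuc:~# mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
root@nuc:~# pvcreate /dev/md0
root@nuc:~# vgcreate orico /dev/md0
root@nuc:~# lvcreate -L 10T -n files orico
root@nuc:~# mkfs.ext4 /dev/orico/files

Nothing clever in it: mdadm sees the five USB disks as ordinary block
devices, and LVM sits on top of the md device as usual.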

Small file write performance is as you'd expect:

root@nuc:/data/files# dd if=/dev/zero of=outputtest bs=8k count=10k
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 0.0681586 s, 1.2 GB/s

ie: straight into ram. :-)
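
If you want dd to report the array rather than the cache, GNU dd's
conv=fdatasync makes it flush the file before printing the rate, e.g.:

root@nuc:/data/files# dd if=/dev/zero of=outputtest bs=8k count=10k conv=fdatasync

The flush is included in the timing, so the MB/s figure it prints is
much closer to what the spindles can actually sustain.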

Larger files are pretty respectable given the code path and software
RAID over USB:

root@nuc:/data/files# dd if=/dev/zero of=outputtest bs=8k count=100k
102400+0 records in
102400+0 records out
838860800 bytes (839 MB) copied, 15.4919 s, 54.1 MB/s

Which equates to about 550Mb/s over the wire, going by the old
10-bits-per-byte rule of thumb for protocol overhead (54.1 MB/s x 10 ≈
540Mb/s).  Ish, sorta.

If it were multi-user with large files I'm picking it'd get IO bound,
but as a big storage bucket with a mini hypervisor bolted on the side
it's neat.

The only issue I've had is that sometimes the USB/mdadm stack wouldn't
come up in time for LVM on restart, so I've put a script in rc.local
that kicks the LVM into life and restarts the NFS server after a 10s
delay, which solved that.
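
The rc.local addition is nothing flashy; from memory it boils down to
something like this (nfs-kernel-server is the Debian/Ubuntu service
name, adjust to taste):

( sleep 10
  vgchange -ay orico
  mount -a
  service nfs-kernel-server restart
) &

Backgrounding it with & keeps the rest of the boot from waiting on the
10s sleep.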

If you need grunt in a NAS/hypervisor you're obviously better off with
direct-attached storage and a Xeon/i7, but it's a nice geeky option for
light processor loads and saving a few $ on power every month. :-)


Cheers, Chris H.



On 28/06/16 11:47, Volker Kuhlmann wrote:
> On Mon 27 Jun 2016 22:17:34 NZST +1200, Chris Hellyar wrote:
> 
> Wow I'm impressed. Could you give some more info re model numbers and
> approx price, for the Orinoco tower too?
> 
> What exactly is the USB3 hardware you are using (chipsets, where)? Only
> recently Linux-USB3 was very painful when it comes to throughput, esp
> sustained, with some driver locking up, timing out, and auto-resetting
> frequently.
> 
> Ta,
> 
> Volker
> 
