My apologies, I meant to say “Mark-Jan”!

> On Mar 9, 2019, at 10:40 AM, Joe Martin <[email protected]> wrote:
> 
> Hi Mark, 
> 
> I am intrigued by your response, and have obtained a tree view of my system 
> as you suggested to Paul.  However, I'm unfamiliar with the tree view: I 
> don't understand how to check the number of PCIe lanes available to the 
> disk controller and disks, or how to check how many PCIe bridges sit in 
> between in my motherboard configuration.  
> 
> I have a screenshot of the tree view showing my 10G Ethernet connection (it 
> is 220KB in size, so I didn't attach it here), but I am not familiar with 
> how to determine what you asked about from the tree, nor what to do about 
> the configuration.  Is the configuration fixed, or can it be changed?  
> 
> If so, then perhaps your alternative suggestion of booting from a USB stick 
> into a ramdisk is a viable route?  Unfortunately, I'm not familiar with the 
> details of how to do that, so a couple of brief comments on implementing 
> that process would help me judge whether it is the only viable alternative 
> given the present hardware configuration.  
> 
> Joe
> 
>> On Mar 9, 2019, at 5:14 AM, Mark-Jan Bastian via USRP-users 
>> <[email protected]> wrote:
>> 
>> Hi Paul,
>> 
>> I can record from the X310 to an NVMe disk on x4 PCIe at 800 MB/s
>> for a few minutes. There is still a risk of 'O's (overflows) appearing.
>> 
>> The first thing to check is the number of PCIe lanes available to the disk
>> controller and disks, and how many (and which) PCIe bridges sit in between
>> in your motherboard configuration. Try to avoid other traffic over these
>> PCIe bridges. Use lspci -vt for a tree view.
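>> For example (the bus addresses and devices below are purely illustrative,
>> not from any particular board):
>>
>>   $ lspci -vt        # tree: which bridges sit between root ports and devices
>>   -[0000:00]-+-01.0-[01]----00.0  LSI SAS3008       # HBA behind one bridge
>>              +-03.0-[02]----00.0  Intel X550 10GbE
>>
>>   $ sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
>>   LnkCap: ... Speed 8GT/s, Width x8    # lanes the device can negotiate
>>   LnkSta: ... Speed 8GT/s, Width x8    # lanes actually negotiated
>>
>> If LnkSta reports a smaller width than LnkCap, the slot or a bridge in
>> between is limiting the link.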
>> 
>> Then one can benchmark writes from DRAM to disk. Perhaps you will not need
>> a filesystem for your very simple storage purpose.
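>> A minimal sketch of such a benchmark (assuming /dev/md0 is the target
>> array and holds nothing you want to keep; this overwrites it):
>>
>>   # sequential write straight to the block device, bypassing the page cache
>>   $ sudo dd if=/dev/zero of=/dev/md0 bs=64M count=256 oflag=direct status=progress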
>> Ultimately you can boot from some other media (a USB stick, or a CD-ROM
>> loaded into a ramdisk) just to make sure there is absolutely no need to
>> read any other data on said disks, via cached pages or otherwise.
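>> For example, most live distributions can copy themselves into RAM at boot;
>> the exact kernel parameter depends on the distribution:
>>
>>   # Debian-style live images (live-boot):
>>   linux ... boot=live toram
>>   # Arch/SystemRescue-style images:
>>   linux ... copytoram
>>
>> After booting this way the boot medium can be removed, and the OS never
>> needs to touch the data disks.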
>> 
>> Hiccups from System Management Mode (SMM) or other unexpected interrupt
>> sources should be minimized. Other networking code and chatter may need to
>> be reduced, as may SMM-related thermal management events in the BIOS.
>> First tune everything for maximum performance, then optimize for very
>> constant write performance.
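>> A sketch of the usual knobs on Linux (assumes the cpupower utility is
>> installed; exact tooling varies by distribution):
>>
>>   $ sudo cpupower frequency-set -g performance   # pin the CPU frequency governor
>>   $ sudo cpupower idle-set -D 0                  # disable deeper C-states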
>> 
>> Mark-Jan
>> 
>> On Sat, Mar 09, 2019 at 12:32:05PM +0100, Paul Boven via USRP-users wrote:
>>> Hi,
>>> 
>>> I'm trying to record the full X310 bandwidth, for a few hours, without any
>>> missed samples, which of course is a bit of a challenge. Has anyone here
>>> already achieved this?
>>> 
>>> We're using a TwinRX, so initially I wanted to record 2x 100MS/s (from both
>>> channels), which amounts to 800MB/s, or 6.4Gb/s. At first I tried
>>> uhd_rx_cfile, but I have been unable to get it into a state where it
>>> doesn't show an 'O' every few seconds at these speeds.
>>> 
>>> As a recorder I have a SuperMicro 847 chassis with 36 disks (Seagate
>>> Ironwolf 8TB T8000VN0022, 7200rpm). In this particular server, the disks are
>>> connected through an 'expander' backplane, from a single HBA (LSI 3008). CPU
>>> is dual Xeon 4110, 2.1 GHz, 64 GB of RAM.
>>> 
>>> At first I tried a 6-disk pool (raidz1), and eventually ended up creating a
>>> huge 36-disk ZFS stripe, which in theory should have no trouble with the
>>> throughput, but it certainly kept dropping packets.
>>> 
>>> Note that recording to /dev/shm/file works perfectly without dropping
>>> packets, until the point where memory is full.
>>> 
>>> Given that ZFS has quite a bit of (good) overhead to safeguard your data, I
>>> then switched to creating an mdadm RAID-0 with 18 of the disks. (Why not all
>>> 36? I was really running out of time!)
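>>> For reference, such a stripe is created along these lines (device names
>>> illustrative only):
>>>
>>>   $ sudo mdadm --create /dev/md0 --level=0 --raid-devices=18 /dev/sd[b-s]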
>>> 
>>> At that point I also found 'specrec' from gr-analyze, which seems more
>>> suitable. But even after enlarging its circular buffer to the largest
>>> supported value, it would only average a write speed of about 300MB/s.
>>> 
>>> In the end I had to settle for recording at only 50MS/s (200MB/s) from a
>>> single channel, a far cry from the 2x 6.4Gb/s I'm ultimately looking to
>>> record. Although I did get more than an hour of perfect data out of it, over
>>> time the circular buffer filled up in bursts, and within 2 hours the program
>>> exited after exhausting its buffers. Restarting the application made it work
>>> like new again, with the same gradual decline in performance.
>>> 
>>> Specrec, even when tweaking its settings, doesn't really take advantage of
>>> the large amount of memory in the server. As a next step, I'm thinking of
>>> adapting specrec to use much larger buffers, so that writes are at least in
>>> the range of megabytes to tens of megabytes. From earlier experience, it is
>>> also important to flush your data to disk often, so that the resulting
>>> interruptions are more frequent, but each short enough not to cause the
>>> receive buffers to overflow.
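>>> One way to approximate that without modifying the application is the
>>> kernel's writeback thresholds (the values below are just a starting point):
>>>
>>>   # start background writeback early and cap dirty data, so flushes are
>>>   # frequent and short rather than rare and long
>>>   $ sudo sysctl -w vm.dirty_background_bytes=67108864   # 64 MB
>>>   $ sudo sysctl -w vm.dirty_bytes=268435456             # 256 MB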
>>> 
>>> In terms of network tuning, all recording was done with MTU 9000, and with
>>> wmem and rmem at the recommended values. All recordings were done as
>>> interleaved shorts.
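>>> For completeness, that tuning was along these lines (the interface name is
>>> illustrative; the buffer sizes are those recommended in the UHD manual):
>>>
>>>   $ sudo ip link set enp3s0f0 mtu 9000
>>>   $ sudo sysctl -w net.core.rmem_max=33554432
>>>   $ sudo sysctl -w net.core.wmem_max=33554432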
>>> 
>>> Does anyone have hints or experiences to share?
>>> 
>>> Regards, Paul Boven.
>>> 
>> 
> 


_______________________________________________
USRP-users mailing list
[email protected]
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
