Based on the logs, this is simply an issue with the available storage space.
As per the disk statistics:

size: 10994870251520 => Size of /dev/sdb
used: 10555075652608 => Used space on the disk

There is some reserved space and also a slight overhead from fragmentation. There were a few recent fixes for space utilization, but that isn't the case here. It is simply a case of erasing/recycling/deleting some of the older vcartridges.

Note that there is a sizing mismatch, and you would run into this issue from time to time for the following reason:

Number of vcartridges: 198
Size of each vcartridge: 2.5 TB

In theory we would need around 500 TB of physical space to store data written to all the vcartridges. The deduplication ratio is only 2:1, and compression is disabled. As of now, for 20 TB of virtual tape data we are utilizing around 10 TB of disk.

You should try 100 or 200 GB vcartridges (set MaxVCartSize=100 in /quadstorvtl/etc/quadstor.conf), then reduce the number of vcartridges to match the expected size of the vcartridge data. For example, at the current dedupe ratio you can create around 200 such cartridges and then a policy which rotates the older vcartridges.

For now the suggestion is to delete some of the older vcartridges, create a smaller sized vcartridge, and repeat this until all the vcartridges are of the smaller size. Then, based on the deduplication ratios, you can easily either increase or decrease the number of vcartridges.

On Tue, Feb 7, 2017 at 8:44 PM, Gary Eastwood <horon1...@gmail.com> wrote:
> Many thanks,
> I have sent over the logs as requested.
> I did find that article too, which led me to run through and update from
> 3.0.6 to 3.0.11 just in case :)
>
> On Tuesday, 7 February 2017 11:38:30 UTC, quadstor wrote:
>>
>> This seems similar to
>> https://forums.veeam.com/tape-f29/vtl-unable-to-start-tape-backup-session-startsessiontape-t21014.html
>> which in fact was with Veeam + QUADStor VTL (the open source edition
>> 2.2.x series).
>>
>> We will take a look at the fix added to 2.2.x and see if that is
>> missing from the current code.
>>
>> Does this error occur at the start of a backup or midway?
>>
>> On Tue, Feb 7, 2017 at 5:05 PM, QUADStor VTL Support
>> <vtlsupp...@quadstor.com> wrote:
>> > Please send us the diagnostics logs (HTML UI -> System Page -> Run
>> > Diagnostics -> Submit) to vtlsupp...@quadstor.com
>> >
>> > The VTL software does not use any filesystems except for the install
>> > files, which reside under /quadstorvtl/. xfs_repair wouldn't help
>> > unless you have /quadstorvtl on /dev/sda1 and suspect a corrupt
>> > filesystem. The error here, however, occurs in the VTL IO path; it
>> > seems that the VTL doesn't send a response which Veeam expects.
>> >
>> > On Tue, Feb 7, 2017 at 4:49 PM, Gary Eastwood <horon1...@gmail.com>
>> > wrote:
>> >> Running Veeam v9.5 and have started to get the following error on
>> >> all our tape jobs:
>> >>
>> >> 07/02/2017 11:12:57 :: WriteTapeHeader failed
>> >> Tape fatal error.
>> >> Tapemark error: Tape error: '23' (Data error (cyclic redundancy
>> >> check).)
>> >> Tape fatal error.
>> >>
>> >> Have checked all of Veeam's local storage for consistency using
>> >> chkdsk (they're Windows OSes) and found no errors.
>> >>
>> >> Ran xfs_repair on the QUADStor server and got the following output:
>> >>
>> >> [root@QuadStorVTL prep]# umount /dev/sda1
>> >> [root@QuadStorVTL prep]# xfs_repair -n /dev/sda1
>> >> Phase 1 - find and verify superblock...
>> >> Phase 2 - using internal log
>> >>         - zero log...
>> >>         - scan filesystem freespace and inode maps...
>> >>         - found root inode chunk
>> >> Phase 3 - for each AG...
>> >>         - scan (but don't clear) agi unlinked lists...
>> >>         - process known inodes and perform inode discovery...
>> >>         - agno = 0
>> >>         - agno = 1
>> >>         - agno = 2
>> >>         - agno = 3
>> >>         - process newly discovered inodes...
>> >> Phase 4 - check for duplicate blocks...
>> >>         - setting up duplicate extent list...
>> >>         - check for inodes claiming duplicate blocks...
>> >>         - agno = 0
>> >>         - agno = 1
>> >>         - agno = 2
>> >>         - agno = 3
>> >> No modify flag set, skipping phase 5
>> >> Phase 6 - check inode connectivity...
>> >>         - traversing filesystem ...
>> >>         - traversal finished ...
>> >>         - moving disconnected inodes to lost+found ...
>> >> Phase 7 - verify link counts...
>> >> No modify flag set, skipping filesystem flush and exiting.
>> >> [root@QuadStorVTL prep]# mount /dev/sda1
>> >>
>> >> Not sure what's expected there, but it doesn't seem to have flagged
>> >> any errors unless I'm doing it wrong.
>> >>
>> >> Was running version 3.0.6 of the VTL software; have upgraded it to
>> >> 3.0.11 but am still getting the same error.
>> >>
>> >> Any help would be greatly appreciated.

--
You received this message because you are subscribed to the Google Groups "QUADStor VTL" group.
To unsubscribe from this group and stop receiving emails from it, send an email to quadstor-vtl+unsubscr...@googlegroups.com. For more options, visit https://groups.google.com/d/optout.
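
A back-of-the-envelope sketch of the capacity arithmetic from the top reply. The figures (198 cartridges, 2.5 TB each, 2:1 dedupe, ~10 TB on /dev/sdb) come from this thread; the helper function is purely illustrative and is not part of any QUADStor tooling.

```python
# Sketch of the vcartridge sizing mismatch described in the reply above.
# Numbers are from the thread; physical_needed() is a hypothetical helper.

def physical_needed(num_carts, cart_size_tb, dedupe_ratio, compression=1.0):
    """Physical disk (TB) needed if every vcartridge is written full."""
    virtual_tb = num_carts * cart_size_tb
    return virtual_tb / (dedupe_ratio * compression)

# Current configuration: 198 cartridges x 2.5 TB each.
virtual_total = 198 * 2.5                         # ~495 TB virtual ("around 500 TB")
need = physical_needed(198, 2.5, dedupe_ratio=2.0)  # ~247.5 TB physical at 2:1 dedupe

# Available: /dev/sdb reports 10994870251520 bytes, i.e. roughly 10 TiB.
disk_tib = 10994870251520 / 2**40

print(f"virtual capacity:            {virtual_total:.1f} TB")
print(f"needed at 2:1 dedupe:        {need:.1f} TB")
print(f"physically available:        {disk_tib:.1f} TiB")

# Suggested fix: 100 GB vcartridges. About 200 of them give 20 TB of
# virtual capacity, i.e. ~10 TB physical at 2:1 dedupe -- matching the disk.
suggested = physical_needed(200, 0.1, dedupe_ratio=2.0)
print(f"needed with 200 x 100 GB:    {suggested:.1f} TB")
```

This makes the mismatch concrete: the configured virtual capacity exceeds the backing disk by roughly a factor of 25 even after deduplication, which is why the pool fills up unless older vcartridges are rotated out.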