Quoting Marco van Wieringen <[email protected]>:
> <lst_hoe02 <at> kwsoft.de> writes:
>>
>>> Officially the query should return two items. I myself use the
>>> following (which is a variation on the built-in CopyUnCopiedJobs).
>>
>> I will try with two results from the SQL query. From reading the
>> manual I thought we only need the JobId:
>>
>>   SQLQuery
>>   The SQLQuery selection type uses the Selection Pattern of the Dir
>>   Job as an SQL query to obtain the JobIds to be migrated. The
>>   Selection Pattern must be a valid SELECT SQL statement for your SQL
>>   engine, and it must return the JobId as the first field of the
>>   SELECT.
>>
>> Adding the starttime to the select let the copy start as expected. Do
>> you need a traceback anyway? If yes, I will try to revert the change
>> later and create a useful dump.
>
> Ok, great that it works. It might indeed be interesting to see what
> happens when you don't give it that data. It is a bug indeed, as it
> should behave correctly and not crash the SD, but good that you have a
> working solution now. At least file a bug report; if you have a trace,
> great, but you can also just put a way to reproduce it, with the above
> info, in the bug report.
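For reference, a Director Job resource using the SQLQuery selection type might look like the following sketch. The resource names, pools, and WHERE clause are made-up examples, not from this thread; only the shape of the SELECT (JobId as the first field, with StartTime added as a second column, which is what made the copy start as expected above) is the point:

```
# Sketch of a Bareos Director copy job (names and query are hypothetical)
Job {
  Name = "copy-incr-to-tape"
  Type = Copy
  Selection Type = SQLQuery
  # JobId must be the first field of the SELECT; StartTime added as second column
  Selection Pattern = "SELECT Job.JobId, Job.StartTime FROM Job
                       WHERE Job.Type = 'B' AND Job.JobStatus = 'T'
                       ORDER BY Job.StartTime"
  Pool = "disk-pool"
  Messages = Standard
}
```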
Strangely enough, I now cannot reproduce the crash. I'm nearly 100% sure the only thing that changed was the second parameter, but now when I revert it the copy job works anyway... Could it be related to the fact that on my first try a new volume had to be labeled/recycled, which is now not the case anymore?
>> Another question: is it recommended/possible to use data spooling for
>> a copy to tape?
>
> That kind of depends. For most situations it is not faster: most
> people don't have faster spooling disks than their normal disks. I
> could imagine that SSDs would help there, but they may also wear out
> faster. Currently spooling is not very elegant; it is kind of
> single-threaded and as such halves your throughput, e.g. read, write,
> read, write, etc. It does have the advantage that only one job can
> despool at a time, so if you use it, jobs won't interleave their data
> on tape. That is nicer when restoring from tape, as the drive then
> doesn't have to read a lot of data and skip what doesn't belong to a
> certain backup.
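The spooling setup described here is controlled by a handful of directives: Spool Data in the Director's Job resource, and Spool Directory plus Maximum Spool Size in the Storage Daemon's Device resource. A minimal sketch, with hypothetical names, paths, and sizes:

```
# Director Job resource: enable data spooling for this job (hypothetical name)
Job {
  Name = "copy-incr-to-tape"
  Spool Data = yes
  ...
}

# Storage Daemon Device resource: put the spool area on the fast SSD
Device {
  Name = "lto-drive"
  Spool Directory = /ssd/spool      # assumed mount point of the SSD
  Maximum Spool Size = 200 GB      # example limit; size to your SSD
  ...
}
```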
Ok, we use it exactly that way. We have an SSD as spool and a big (slower) disk array for (daily) incremental backups, which are then copied to (disaster) tape. We mostly used spooling for improved concurrency from multiple clients, to avoid stressing the tape device with start-stop loops, but for the copy part our main weak point seems to be the lack of concurrent reads from the same disk device...
Thanks

Andreas
