On 4/6/2010 11:26 AM, Kern Sibbald wrote:
> Hello,
>
> The problem is that there can be anywhere from 1 to 20 or so different
> writers, so there can be up to 20 (maybe even more) different backups
> followed by metadata, and then when the whole backup is done, there is yet
> more metadata for the "backup" rather than just for a writer, and that backup
> metadata must precede all the writer metadata.  Consequently, I don't think
> that closing a Volume at each instance of metadata will be practical.
>    

I see. I was under the impression that the metadata would be spooled 
locally by the FD and only sent once at the very end.

> I've decided to put the metadata in the catalog, which really disgusts me, but
> I see no other choice to have a general solution that will work with both
> disk and tape Volumes (as well as future devices such as the cloud).
>    

Won't that make for a really huge catalog? What if the metadata were placed
in the catalog only temporarily, and the end of the backup job triggered a
secondary job that wrote the catalog record(s) to a volume and, on success,
deleted them from the catalog? The secondary job would somehow have to be
associated with the backup job, though, so that a restore would first restore
the catalog record(s) and then proceed with the rest of the restore.
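
Just to make that concrete, here is a rough C sketch of the lifecycle I have
in mind. Everything below is invented for illustration (meta_record,
stage_metadata, secondary_job are not existing Bacula APIs), and stdout
simply stands in for a volume:

    /* Rough sketch only -- all names are hypothetical, not Bacula code. */
    #include <stdio.h>
    #include <stdlib.h>

    struct meta_record {              /* stand-in for a temporary catalog row */
        int job_id;
        char data[64];
        struct meta_record *next;
    };

    static struct meta_record *staged = NULL;    /* "catalog" holding area */

    /* The backup job stages its metadata temporarily (an INSERT, conceptually). */
    static void stage_metadata(int job_id, const char *data)
    {
        struct meta_record *r = malloc(sizeof(*r));
        r->job_id = job_id;
        snprintf(r->data, sizeof(r->data), "%s", data);
        r->next = staged;
        staged = r;
    }

    /* Hypothetical secondary job: copy the staged records to a volume and
     * delete them from the catalog only if the write succeeded. */
    static int secondary_job(int job_id, FILE *volume)
    {
        for (struct meta_record *r = staged; r; r = r->next)
            if (r->job_id == job_id &&
                fprintf(volume, "job %d meta: %s\n", r->job_id, r->data) < 0)
                return -1;                      /* write failed: keep the records */

        struct meta_record **p = &staged;       /* success: purge this job's rows */
        while (*p) {
            if ((*p)->job_id == job_id) {
                struct meta_record *dead = *p;
                *p = dead->next;
                free(dead);
            } else {
                p = &(*p)->next;
            }
        }
        return 0;
    }

    int main(void)
    {
        stage_metadata(42, "writer 1 metadata");
        stage_metadata(42, "backup-level metadata");
        int rc = secondary_job(42, stdout);     /* stdout stands in for a volume */
        printf("secondary job %s\n", rc == 0 ? "OK, temporary records purged"
                                             : "failed, records kept");
        return rc;
    }

The missing piece is still the association with the backup job, so that a
restore knows which volume holds the metadata and reads it first.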

> Kern
>
> On Tuesday 06 April 2010 16:35:58 Josh Fisher wrote:
>    
>> On 4/2/2010 10:36 AM, Kern Sibbald wrote:
>>      
>>> Hello,
>>>
>>> Thanks to those who sent in suggestions.  While talking to Eric about
>>> this, I came up with the solution that appeals to me the most:
>>>
>>> We add new functionality between the FD and the SD.
>>>
>>> 1. FD asks SD to Open named Spool file (all subsequent data will go there)
>>> 2. SD sends back spool name.
>>> 3. FD can ask to stop spooling.
>>> 4. The FD can then start spooling with a new name (item 1) if it wants.
>>> 5. The FD can ask the SD to commit a specified named spool file.
>>> 6. The SD sends back the status.
>>> 7. For the moment, only one spool can be open at a time.
>>>        
>> I don't know if this is feasible, but perhaps something like the
>> following is possible:
>>
>> 1. FD sends file data as usual to SD
>> 2. FD tells SD it is done with data and requests to begin sending meta info
>> 3. SD receives request from FD, closes the last volume written to, and
>> acquires another volume not already written to by this job
>> 4. SD ACKs FD's request and FD begins transmitting meta info in same
>> manner as file data
>> 5. Job is finished in normal manner, allowing FD to properly close VSS
>> session
>>
>> This would be independent of spooling and allow the entire backup with
>> meta info to be within the context of a single VSS session. Also, since
>> the meta info is always on a different volume, it allows the meta info
>> to be read first during a restore.
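
To illustrate what I meant there, here is a minimal sketch of the SD side of
that sequence. The helper names (acquire_unused_volume, begin_metadata_phase)
are invented and the volume handling is faked; it only shows the ordering
described in steps 3-5:

    /* Rough sketch of the SD side of the flow above; names are invented. */
    #include <stdio.h>

    struct volume { const char *name; };

    static struct volume *acquire_unused_volume(void)   /* hypothetical helper */
    {
        static struct volume meta_vol = { "MetaVol-0001" };
        return &meta_vol;          /* a volume this job has not yet written to */
    }

    /* Steps 3-4: the FD asked to switch to metadata; close the data volume,
     * acquire a fresh one, then ACK so the FD starts sending meta info. */
    static struct volume *begin_metadata_phase(struct volume *data_vol)
    {
        printf("SD: closing data volume '%s'\n", data_vol->name);
        struct volume *meta_vol = acquire_unused_volume();
        printf("SD: metadata will be written to '%s'\n", meta_vol->name);
        printf("SD -> FD: OK, send meta info\n");        /* the ACK */
        return meta_vol;
    }

    int main(void)
    {
        struct volume data_vol = { "DataVol-0007" };
        struct volume *meta_vol = begin_metadata_phase(&data_vol);
        printf("a restore would read '%s' before '%s'\n",
               meta_vol->name, data_vol.name);
        return 0;
    }
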
>>
>>
>>      
>
>    
