Hi 

In my Bareos setup I have 3+ virtual devices per client. These virtual 
devices share one media type, which points to one physical 
location (a directory).
Every client has at least 4 pools: 2 pools share one virtual device (Incremental 
and Differential), and the other 2 pools each use one of the remaining two 
devices (one pool per virtual device).
So in this setup I have one media type covering these 3+ virtual devices and 
4+ pools.
Each of the 3+ virtual devices points to a single directory per client, and 
the 4+ pools per client are assigned to these 3+ devices.
As long as I keep one media type per physical location (directory), I have no 
problem restoring with any of these 3+ devices.
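To make the layout concrete, here is a minimal sketch of what I mean (all 
resource names, paths, and the media type are invented for illustration, not 
my real config):

```
# bareos-sd.conf -- one directory per client, one media type,
# several virtual file devices pointing at the same directory
Device {
  Name = client1-inc                               # used by Incremental/Differential pools
  Media Type = File-client1
  Archive Device = /var/lib/bareos/storage/client1
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}
Device {
  Name = client1-full                              # used by a dedicated Full pool
  Media Type = File-client1                        # same media type, same directory
  Archive Device = /var/lib/bareos/storage/client1
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}

# bareos-dir.conf -- pools per client, all tied to the same media type
Pool {
  Name = client1-Incremental
  Pool Type = Backup
  Storage = Client1IncStorage    # Director Storage resource for client1-inc
}
Pool {
  Name = client1-Full
  Pool Type = Backup
  Storage = Client1FullStorage   # Director Storage resource for client1-full
}
```

Because every device for client1 carries the same media type, a restore can 
use whichever of the 3+ devices is free.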
But now I have discovered that these 4+ pools per client make things more 
complicated, and I would like to reduce this setup to around 13 pools in 
total. That reduction, however, requires keeping many different media types in 
one pool, and I found information in the Bareos manual that this can cause 
problems, e.g. with migration.
Here are the relevant passages from the manual:

Each Pool into which you migrate Jobs or Volumes must contain Volumes of only 
one Media Type (Dir → Storage).

Bareos permits Pools to contain Volumes of different Media Types. However, when 
doing migration, this is a very undesirable condition. For migration to work 
properly, you should use Pools containing only Volumes of the same Media Type 
for all migration jobs.

Above, we discussed how you could have a single device named FileBackupSd 
Device that writes to volumes in /var/lib/bareos/storage/. You can, in fact, 
run multiple concurrent jobs using the Storage definition given with this 
example, and all the jobs will simultaneously write into the Volume that is 
being written. Now suppose you want to use multiple Pools, which means multiple 
Volumes, or suppose you want each client to have its own Volume and perhaps its 
own directory such as /home/bareos/client1 and /home/bareos/client2 ... . 
With the single Storage and Device definition above, neither of these two is 
possible. Why? Because Bareos disk storage follows the same rules as tape 
devices. Only one Volume can be mounted on any Device at any time. If you want 
to simultaneously write multiple Volumes, you will need multiple Device 
resources in your Bareos Storage Daemon configuration and thus multiple Storage 
resources in your Bareos Director configuration. Okay, so now you should 
understand that you need multiple Device definitions in the case of different 
directories or different Pools, but you also need to know that the catalog data 
that Bareos keeps contains only the Media Type and not the specific storage 
device. This permits a tape for example to be re-read on any compatible tape 
drive, the compatibility being determined by the Media Type (Media Type 
(Dir → Storage) and Media Type (Sd → Device)). The same applies to disk storage. Since a 
volume that is written by a Device in say directory /home/bareos/backups cannot 
be read by a Device with an Archive Device (Sd → Device) = /home/bareos/client1, 
you will not be able to restore all your files if you give both those devices 
Media Type (Sd → Device) = File. During the restore, Bareos will simply choose the 
first available device, which may not be the correct one. If this is confusing, 
just remember that the Director has only the Media Type and the Volume name. 
It does not know the Archive Device (Sd → Device) (or the full path) that is 
specified in the Bareos Storage Daemon. Thus you must explicitly tie your 
Volumes to the correct Device by using the Media Type.
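
As I understand that passage, the media type is the only link between a volume 
in the catalog and a device in the storage daemon. A sketch of that tie, with 
invented names (Client1Storage, File-client1, the address and password are all 
examples, not real values):

```
# bareos-dir.conf -- Director side
Storage {
  Name = Client1Storage
  Address = sd.example.com
  Password = "secret"
  Device = client1-inc           # must match a Device name in the SD
  Media Type = File-client1      # the only device information the catalog
                                 # records with each volume
}

# bareos-sd.conf -- Storage daemon side
Device {
  Name = client1-inc
  Media Type = File-client1      # ties volumes in this directory to this device
  Archive Device = /var/lib/bareos/storage/client1
  Device Type = File
}
```

So at restore time the Director only knows "this volume has media type 
File-client1" and picks any Storage/Device pair with that media type.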

Could you explain why I can't keep many different media types in one pool, 
if each media type points to the right virtual device (directory)?
And why might migration be a problem in my case, when the media type 
correctly ties each volume to the right directory?
Please help me. Thank you in advance.




-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
