That's not huge, but I know from experience that it takes time to copy.
We employ various methods to maintain our farm of TDB stores.
Apply each update to more than one TDB at a time (think loose DTC, i.e. best-effort distributed transactions rather than two-phase commit).
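As a minimal sketch of that fan-out idea, assuming the TDBs sit behind TDB-backed SPARQL endpoints (the endpoint URLs and update below are hypothetical, and failures are just reported, not coordinated):

```shell
# Fan one SPARQL Update out to several TDB-backed endpoints, best effort
# ("loose DTC"): no two-phase commit, a failed node is simply flagged.
UPDATE='INSERT DATA { <http://example/s> <http://example/p> "v" }'
ENDPOINTS="http://node1:3030/ds/update http://node2:3030/ds/update"

for ep in $ENDPOINTS; do
  echo "applying update to $ep"
  # Real call would be something like (uncomment with live endpoints):
  # curl --fail -X POST --data-urlencode "update=$UPDATE" "$ep" \
  #   || echo "update failed on $ep, needs replay"
done
```

A node that misses an update has to be caught up separately, e.g. from the journal files described below.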
Periodically export the TDB to a streamed flat file and re-import it elsewhere. The file can be compressed, which costs CPU but saves on the network. If you name your graphs after a period of time, e.g. the month, you can export just the latest month.
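A rough sketch of that export/import cycle, assuming Jena's tdbdump/tdbloader are on PATH; the store location and the per-month graph naming scheme are my own placeholders:

```shell
# Periodic export of a TDB store, gzipped in flight: trades CPU for network.
LOC=${LOC:-/data/tdb}     # hypothetical TDB directory
MONTH=$(date +%Y-%m)      # e.g. 2017-12; matches a per-month graph name

if command -v tdbdump >/dev/null 2>&1; then
  # Full dump as N-Quads (named graphs preserved), compressed on the fly.
  tdbdump --loc="$LOC" | gzip > tdb-full.nq.gz
  # On the receiving node, tdbloader reads gzipped input directly:
  # tdbloader --loc=/data/tdb-new tdb-full.nq.gz
fi
echo "latest monthly graph: http://example/graph/$MONTH"
```

With month-named graphs, a CONSTRUCT over just `GRAPH <http://example/graph/$MONTH>` keeps the nightly transfer to the newest data only.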
Our preferred approach for HA is to restrict additions to streamed files and timestamp each file, so the files act like a journal: the entire TDB can be rebuilt just by importing them in order.
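A minimal sketch of that journal pattern (the directory layout and file naming are assumptions, not how we actually name things): UTC timestamps sort lexicographically, so replaying files in name order replays them in arrival order.

```shell
# Append-only journal of timestamped addition files; rebuild = replay in order.
JOURNAL=${JOURNAL:-$(mktemp -d)}   # hypothetical journal directory

# Writer side: each batch of additions becomes one immutable, timestamped file.
stamp=$(date -u +%Y%m%dT%H%M%SZ)
printf '<http://example/s> <http://example/p> "v1" .\n' > "$JOURNAL/$stamp.nt"

# Rebuild side: import every file in timestamp (= lexicographic) order.
for f in $(ls "$JOURNAL" | sort); do
  : # tdbloader --loc=/data/tdb-rebuilt "$JOURNAL/$f"   # if Jena is on PATH
done
```

Because files are add-only and never rewritten, copying the journal to a fresh node and replaying it is all the recovery procedure there is.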


Dick
-------- Original message --------
From: "Dimov, Stefan" <[email protected]>
Date: 23/12/2017 00:19 (GMT+00:00)
To: [email protected]
Subject: Re: Operational issues with TDB
Our TDB is now about 32 GB and I see that some of its files are almost 5 GB (single 
file size), but let’s assume it can/will grow 3, 4, 5 times …

On 12/22/17, 2:16 PM, "Dick Murray" <[email protected]> wrote:

    How big? How many?
    
    On 22 Dec 2017 8:37 pm, "Dimov, Stefan" <[email protected]> wrote:
    
    > Hi all,
    >
    > We have a project which we’re trying to productize, and we’re facing
    > certain operational issues with large files, especially with copying and
    > maintaining them on the production cloud hardware (application nodes).
    >
    > Did anybody have similar issues? How did you resolve them?
    >
    > I would appreciate it if someone could share their experience/problems/solutions.
    >
    > Regards,
    > Stefan
    >
    
