Dear Kenneth,

Thank you very much for your help and for recommending Microlite BackupEdge.

I found another sophisticated application, 'cpio'; however, due to its lack of a GUI and its complex switches and syntax, it is of little help right now.

If you have the time, perhaps you could shed some light on the above in a private email.

It was a difficult ask, and probably only a few would understand fault tolerance.

At other times I have made backup copies of my working PC to an NFS drive, and similar things have happened, as not even Konqueror seems to have a verify option, and things sometimes go wrong.
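One common command-line way to verify copies to another location (an NFS mount, say) is to record checksums of the originals and re-check them against the copies. A minimal sketch, assuming GNU coreutils' sha256sum and using hypothetical /tmp paths in place of a real NFS mount:

```shell
# Example source and destination directories (hypothetical paths).
mkdir -p /tmp/src /tmp/dest
echo "important data" > /tmp/src/report.txt

# Record checksums of the originals.
( cd /tmp/src && sha256sum report.txt > /tmp/src.sums )

# Copy, then re-check the copies against the recorded sums.
# sha256sum -c prints "report.txt: OK" and exits 0 on a match.
cp /tmp/src/report.txt /tmp/dest/
( cd /tmp/dest && sha256sum -c /tmp/src.sums )
```

This catches silent copy corruption that a GUI file manager without a verify option would miss.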

I am not really after a file-compare option to test files copied/written. I need to further my understanding of the O/S I am now committed to, and I devote countless hours to reading new material to learn it.

Only if you have the time, of course. It helps me along the road with a new O/S and stops me feeling quite so inadequate in a new environment.

You have probably guessed I have been around in IT for a long time now, approximately 25 years.

I am happy if the explanation is just a bunch of URLs, and if you write to me personally rather than clog the list with a high-level discussion.

Kind Regards

Scott


On Wed, 2006-11-01 at 23:28 -0500, Kenneth Schneider wrote:
On Thu, 2006-11-02 at 14:14 +1000, Intrusion Detection Account 000
wrote:
> Ken you may be the only one who can shed any light on my question as
> it is difficult and needs an answer from a long standing UNIX/Linux
> user.
> That being said anyone else who knows the answer or can contribute
> please jump in
> 
> With Linux coming from a UNIX server background to now being a desktop
> solution, I was wondering what fault tolerance there is with respect to
> verifying reads and writes to the HDD or an NFS server.
> 
> Some server O/Ss employ HOTFIX, to ensure a file is written correctly to
> the hard disk, and Transaction Tracking.
> 
> Are any of these carried over into a Linux workstation? This goes with my
> question about verification of files.
> 
> I was faithfully backing up my /home directory and sub-directories and
> lost my entire system, which was my fault and a long story. When it
> came to recovering the backup archive, the archive header had been
> corrupted. I used KDAR to do the backup; it has no verify option,
> and KDAR could not read the archive file.
> 
> My question to you is: are there any inbuilt file-integrity mechanisms,
> such as those listed above, to ensure every write and read is actually
> performed and checked by some O/S fault-tolerance system?
> 

I tried to use KDAR and had nothing but problems with it. I think the
problems you experienced are with the application KDAR and not with the
Linux system itself. If _reliable_ backups are a must, I suggest using a
commercial backup program like Microlite BackupEdge. It employs good
verification to ensure a reliable backup has been performed. You can
download a 60-day eval for free to test.
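As a free stopgap before a commercial tool, GNU tar can verify an archive against the source right after writing it (the -W / --verify option). A minimal sketch, assuming GNU tar and hypothetical /tmp paths; note -W only works on uncompressed local archives:

```shell
# Sample data to back up (example paths only).
mkdir -p /tmp/data && echo "payload" > /tmp/data/notes.txt

# Create the archive and verify it against the source in one step (-W).
tar -cWf /tmp/data.tar -C /tmp data

# Listing the archive is a further read-back sanity check.
tar -tf /tmp/data.tar
```

This would have flagged the kind of unreadable-archive failure described above at backup time rather than at restore time.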

