Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-16 Thread Christian Völker

Hi all,

I just want to share my experience. I tried first with the default rsync
-avH, which ended up transferring all files, but creating the hardlinks was
not feasible because rsync consumed huge amounts of memory.
So I tried the two scripts I mentioned in my first post: the first checked
the hardlinks and wrote them into a file, and the second script took this
file and recreated the hardlinks on the destination. In theory. In practice
the file was created in a reasonable amount of time (approx. 12 hours), but
when I started the recreation the script used up to 15GB of memory (RAM)
and ran for 36 hours before I decided to stop that attempt.

In the end, Holger Parplies was so kind as to send me a BackupPC-related
script which worked similarly but much more efficiently. After 24 hours I
had all my hardlinks created, and the biggest memory consumer was "sort",
which peaked at nearly 3GB.

I guess Holger will release this script to the public soon and announce it here.


So I have moved my pool (1.5TB of data) from ext4 to xfs now.

Thanks and greetings!


Christian





Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-12 Thread higuita
Hi

On Mon, 9 Nov 2015 20:02:23 +0100, Christian Völker wrote:
> I want to transfer my pool from ext4 to xfs. The pool is around 1.3TB
> with approx 15 hosts backing up.

I haven't tested with a big pool, but you can try using plain
old tar, as it also takes care of hardlinks:

cd /var/lib/backuppc; tar cf - . | (cd /mnt; tar xf -)

It will take time, but it will read and transfer all the files;
the hardlinks are recorded inside the tar stream and rebuilt on extraction.
Again, I don't know how it will behave with so many files/hardlinks,
but tar is an old backup tool where almost all corner cases have been taken
care of, so it should work.

You can pipe to an ssh host running "cd /mnt; tar xf -" if you want to
transfer the files to another server.
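
For example, something along these lines should work ("newserver" and the
/mnt target path are just placeholders for your setup):

cd /var/lib/backuppc; tar cf - . | ssh newserver 'cd /mnt; tar xf -'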

Good luck
higuita
-- 
Naturally the common people don't want war... but after all it is the
leaders of a country who determine the policy, and it is always a 
simple matter to drag the people along, whether it is a democracy, or
a fascist dictatorship, or a parliament, or a communist dictatorship.
Voice or no voice, the people can always be brought to the bidding of
the leaders. That is easy. All you have to do is tell them they are 
being attacked, and denounce the pacifists for lack of patriotism and
exposing the country to danger.  It works the same in every country.
   -- Hermann Goering, Nazi and war criminal, 1883-1946




Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-11 Thread Holger Parplies
Hi,

Christian Völker wrote on 2015-11-11 21:10:56 +0100 [Re: [BackupPC-users] Best 
to Move a Large Pool to Different FS?]:
> [...]
> What does the BackupPC_tarPCCopy do in detail? Would it be faster?

it computes the pool file names from the file contents. If I'm not mistaken,
this involves reading and decompressing each complete file. It won't be fast
in any case. "Faster"? Maybe, maybe not. I wish I could find some free time,
because the task is really not *that* difficult; it's just not something
general-purpose software can handle well.

Regards,
Holger



Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-11 Thread Christian Völker
Hi all,

I am still not done with the migration.
As stated in my initial post, I started with the approach described here:

http://roland.entierement.nu/blog/2013/12/02/rsyncing-a-backuppc-storage-pool-efficiently.html

So I have already transferred all cpool data to the new storage. "Just" the
hardlinks are missing now.
I started the restore-hardlinks.py from the above site.
It went up to 14GB of memory (RAM!) used. Even though the BackupPC machine had
only 8GB, I added a 24GB swapfile, which ended with the machine swapping
heavily (I/O waits were low, though), so I stopped after 36 hours.
As /var/lib/BackupPC is a drbd device, I created a second machine on a
different ESX host and assigned 32GB of memory to it. I started the restore an
hour ago, and it quickly went up to 14.4GB of memory usage with low CPU and
low I/O waits. restore-hardlinks is now running (at least without needing
swap) and I hope it will come to an end soon.
The tool is recommended as an efficient way to copy the data. If 14.4GB of
memory usage counts as efficient, how much would the full rsync -H use?

Stephen, I am unsure about your recommendation:


> 6. Copy the pc directory using BackupPC_tarPCCopy:
>  su -c /bin/bash backup
>  cd /srv/BackupPC
>  mkdir -p pc
>  chown backup pc
>  cd pc
>  /opt/BackupPC/bin/BackupPC_tarPCCopy /old/BackupPC/pc | tar xvPf - 
What does the BackupPC_tarPCCopy do in detail? Would it be faster?

Greetings

Christian






Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-10 Thread stephen

Christian,

If you can resign yourself to staying on ext4, dumping and restoring the
filesystem may be your best bet. You can resize it afterwards (and/or
beforehand, if needed) to the desired size.
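
A rough sketch of that route with dump/restore (the device name and mount
points are placeholders; ideally the old filesystem is unmounted or mounted
read-only while you dump it):

 cd /srv/BackupPC
 dump -0f - /dev/vg0/backuppc_old | restore -rf -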


If that's not possible, the second best solution is to stand up the new 
server from scratch with no data, disable backups on your old server, wait 
for the new server to build a sufficient backup history (1, 2, 6 months, 
whatever), then kill off the old server. If you do this, consider starting 
with BackupPC v4 on the new server to eliminate this problem in the future.


But it is possible to move the backups if you're determined. Here's my 
cookbook method that I've successfully used in the past. The last time I 
did this, I moved ~6TB cpool on a server with 4GB of RAM. It did take days 
(about 5 days if I remember, over 1Gb enet) during which time I stopped as 
many processes as possible, including BackupPC, in order to keep the data 
static.


Also note that I install BackupPC to /opt/BackupPC and store my BackupPC 
data in /srv/BackupPC, which are non-standard locations. Adjust to suit 
your needs. Also note that this worked *for me*. I cannot guarantee it will 
work for you. YMMV; good luck.


-

cat migrate-from-old-server


BackupPC uses hard links within the (c)pool filesystem for deduplication. 
This makes normal copying with rsync difficult (it uses too much RAM 
keeping track of inodes attempting to preserve hard links).


The following procedure works. It assumes only the cpool has valid data. If 
not compressing, adjust to use pool instead. If using both, do both.


0. Install BackupPC on the new server. Disable BackupPC on new and old 
servers. The data should be quiescent while being manipulated.
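
For example, on a Red Hat-style system with a SysV init script (adjust for
your distribution and init system):

 service backuppc stop
 chkconfig backuppc off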


1. Configure the new storage on the new server. The use of an external (SAS,
FC, iSCSI) array is highly recommended to ease future mobility, as is the use
of LVM (lvextend, lvremove ftw). If there is a chance you will want to shrink
the filesystem, remember that XFS can grow but cannot shrink, so you may wish
to consider ext4 if in doubt. See the sketch below.
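
A minimal sketch of such a setup (volume group name, LV name and size are
placeholders):

 lvcreate -L 2T -n backuppc vg_data
 mkfs.xfs /dev/vg_data/backuppc
 mkdir -p /srv/BackupPC
 mount /dev/vg_data/backuppc /srv/BackupPC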


2. Temporarily mount the old storage on the new server. This may require 
being creative. It may also require a multi-step process (that is, running 
these steps on the old server to create a new, more mobile, datastore which 
can be moved to the new server). Hint: DAS is fast but NFS does work.
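
For the NFS route, something like this on the new server should do
("oldserver" and the export path are placeholders; the old server must export
the directory, and mounting read-only protects the old data):

 mkdir -p /old/BackupPC
 mount -t nfs -o ro oldserver:/var/lib/BackupPC /old/BackupPC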


3. Ensure that new storage is mounted at /srv/BackupPC and the old storage 
is mounted at /old/BackupPC.


4. Open a screen session, as root, on the new server. This is recommended 
because some of these steps take a long time to complete.
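
For example, start a named session that you can later reattach to with
"screen -r backuppc-migration":

 screen -S backuppc-migration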


5. Copy the cpool using any technique, ignoring hard links. We'll use tar:
 cd /old/BackupPC
 tar -cpf - . | ( cd /srv/BackupPC ; tar -xvpf - )

6. Copy the pc directory using BackupPC_tarPCCopy:
 su -c /bin/bash backup
 cd /srv/BackupPC
 mkdir -p pc
 chown backup pc
 cd pc
 /opt/BackupPC/bin/BackupPC_tarPCCopy /old/BackupPC/pc | tar xvPf -

This can be done without mounting both sets of storage on the new server 
(piping commands over ssh) but with additional complexity.
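
A rough sketch of that ssh variant ("oldserver" and the old server's data path
are placeholders; run the extraction side as the backup user in the new pc
directory, after the (c)pool has already been copied):

 cd /srv/BackupPC/pc
 ssh oldserver /opt/BackupPC/bin/BackupPC_tarPCCopy /var/lib/BackupPC/pc | tar xvPf -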


7. Copy the BackupPC status file (/var/BackupPC/status.pl) and the
/etc/BackupPC directory from the old to the new server; set permissions
identically. If you make changes, consult and understand the docs
(insufficient retention period(s) can quickly remove data you just took care
to copy!).
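
For example, from the new server ("oldserver" is a placeholder; the paths
match the locations used above, adjust for your install):

 rsync -a oldserver:/etc/BackupPC/ /etc/BackupPC/
 rsync -a oldserver:/var/BackupPC/status.pl /var/BackupPC/status.pl
 chown -R backup /etc/BackupPC /var/BackupPC/status.pl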


8. Start BackupPC on new server. Test GUI, confirm settings are correct. 
Test restores, backups. Fix problems. Lather, rinse, repeat. Fin! Enjoy 
favorite beverage.


Cheers, Stephen

[BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-09 Thread Christian Völker
Hi all,

I want to transfer my pool from ext4 to xfs. The pool is around 1.3TB
with approx 15 hosts backing up.

Well, the obvious rsync -avH is said to be too memory-hungry
because of the hardlinks.

So I started the way listed here:
http://roland.entierement.nu/blog/2013/12/02/rsyncing-a-backuppc-storage-pool-efficiently.html

The rsync of the cpool is done (took quite a while!)

I started the "store-hardlinks.pl" and up to now top tells me:

top - 20:01:12 up  3:32,  2 users,  load average: 1.36, 1.08, 1.03
Tasks: 107 total,   1 running, 106 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.3%us,  0.8%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.2%si, 
0.0%st
Mem:   8061568k total,  7778968k used,   282600k free,   442732k buffers
Swap:  4063228k total,     7672k used,      406k free,    12848k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1644 root  20   0 5760m 5.5g 1008 D  1.3 71.5   7:09.30 store-hardlinks

So it consumes already nearly 6GB of memory!

Does anyone have a better idea how to do the transfer in a reasonable amount
of time with reasonable memory consumption?

Greetings

Christian


_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/