[BackupPC-users] Partial backup is not saved

2019-04-29 Thread Tapio Lehtonen
I split a host to two backups, because dump directory contains virtual
machine dump files which are big and are taken every night. So I needed
shorter keep time for those backups to not run out of disk on the backup
host. But this new backup setup does not seem to work. It takes a long
time, but XferLOG.bad shows this:

>   create   644   0/0 100609398737 
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_25-00_16_31.vma.lzo
>   pool 644   0/0   11286 
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.log
> finish: removing in-process file 
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.vma.lzo
> Child is aborting
> Done: 89 files, 910772510921 bytes
> Got fatal error during xfer (aborted by signal=ALRM)
> Backup aborted by user signal
> Not saving this as a partial backup since it has fewer files than the prior 
> one (got 89 and 89 files versus 176002)

Does the last line mean BackupPC thinks this backup ought to have 176002
files, and since only 89 files got backed up, this backup is thrown away?

The same host is backed up twice: I added an exclude for that dump/
directory to the original backup, and this new backup backs up only the
dump/ directory.
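
A minimal sketch of how such a split might look in per-host config files (the file names, paths, and retention value here are assumptions for illustration, not taken from the actual setup):

```perl
# Hypothetical /etc/backuppc/pc/myhost.pl -- the original backup,
# now excluding the big VM dump directory:
$Conf{BackupFilesExclude} = { '*' => ['/bupool/dump'] };

# Hypothetical /etc/backuppc/pc/myhost-dumps.pl -- a second "host"
# entry pointing at the same machine, backing up only the dump
# directory with a shorter retention:
$Conf{ClientNameAlias} = 'myhost';
$Conf{BackupFilesOnly} = { '*' => ['/bupool/dump'] };
$Conf{FullKeepCnt}     = 1;
```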


-- 
Tapio Lehtonen
OSK Satatuuli http://satatuuli.fi/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC writes to disk very slow

2018-12-03 Thread Tapio Lehtonen
Out of curiosity, have you experimented with other filesystems? I have a 
couple of relatively large setups (pool at ~9 terabytes) with ext4, and 
those still crunch backups happily.




Previously I have used ext4; the only problem was once running out of 
inodes. But I find it hard to believe XFS with default options would be 
so much worse than any other filesystem with default options.


XFS has been a bit hit and miss for me. I know many swear by it, but 
where I've tested it I've hit all kinds of random problems, to the degree 
that I haven't bothered with it any more.




I have another host, same model and same RAID card, but the disks are two 
300 GB SAS disks in RAID1. There I got 11.2 MB/s reading over 
100 Mbit/s Ethernet, and 127 MB/s with dd reading from /dev/zero. I got 
the speed down to 23.9 MB/s using a 4k block size.
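
A rough way to reproduce this kind of comparison (the path and sizes are examples; oflag=direct bypasses the page cache so the figure reflects the disk rather than RAM):

```shell
# Sequential write at a large block size vs. a 4k block size.
# /srv/ddtest is a placeholder path on the filesystem under test.
dd if=/dev/zero of=/srv/ddtest bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/srv/ddtest bs=4k count=262144 oflag=direct
rm -f /srv/ddtest
```

dd prints the achieved throughput on stderr after each run.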


I'll have to examine the problem host more.

I've recently been experimenting with separating the pc and pool 
directories, so that the pc directories are on SSD storage and the pool 
on HDDs. The jury is still out on real-life speedups from this, but it 
seems possible with BPC4.
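
One way such a split might be done, as an untested sketch (the paths are the Debian defaults and the SSD mount point is a placeholder):

```shell
# Stop BackupPC, move the pc tree to SSD storage, and leave a symlink
# behind so the TopDir layout is unchanged from BackupPC's view.
systemctl stop backuppc
mv /var/lib/backuppc/pc /ssd/backuppc-pc
ln -s /ssd/backuppc-pc /var/lib/backuppc/pc
systemctl start backuppc
```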


Interesting approach. Have to experiment when I set up next BackupPC host.

--
Tapio Lehtonen
OSK Satatuuli


[BackupPC-users] BackupPC writes to disk very slow

2018-12-02 Thread Tapio Lehtonen

On the new BackupPC host, backups run very slowly. I believe I have
determined that the network connection is not at fault: from a Windows
machine Speedtest.net shows a little under gigabit speeds, and from the
BackupPC host speedtest-cli shows over 800 Mbit/s.

But writing to disk is slow, about 3 MB/s. The disk system is hardware
RAID1 with two 4 TB SATA disks, and the filesystem is XFS. In fact it is
not just BackupPC: if I download large files, the speed drops to
about 3 MB/s.

01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS
2108 [Liberator] (rev 05)

root@fuji:/srv# xfs_info /dev/sdb2
meta-data=/dev/sdb2              isize=512    agcount=4, agsize=242689472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=970757888, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=474002, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

I ran Phoronix Test Suite, results:
http://taleman.fi/Fujitsu_RX300/fuji-diskbench-2018-11-29/

Those measurements show 4k-block-size random IO speeds down to 1
MB/s. With larger block sizes, sequential write speeds are very
good, better than I need with a gigabit connection.

Is my setup somehow sub-optimal? Why would BackupPC write 4k blocks
and use random IO? I noticed the slowness with the first big backup;
the disk was almost empty, so BackupPC could just stream to disk,
which to my mind should run at disk media speed, which should be over
100 MB/s even on these slowish SATA disks.

I did not think disk system speed would be an issue, so I did not
think about block size, a separate log device, etc. when creating the
filesystem. I chose XFS so I do not need to worry about running out
of inodes.

The setup is all defaults: BackupPC on Debian GNU/Linux 9.6.

Why is writing to disk so slow?

What can I do to make it faster? The current situation is annoying: I
got a gigabit connection so backups would go fast, and ended up with a
system where a 0.03 Gbit/s connection would be just as good.

Possible solutions?
===

The host also has two SSD disks, where the OS is installed. The disks
have about 150 GB of unpartitioned space, so I could put the XFS log or
journal there. But it looks like this must be set up at mkfs time, so
now I would need to xfsdump, make a better filesystem, and xfsrestore.
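
That migration could look roughly like this (an untested sketch; the device names are placeholders, and mkfs destroys the filesystem, so dump first):

```shell
# Dump the pool to scratch space that is NOT on the filesystem being
# rebuilt, then recreate it with an external log on an SSD partition.
xfsdump -l 0 -f /scratch/pool.xfsdump /srv
umount /srv
mkfs.xfs -f -l logdev=/dev/sdY1,size=512m /dev/sdb2
mount -o logdev=/dev/sdY1 /dev/sdb2 /srv
xfsrestore -f /scratch/pool.xfsdump /srv
```

Note that an external log device must also be named (logdev=) at every subsequent mount.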

A bigger block size? I have not yet looked into the sizes of the files
that get backed up. The backup client hosts are web servers, e-mail
servers and name servers. The performance tests indicate small block
sizes are slow in random IO.
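
The slow 4k random-IO case from the benchmark can be reproduced in isolation with an fio job file along these lines (a sketch; fio must be installed, and the directory and size are placeholders):

```ini
; randwrite-4k.fio -- hypothetical job approximating the slow case.
; Run with: fio randwrite-4k.fio
[randwrite-4k]
ioengine=libaio
rw=randwrite
bs=4k
size=1g
direct=1
directory=/srv
```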

--
Tapio Lehtonen
OSK Satatuuli


Re: [BackupPC-users] BackupPC-users Digest, Vol 147, Issue 11

2018-10-18 Thread Tapio Lehtonen

backuppc-users-requ...@lists.sourceforge.net wrote on 18.10.2018 at 15.08:




How do I make an offline archive of BackupPC backups for some host(s)?

Is there any tool to do this? Rsync of /backuppc-pool is not an option.

There is a chapter "Archive Functions" in the BackupPC documentation. 
Click the link in the left sidebar of the BackupPC browser interface.


--
Tapio Lehtonen
OSK Satatuuli


[BackupPC-users] Which file system for data pool?

2018-08-14 Thread Tapio Lehtonen
I'm building a BackupPC host with two SSD disks and two 4 TB rotating 
disks connected to an LSI Logic / Symbios Logic MegaRAID SAS 2108 
[Liberator] (rev 05). The operating system is Debian GNU/Linux 9.5. The 
plan is to put the OS on the SSD disks in RAID1, and the BackupPC data 
pool on the rotating disks in RAID1. The SSD disks are 240 GB each; I'm 
open to suggestions on how to use part of them for a cache or journal 
device.


That disk controller has reasonably performant RAID with battery backup, 
so I prefer using those features. Thus ZFS is not a good fit; my 
understanding is that ZFS should be used with plain host bus adapters.


I'm thinking XFS, so inode allocation is not a problem (previously I 
asked on this mailing list how to recover from running out of inodes). 
What I have read indicates XFS is equal to or better than ext4 for most 
features.


I could not find recent recommendations for the file system to use with 
BackupPC. The old ones I found say ReiserFS is good; it probably is, 
but it has not been much maintained recently.


So, any recommendation for file system?
--
Tapio Lehtonen
OSK Satatuuli


[BackupPC-users] Out of inodes, how to recover?

2018-05-15 Thread Tapio Lehtonen
Backing up a single desktop computer, the BackupPC host ran out of inodes 
at 15 million inodes used. Reading old messages from this list, I was 
surprised to learn BackupPC uses only one (or two?) inodes per backed-up 
file, no matter how many hard links are used. I am thus at a loss to 
explain how 15 million inodes is not enough for backing up this single 
desktop computer.


But to recover from this situation, I plan to reduce the number of full 
backups to keep, that is set $Conf{FullKeepCnt} to something lower.


Question: the host is already out of inodes on the BackupPC partition; 
can it still remove the old backups made obsolete by the lower 
FullKeepCnt, so that old files are removed and inodes freed? I assume 
this happens during the nightly cleanup runs, so I have to wait until 
the next day to find out.
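
The change itself is a one-line host override; a sketch (the file name and value here are assumptions, not recommendations):

```perl
# Hypothetical per-host override, e.g. in /etc/backuppc/pc/hostname.pl:
# keep fewer full backups so the nightly cleanup can expire the rest
# and free their inodes.
$Conf{FullKeepCnt} = 2;
```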


From what I learned from reading discussions, copying the pool to a 
larger disk with more inodes is not feasible. It also seems to me that 
converting the current ext4 filesystem to some other filesystem without 
a hard inode limit is iffy. So it is time to start planning a new 
BackupPC host and use a lower bytes-per-inode ratio, or a filesystem 
with dynamic inodes.
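
For the new host, either option is a mkfs-time choice (an untested sketch; the device name is a placeholder, and mkfs destroys whatever is on it):

```shell
# ext4 with a lower bytes-per-inode ratio: one inode per 4 KiB of
# space instead of the 16 KiB default, i.e. roughly 4x as many inodes.
mkfs.ext4 -i 4096 /dev/sdX1

# ...or XFS, which allocates inodes dynamically as needed.
mkfs.xfs /dev/sdX1
```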


--
Tapio Lehtonen
OSK Satatuuli


Re: [BackupPC-users] BackupPC_serverMesg has no effect? (Tapio Lehtonen)

2017-07-24 Thread Tapio Lehtonen



Date: Sat, 22 Jul 2017 10:42:08 -0500
From: Les Mikesell <lesmikes...@gmail.com>
To: "General list for user discussion, questions and support"
<backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] BackupPC_serverMesg has no effect?
Message-ID:
<caoagvpz_tpb326et782zlh47h2hfcmkmjw84j4kne5bh4on...@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Jul 22, 2017 at 10:21 AM, Tapio Lehtonen
<tapio.lehto...@satatuuli.fi> wrote:

Running backuppc 3.3.0-2+deb8u1 on Debian GNU/Linux 8.8 Jessie.

Used command
/usr/share/backuppc/bin/BackupPC_serverMesg backup 192.168.42.2 humppa
backuppc 1
to force full backup.

Not sure about the command line problem, but I used to occasionally
force full runs to start on a host or two as I was leaving work on
Fridays using the web interface so they would have the weekend to
complete if necessary - and to correct any time-skew that might have
happened.  This was slightly before the blackout time would end so
there would not be a problem with concurrent scheduled runs.

I used to do just that, but figured it would be convenient to let cron 
do the job. And users say they want to work late in the evening, and a 
running backup slows the server down too much.




[BackupPC-users] BackupPC_serverMesg has no effect?

2017-07-22 Thread Tapio Lehtonen

Running backuppc 3.3.0-2+deb8u1 on Debian GNU/Linux 8.8 Jessie.

Used command
/usr/share/backuppc/bin/BackupPC_serverMesg backup 192.168.42.2 humppa 
backuppc 1

to force full backup.

The request appears in the log but seems to have no effect.

2017-07-21 15:07:47 Unknown command backuphumppa backuppc 1
2017-07-21 15:09:38 User backuppc requested backup of humppa (192.168.42.2)
2017-07-21 16:00:00 Next wakeup is 2017-07-21 17:00:00
2017-07-21 17:00:00 Next wakeup is 2017-07-21 18:00:00
2017-07-21 18:00:00 Next wakeup is 2017-07-21 19:00:00
2017-07-21 19:00:00 Next wakeup is 2017-07-21 20:00:00
2017-07-21 20:00:00 Next wakeup is 2017-07-21 21:00:00
2017-07-21 20:00:26 Started incr backup on alice (pid=9291, share=/)
2017-07-21 20:13:03 Started incr backup on alice (pid=9291, share=/opt/Varasto3)
2017-07-21 20:28:53 Started incr backup on alice (pid=9291, share=/opt/Varasto4)
2017-07-21 20:29:06 Started incr backup on alice (pid=9291, share=/opt/Varasto)
2017-07-21 20:30:15 Started incr backup on alice (pid=9291, share=/opt/Varasto1)
2017-07-21 20:30:16 Finished incr backup on alice
2017-07-21 20:30:16 Running BackupPC_link alice (pid=9488)
2017-07-21 20:30:20 Finished alice (BackupPC_link alice)
2017-07-21 21:00:01 Next wakeup is 2017-07-21 22:00:00
2017-07-21 21:45:02 User backuppc requested backup of humppa (192.168.42.2)
2017-07-21 22:00:01 Next wakeup is 2017-07-21 23:00:00
2017-07-21 23:00:01 Next wakeup is 2017-07-22 01:00:00
The second request at 21:45 is from cron. I was expecting the backup to
start at the next wakeup, or when the blackout period ends at the latest.


The backup for humppa finally starts at 01:44, but this is from the 
normal schedule.

2017-07-22 01:44:29 Started incr backup on humppa (pid=10391, share=/)
2017-07-22 01:48:51 Finished incr backup on humppa

I tried to force the full backup to run Friday night, but it seems the 
serverMesg backup does not have any effect. I googled for old mailing 
list entries and found out about serverMesg and the syntax it uses. I 
also read that this is the way backups should be started from the 
command line, not using BackupPC_dump.
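
The "Unknown command backuphumppa" log line suggests the arguments are being merged before they reach the server. A crontab entry along these lines might avoid that (a sketch with the Debian path assumed; the key points are running as the backuppc user and keeping the command on a single line with plain spaces between arguments):

```shell
# Hypothetical /etc/cron.d/backuppc-full entry.
# Argument order: backup <hostIP> <host> <user> <doFull>
45 21 * * 5 backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup 192.168.42.2 humppa backuppc 1
```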


The reason I am trying to do this from cron is to correct the tendency 
of the backup start times to creep forward. Eventually the full backups 
start so late on Monday morning that they are still running during 
working hours. I remember reading about this on this mailing list some 
time ago, but did not find that list entry now that I tried to do it 
myself.




[BackupPC-users] Ran out of inodes

2017-07-14 Thread Tapio Lehtonen
Running BackupPC 3 on Debian Wheezy. I ran out of inodes on a 250 GB
filesystem; max inodes was 15 million. Can the nightly cleanup still run
and maybe release some inodes from the oldest backups?

Since the filesystem is ext4, I can not increase max inodes. Would it
reduce the need for inodes if I reduced the number of backups to keep?

My guess is users have lots of e-mails stored, and since those tend to
be small, they eat up the inodes.
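
A quick way to see inode pressure per filesystem (the pool path is the Debian default and may differ):

```shell
# IUse% at 100% means file creation fails with "No space left on
# device" even though data blocks may still be free.
df -i /var/lib/backuppc
```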

-- 
Tapio Lehtonen
OSK Satatuuli http://satatuuli.fi/


Re: [BackupPC-users] Backing up Windows 7 Pro, not enough permissions

2017-05-29 Thread Tapio Lehtonen

On 16.05.2017 at 01:32, Holger Parplies wrote:

Hi,

Michael Stowe wrote on 2017-05-15 09:58:08 -0500 [Re: [BackupPC-users] Backing 
up Windows 7 Pro, not enough permissions]:

On 2017-05-15 05:20, Tapio Lehtonen wrote:

[...]
The share C is shared to network and user backuppc has full rights.
[...]


There are a few important things to understand here.  One is that not
all transport mechanisms are created equal when backing up Windows
systems.


what a nice way to say that :-).

I'm not a Windoze expert at all, but I vaguely remember you (Michael) stating
in the past that there is a difference between the default C$ share and an
explicitly created share ... assuming my memory is correct, might that be
the issue here? I.e. does anything change if you (Tapio) explicitly create
a share and use that (or use the C$ share if you were previously using
something different)?



I created a new share, Users, for the users' home directories, and gave 
the backuppc user permissions to that share. That seems to have done the 
trick: now the home directories are backed up, including the 
subdirectories therein.


It seems there is a difference between the pre-existing C share and a 
share I create myself.


Now only the users' home directories are backed up, not other files, but 
the home directories are the important ones. I think I will do without 
backing up the OS files and installed programs.




At any rate, these reasons are why I personally switched to a
combination of rsync and vshadow (to handle open files) and put together
a package to install the proper files on the client side.


If I were in the unlucky situation of having to deal with Windoze hosts,
that is what I would use. From what I have read on this list, it seems
to be the best choice.

Regards,
Holger




--
Tapio Lehtonen
OSK Satatuuli


[BackupPC-users] Backing up Windows 7 Pro, not enough permissions

2017-05-15 Thread Tapio Lehtonen

Server is Debian 8.7, backuppc version 3.3.0-2+deb8u1

Backing up Linux computers works.

I have not been able to back up Windows 7 Professional. On the Windows 
machine I have a local backuppc user; that user belongs to the 
Administrators and Backup Operators groups. The share C is shared to the 
network, and the backuppc user has full rights.


Still the home directories of other users are empty. Only
C/Users/backuppc has content.

So, the BackupPC server can read files from the client, but not all 
files. It seems I can not figure out any more ways to give permissions 
to backuppc.


That Windows 7 machine is a client in a workgroup; the server is a Linux 
host running Samba 4.2.14. Maybe this is not relevant.


--
Tapio Lehtonen
OSK Satatuuli


[BackupPC-users] PingCmd and PingPath seem to have no effect?

2012-08-30 Thread Tapio Lehtonen
I need to back up a host that does not answer pings. I tried to modify 
PingCmd and/or PingPath, but it looks like the modifications have no 
effect.


I added these lines to the /etc/backuppc/pc/hostname file, trying 
several permutations:


#$Conf{PingCmd} = '/bin/true';
$Conf{PingPath} = '/bin/echo';
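
Note that the $Conf{PingCmd} line above is commented out with a leading '#', so only the PingPath override is active. A minimal sketch of an override that disables the ping check (the file name is a placeholder):

```perl
# Hypothetical /etc/backuppc/pc/hostname.pl entry (uncommented, unlike
# the line quoted above). /bin/true always exits 0, so the "ping"
# always appears to succeed; if this BackupPC version also parses the
# output for a round-trip time, a command echoing a fake "time=1 ms"
# line may be needed instead.
$Conf{PingCmd} = '/bin/true';
```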

Then, using the General tip from the Wiki, I can see BackupPC_dump is 
still using /bin/ping and stops because it does not get an answer.


I did get the backup by setting the client to answer pings; then 
BackupPC worked on the first try. But I would prefer the host not answer 
pings.


How do I stop BackupPC from aborting when a host does not answer pings?

--
Tapio Lehtonen
OSK Satatuuli