I think there's some confusion about what that clause in authorized_keys
does. According to [1], it executes the command for you. So when the
backuppc user opens the ssh connection, it runs:
'/usr/bin/rrsync /'
on the target. You don't need to "initiate" the rsync separately in this way.
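For reference, a forced-command entry in the client's authorized_keys would look something like this (the key material is a placeholder, and rrsync's argument is the directory subtree rsync is restricted to):

```
command="/usr/bin/rrsync /",no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAA... backuppc@backupserver
```

Whatever command the client sends over ssh is ignored; sshd runs rrsync instead, which in turn launches the appropriate rsync --server invocation.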
Hi Ian,
I'm not sure why it would suddenly stop working for you as that seems like a
completely legitimate solution.
FWIW, this is the command I use for unpingable clients:
$Conf{PingCmd} = '/bin/echo $host';
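Since BackupPC only cares about the command's exit status, echoing the host name makes the "ping" check always succeed. A minimal shell simulation (BackupPC expands $host itself; the hostname below is a placeholder):

```shell
# Stand-in for BackupPC's $host substitution
host=client.example.com
/bin/echo $host   # exits 0, so the host is considered alive
```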
Best of luck!
From: Ian via BackupPC-users
Sent:
Hi Tony,
If you're stuck at the ssh part, you won't be successful running a backup as
making the connection is the first step.
What happens when you try to ssh to this client from the backuppc account on
the backup server? I assume you've already tried ssh-copy-id but it's failing.
I'm having
This is one I found but have not tried:
https://gist.github.com/phoenix741/99a5076569b01ba5a116cec24a798d5f
It mentions being updated for 4.x in 2017, which is when 4.0 was
released.
From: backu...@kosowsky.org
Sent: Thursday, November 17, 2022 8:44 AM
Hi Edward,
I'm not sure how much help this list can be, as we are just a group of people
who administer our own installations of BackupPC. It's a very flexible
solution that can handle massive amounts of backup data and run
maintenance-free for years, so it's no surprise if it got forgotten.
You'll need to prepend your command with the following:
$sshPath -q -x -l backuppc $host curl...
or
$sshPath -q -x -l root $host curl...
Click on the hyperlink DumpPreUserCmd for more details and examples, such as:
'$sshPath -q -x -l root $host /usr/bin/dumpMysql';
Hi Chiel,
Since you're targeting /var, the directory to exclude would be /log, not
/var/log. Be sure to add it directly under the BackupFilesExclude config for
the /var share.
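As a sketch, the per-host config entry would look like this (keyed by share name; the exclude path is relative to the share root):

```perl
$Conf{BackupFilesExclude} = {
    '/var' => ['/log'],   # excludes /var/log when backing up the /var share
};
```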
From: chiel
Sent: Thursday, February 24, 2022 7:16 AM
To:
I don't know exactly what kind of reporting you're looking for, but I've been
using this for years:
https://github.com/moisseev/BackupPC_report
I have an ansible script that pulls the code from above, appends a .pl
extension to the executable, then puts it into /usr/share/BackupPC/bin/.
Then
Hi, Ghislain! That's a lot fancier than I was going for, but I can certainly
use what you've shared to solve my question.
Thank you for responding!
From: Ghislain Adnet
Sent: Thursday, May 27, 2021 7:41 AM
To: Mike Hughes
Cc: General list for user discussion
the configs are edited via the GUI.
Thanks for any tips!
Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project
Hi Dave,
You can always break a backup job into multiple backup 'hosts' by using the
ClientNameAlias setting. I create hosts based on the share or folder for each
job, then use the ClientNameAlias to point them to the same host.
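For example, in each per-host config file (the host and share names here are hypothetical):

```perl
# pc/web-var.pl — a "host" that only backs up /var/www on the real client
$Conf{ClientNameAlias} = 'server1.example.com';
$Conf{RsyncShareName}  = ['/var/www'];
```

A second host entry (say, web-home.pl) can alias the same client with a different share, so the jobs run and are scheduled independently.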
From: Dave Sherohman
Sent:
On Tue, Mar 16, 2021 at 4:01 PM Les Mikesell wrote:
> On Tue, Mar 16, 2021 at 2:22 PM Mike Weber
> wrote:
> >
> > I have some switches, with no ssh smb or sshfs
> >
> > I want to pull a config via http...
> >
> > can backuppc support http+https se
I have some switches, with no ssh smb or sshfs
I want to pull a config via http...
can backuppc support http+https server pulls?
Information Systems and Technology Director
(814) 273-8440 - SKF - Meet With Me: https://calendly.com/skfsupport -
Get Help: tick...@skyking.onedesk.com
Hi Taste (nice domain name!),
Are you including the $sshPath prefix as shown in the example?
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/bin/dumpMysql';
From: Taste-Of-IT
Sent: Friday, August 21, 2020 5:34 PM
To:
For what it's worth, I was able to resolve this (for now) by updating CPAN
itself. I noticed when running certain commands that it complained that CPAN
was at version 1.x and version 2.28 was available. It suggested running:
install CPAN
reload cpan
But those are clearly not bash
-users] [BackupPC-devel] BackupPC 4.4.0 released
On Thu, Jun 25, 2020 at 10:02 AM Mike Hughes <m...@visionary.com> wrote:
Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How can I
fix this while waiting for the patch to roll out?
Well, I'm not sure how to clean
Daily report:
https://github.com/moisseev/BackupPC_report
From: backu...@kosowsky.org
Sent: Thursday, June 25, 2020 8:42 AM
To: General list for user discussion
Subject: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing
It would be great if
Wednesday, June 24, 2020 12:42 PM
To: General list for user discussion, questions and support
Cc: Craig Barratt
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released
On Wed, Jun 24, 2020 at 12:19 PM Craig Barratt via BackupPC-users
<backuppc-users@lists.sourceforge.net>
_64
From: Richard Shaw
Sent: Tuesday, June 23, 2020 10:47 AM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released
On Tue, Jun 23, 2020 at 8:24 AM Mike Hughes <m...@visionary.
Thanks so much Richard! Will COPR installations auto-update via yum repository
updates or do we need to specifically run a COPR update manually?
From: Richard Shaw
Sent: Monday, June 22, 2020 7:01 PM
To: General list for user discussion, questions and support
is already doing this and how you sorted it out.
Thanks!
Mike
using BackupPC version 3.3.0
Thanks,
Mike
Hi Gandolf,
This is what I use to clean up disk space:
nohup /usr/share/BackupPC/bin/BackupPC_nightly 0 255 &
If I want to watch it work I'll use this:
tail nohup.out -F
Usually finishes in 5-10 minutes.
--
Mike
On Thu, 2019-12-19 at 16:08 +0100, Gandalf Corvotempesta wrote:
Hi to
and the output of DumpPreUserCmd is irrecoverable. This makes it
very hard to troubleshoot a problem with no log output.
@CraigBarratt, Please consider reversing/revising this update!
On Wed, 2019-08-28 at 08:33 -0500, Mike Hughes wrote:
> On Thu, 2019-08-15 at 14:54 -0500, Mike Hughes wr
" is on /dev/md2. Both on Linux
> (Ubuntu 18.04LTS) mdadm arrays.
>
> Am I wrong? Doesn't "include" override any "exclude" settings?
>
>
>
> On 9/29/19 9:02 AM, Mike Hughes wrote:
> > No, that is the default setting in BPC. So if your /home i
it helps.
>
> Kind regards,
>
> Jamie
> --
> From: Mike Hughes [mailto:m...@visionary.com]
> Sent: 30 September 2019 11:54
> To: General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> Subject: Re: [BackupPC-use
',
'--log-format=log: %o %i %B %8U,%8G %9l %f%L',
'--stats',
'--acls',
'--xattrs'
];
On the client machines in /etc/sudoers:
backuppc ALL=NOPASSWD: /usr/bin/rsync --server *
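On the BackupPC 4.x side, the matching pieces would be roughly as follows (paths and the backuppc user name are assumptions; adjust to your install):

```perl
# Connect over ssh as the unprivileged backuppc user...
$Conf{RsyncSshArgs}    = ['-e', '$sshPath -l backuppc'];
# ...but run rsync on the client under sudo, as permitted by the sudoers entry
$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';
```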
Kind regards,
Jamie
--
From: Mike Hughes [mailto:m...@visionary.com]
Se
No, that is the default setting in BPC. So if your /home is on a separate
partition, you either need to remove that setting or add the /home partition as
a backup target in addition to /.
Which option is best is up to you.
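Concretely, keeping --one-file-system, the second option would look like this (share paths assumed; list every mount point you want backed up):

```perl
$Conf{RsyncShareName} = ['/', '/home'];  # each partition listed explicitly
```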
On Sep 29, 2019 06:27, Bob Wooden wrote:
Thanks, Michael.
On Thu, 2019-09-19 at 15:20 +0100, Cogumelos Maravilha wrote:
> There's only a problem: the /home folder on some servers isn't
> getting rsynced.
...
> --one-file-system
Have you verified that the systems which are not working as expected
don't have separate partitions for /home?
anyone has a better solution.
Thanks!
Mike
This seems like a dumb problem to have but I'm struggling to figure out how to
tell BackupPC to ssh into the cygwin client as a local user.
The backuppc account was created locally on the machine to be backed up, thus
it does not exist in AD. I am able to log in using other AD accounts with no
On Wed, 2019-09-04 at 23:00 -0700, Craig Barratt via BackupPC-users wrote:
In addition to the higher log level, it would be helpful to see the rsync
command being run. Is there anything in the XferLOG file?
Craig
On Wed, Sep 4, 2019 at 6:44 PM Michael Huntley <mich...@huntley.net>
No responses. Too much detail? Let me rephrase it:
Windows rsync backup no worky!
plz halp!!!
:-D
On Thu, 2019-08-15 at 14:44 -0500, Mike Hughes wrote:
Working to add a few Windows clients to our BackupPC system. I have
passwordless ssh working and when I try kicking off a backup using the GUI
need to
confirm that the backup will hard-fail on an error from my user cmd, which is
required for me to be aware that a database or table dump was not successful.
On Thu, 2019-08-15 at 14:54 -0500, Mike Hughes wrote:
I use a DumpPreUserCmd to kick off a database dump script. This script used
to
the BackupPC logfile:
https://github.com/backuppc/backuppc/issues/285
I guess my question is, was this intentional? Do I need to manage my process
logfile outside of BackupPC?
Second question: are we still sending a termination signal if
UserCmdCheckStatus exits with a failure?
Thanks!
--
Mike
$ uname -r
3.0.7(0.338/5/3)
openssh 8.0p1-2
rsync 3.1.2-1
Any help appreciated!
Thanks!
--
Mike
On Wed, 2019-06-05 at 11:51 -0600, David Wynn via BackupPC-users wrote:
> debug1: Sending command: 192.168.1.6 rsync --server --sender
> -slHogDtprcxe.iLsfxC
> sh: 192.168.1.6: not found
This line tells us that your shell (sh) is trying to run the command:
"192.168.1.6" That looks like an IP
>From: Karol Jędrzejczyk
>I'm using BackupPC to back up a bunch of servers only. In this scenario
>any transfer error is critical.
Hi Karol,
I like to receive reports of any failures so I install BackupPC_report [1] on
the BackupPC server then I explore the logs from any that error out. I call
backed up in the past.
Thx
Mike
Hi Daniel,
It sounds like the exclusion rules aren’t working as you expect. If they were I
don’t think you’d see the errors even if the pool files had a problem since
they’d never be compared. I ran into problems when I set up mine too. If I
recall my confusion was around identifying the share
Yeah Ed, sorry this isn't clearer on the main page but Craig discourages
building from source. If you're on Cent/RHEL, use this repo instead:
https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/
From: Ed Burgstaler
Sent: Monday, November 26, 2018 12:38
To:
Hi Steve,
It looks like they are stored using reverse deltas. Maybe you’ve already seen
this from the V4.0 documentation:
* Backups are stored as "reverse deltas" - the most recent backup is always
filled and older backups are reconstituted by merging all the deltas starting
with the
>Jamie Burchell wrote on 2018-10-30 09:31:13 - [[BackupPC-users] BackupPC
>administrative attention needed email incorrect?]:
>> [...]
>> Yesterday, I received the following email from the BackupPC process:
>> [...]
>> > Yesterday 156 hosts were skipped because the file system containing
>> >
On Mon, Oct 22, 2018 at 1:21 PM Mike Hughes <m...@visionary.com> wrote:
A host was created to duplicate the cpool from another BackupPC server. It was
set to skip compression and successfully filled the uncompressed pool with
~300GB of data. On the advice of others, I decided
A host was created to duplicate the cpool from another BackupPC server. It was
set to skip compression and successfully filled the uncompressed pool with
~300GB of data. On the advice of others, I decided to use rsync in a cronjob
instead, so my intent was to delete this host and its data.
t 8:52 PM, Mike Hughes wrote:
>
> Another related question: Does it make sense to use rsync's compression when
> transferring cpool? If that data is already compressed, am I gaining much by
> having rsync try to compress it again?
> Thanks!
> From: Mike Hughes
> Sent: Friday,
Another related question: Does it make sense to use rsync's compression when
transferring cpool? If that data is already compressed, am I gaining much by
having rsync try to compress it again?
Thanks!
From: Mike Hughes
Sent: Friday, October 12, 2018 8:25 AM
a remote copy of the cpool,
pc and conf directories, to a place that BackupPC doesn't back up.
Craig
On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes <m...@visionary.com> wrote:
Hi BackupPC users,
Similar questions have come up a few times but I have not found anything
relating to r
the
synced pool individually without having to pull down the whole cpool and
reproducing the entire BackupPC server.
How do others manage on-prem and off-site backup synchronization?
Thanks,
Mike
This is just a basic framework to keep in mind when helping others learn
something that you already know. Adapt it as necessary to fit the situation.
I hope this helps.
Sincerely,
Mike
From: Michael Stowe
Sent: Sunday, September 16, 2018 11:39:51 AM
To:
er is on a large enough partition.
From: Mike Hughes
Sent: Friday, August 24, 2018 09:56
To: backuppc-users@lists.sourceforge.net
Subject: RE: [BackupPC-users] ver 4.x split using ssd and hdd storage - size
requirements?
>-Original Message-
>From: Johan Ehnberg <jo...@moln
>-Original Message-
>From: Johan Ehnberg
>Sent: Friday, August 24, 2018 09:25
>To: backuppc-users@lists.sourceforge.net
>Subject: Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size
>requirements?
>
>On 08/24/2018 04:52 PM, Mike Hughes
I think I've discovered a new level of failure. It started off with these
errors when attempting to rsync larger files:
rsync_bpc: failed to open
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: No space
left on device (28)
rsync_bpc: mkstemp
I'm seeing out-of-storage errors from rsync_bpc when transferring large (4 GB)
files:
rsync_bpc: failed to open
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: No space
left on device (28)
The volume supporting the 'pc' folder has 6 GB free and lives an SSD under
It’s a cloud service so I’m less concerned with the results of the thrashing
From: Greg Harris
Sent: Tuesday, August 14, 2018 08:53
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] Which file system for data pool?
You are probably already aware of
>-Original Message-
>From: Johan Ehnberg
>Sent: Tuesday, August 14, 2018 07:29
>To: backuppc-users@lists.sourceforge.net
>Subject: Re: [BackupPC-users] Which file system for data pool?
>
>
>On 08/14/2018 02:44 PM, Tapio Lehtonen wrote:
>> I'm building a BackupPC host, with two SSD disks
Mystery solved. The defaults included --one-file-system. Removed that, and all
partitions are being backed up.
From: Mike Hughes
Sent: Monday, August 13, 2018 08:54
To: 'General list for user discussion, questions and support'
Subject: RE: RsyncClientCmd --> RsyncSshArgs
Hi BackupPC us
Hi BackupPC users,
Just curious if anyone else has made the changes necessary to use a non-root
user account in the 4.X versions and run into any difficulty with incomplete
backups?
Thank you!
From: Mike Hughes
Sent: Friday, August 10, 2018 14:39
To: backuppc-users@lists.sourceforge.net
Transitioning from BPC 3.x to 4.x there seem to be some syntactic changes
regarding rsync & ssh commands. I was able to follow documentation that I think
lived on SourceForge where it was suggested to configure a non-root account for
ssh'ing and running rsync under sudo. After some
ook in the release page for each project to download
>> the latest tarballs. That will allow you to skip the first few steps.
>> But, yes, you still need a working compiler environment to build rsync-bpc
>> and backuppc-xs.
>>
>> Craig
>>
>> On Wed, Aug
: [BackupPC-users] installation help
Mike,
First, I would strongly recommend using the rsync-bpc 3.0.9 branch. The head
is based on rsync 3.1.2 but it hasn't seen as much testing and hasn't been
released. You should do a "git checkout 3.0.9" before running ./configure, eg:
git c
results installing the version identified in the Debian-based script?
Thank you!
From: Craig Barratt via BackupPC-users
Sent: Wednesday, August 8, 2018 00:37
To: backuppc-users@lists.sourceforge.net
Cc: Craig Barratt
Subject: Re: [BackupPC-users] installation help
Mike,
You have to build and install
I am new to git-based installations and need some help. I cloned the three
projects and I cd into the backuppc folder to run perl configure.pl and it
replies with:
[Cent-7:root@hostname backuppc]# perl configure.pl
You need to run makeDist first to create a tarball release that includes an
I am running backuppc version 3.
dpkg shows 3.3.1-2ubuntu3.1
server is Ubuntu 16.04
client is Ubuntu 14.04
It runs the transfer for a short period, then stops/hangs.
If I run it from the command line with the -f -v options, I see it going through 1609
files and stopping at the same point each time (did it
Now that I've poked this again, even a network error such as the client
went away while backup was in progress will leave behind these files. A
really unstable client could leave a lot of orphan rsyncTmp's.
Mike
like that.
Mike
to a
disk via Backuppc_tarCreate for offsite storage that's painful.
Mike
I use the smb transfer method. I guess it may not be loaded if another
method was used.
I have to ask because I made this mistake. Did you issue the smbstatus
command on the linux server running backuppc?
-Original Message-
From: terryls [mailto:backuppc-fo...@backupcentral.com]
Sent:
Mine stopped working when the backuppc server performed an auto update
and changed the Samba client.
I used Ariel Sanguinetti's solution to fix it.
-
just in case this helps someone, I had to downgrade samba to 4.1.6 by
doing:
apt-get install samba=2:4.1.6+dfsg-1ubuntu2
This fixed my problem, thank you Ariel.
From: Ariel Sanguinetti [mailto:ar...@trustedtranslations.net]
Sent: Monday, April 25, 2016 4:42 PM
To: backuppc-users@lists.sourceforge.net
Subject: [SPAM] Re: [BackupPC-users] Issues with SMB shares
Importance: Low
just in case this helps someone,
On 13-12-27 12:28 AM, Prem wrote:
Hi David,
I am also using KVM and a VM for backuppc, running on Ubuntu 12.04.
It can run without issues, but based on my observation you need to
allocate roughly 1 core per client that requires backup. For example,
if you have 10 clients then it's better to
On 13-12-18 07:42 AM, niraj_vara wrote:
HI
I have CentOS 5.6 and backuppc version 3.2.1. It's working fine. I want
to set it up so that when I add a host, backuppc takes a full backup the
first time and only incremental backups after that.
what goal are you
On 13-12-16 10:06 AM, Mark Rosedale wrote:
I'm working on bringing back a backuppc instance. It is very large, 3+ TB. The
issue I'm having is that e2fsck is taking an extremely long time to finish. It is
stuck on checking the directory structure. We are going on 48 hours.
So I'm wondering what
PaxHeaders-directories go away, which were
present in all directories.
Thanks for your support
Mike
On 06/28/2013 07:43 AM, Craig Barratt wrote:
Mike,
Is there
a way to increase the debug reporting level
wrote:
On Thu, Jun 27, 2013 at 2:48 PM, Mike Bosschaert insomn...@gmail.com wrote:
For some reason I CAN make incremental backups (which do report errors on
the long directory names). But the process does not crash.
That probably just means there aren't any new files to write there.
One detail
Thanks Craig and Stephen for taking the
time to dive into this, and excuse for my late reply (have been
out of town, could not access the backupserver).
My tar version is 1.22 (2009) as far as I could test, it supports
file and link-names longer than 100
/ --exclude=cache/
--exclude=lost+found ./home/www ./home/mysql ./home/mike/bin
./home/mike/Dropbox ./home/mike/werk ./etc ./varfull backup started
for directory /
Xfer PIDs are now 5551,5550
[ skipped 162019 lines ]
tarExtract: Unable to open
/var/lib/backuppc/pc/xxx.xxx.xxx.xxx/new/f%2f/fhome/fmike
tarExtract: Unable to open
/var/lib/backuppc/pc/xxx.xxx.xxx.xxx/new/f%2f/fhome/fmike/fDropbox/fMYSTIC-PAF
When I see this issue, here's what I do:
* Delay the next backup for 24h
* Make sure backuppc_nightly isn't going to run in the near future
* run, as backuppc, BackupPC_dump -f -v hostname
* figure out what the errors you get back mean, and how to solve them
* run, as backuppc,
On 12-08-27 09:57 AM, Les Mikesell wrote:
On Mon, Aug 27, 2012 at 4:19 AM, martin f krafft madd...@madduck.net wrote:
However, the status quo seems broken to me. If BackupPC times out on
a backup and stores a partial backup, it should be able to resume
the next day. But this is not what seems
On 12-08-14 08:56 AM, gshergill wrote:
Hi BackupPC community,,
I have just finished an install of BackupPC on Ubuntu 12.04 and have added
the public key to the server/s I wish to back up.
I am able to ssh to the server from the backuppc user.
However, there appears to be an issue with
On 12-08-15 12:52 PM, Les Mikesell wrote:
Does the device offer nfs as an option? If so, I'd use that instead of cifs.
Am I going about this in the wrong way? Should I be backing up to this drive
in a different manner? Right now, $Conf{XferMethod} = 'tar'; but I have
tried it set to smb
On 12-08-15 01:08 PM, gshergill wrote:
Any chance you know if you are able to set the download on BackupPC? Seem
unable to find it out how to...
If you're talking about rsync's transfer rate, add the argument
--bwlimit=xxx (which is a number in kbytes/second, ie 500 for 500
kbytes/second).
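With BackupPC 4.x this can go in RsyncArgsExtra; on 3.x you would append it to RsyncArgs instead (the limit value here is just an example):

```perl
$Conf{RsyncArgsExtra} = ['--bwlimit=500'];  # cap transfers at ~500 kbytes/sec
```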
I cannot believe that it's impossible to move backups of hosts from one
BackupPC instance to another. And actually I believe that
BackupPC_tarPCCopy is the right tool to do so. But I don't get how to
use it.
If you have enough RAM, you can rsync -H the directory in pc from one
machine to
On 12-07-21 02:42 PM, Timothy J Massey wrote:
For most people, a VM is going to make for a much less reliable
solution. They will be way too tempted to put the VM on the same
storage as their production hardware.
For someone who understands the dangers, and has a disaster recovery
On 12-07-13 01:38 PM, Bryan Keadle (.net) wrote:
Thanks for your reply. Yeah, we're using a NAS device, but not
necessary those small ones - using this Drobo B800fs
http://www.drobo.com/products/business/b800fs/index.php. So NFS
would be a protocol-based option for the data pool? Still,
On 12-07-13 01:50 PM, Bryan Keadle (.net) wrote:
I think I had a buddy try AoE and found it problematic - not yet ready
for prime time?
I've used it for a variety of things from software RAID to network
booting - no problems here?
Yes, NFS or iSCSI will work, but it is really a lot cheaper to just
throw some big drives in a linux box.
+1 to that.
And in all the questions you've
asked I don't remember any yet about getting offsite copies of the
archive, which is usually the one hard thing with backuppc that you
How big is your pool filesystem?
Please do sum up how long it finally takes.
The tar -P -xvpf took a bit under 3 days:
real    4002m18.324s
user    20m16.236s
sys     82m14.332s
Taking the system live right now, and I'll see if anything is broken.
Mike
some new disks on, so I will
look at that script the next time.
Mike
On Sun, 2 Oct 2011, Jeffrey J. Kosowsky wrote:
If you want to troubleshoot, I would do the following:
I'm currently running 3.1.0, so that probably answers why I'm seeing
these. Thought I was on 3.2 for some reason, I might try dpkg -i'ing in
3.2.1 from wheezy(testing) on a test system and
that inode.
I agree with Jeffrey, if there is a bug out there, I'd be interested in
hunting it down :). But first, we should try to make sure that the cause of
this attrib file strangeness is really within BackupPC.
And not the idiot operator who was looking at the wrong pool? :)
Mike
On Sun, 2 Oct 2011, Jeffrey J. Kosowsky wrote:
2 follow-ups would be helpful:
1. Is this true for the other non-top-level attrib files?
2. Do the other f-mangled files in the directory also have only a
single link?
This would happen if there is a problem in the linking stage. In
On Mon, 3 Oct 2011, Holger Parplies wrote:
almost all. Zero-length files are not pooled, so they will have a link count
of 1. Anything else below pc/host/nn should have at least two links (including
directories :).
Ok, that explains my sockets being nlink of 1.. They're 0 byte :)
On 29/09/11 10:28 PM, Adam Goryachev wrote:
Can I assume this is because the new HDD's perform better than the old?
In other words, would it be safe to assume you would get even better
performance using RAID10 with the new HDD's than you are getting with RAID6?
Yes, the new drives are several
Just finishing up moving one of my backuppc servers to new, larger disks,
and figured I'd submit a success story with BackupPC_tarPCCopy... I
wanted to create a new xfs filesystem rather than my usual dd and
xfs_growfs, as this thing has been in use since backuppc 2.1 or similar.
Old disks were
On 11-05-18 05:21 PM, Carl Wilhelm Soderstrom wrote:
On 05/17 01:25 , Mike wrote:
Has anyone tried using BackupPC and MooseFS (http://www.moosefs.org/)?
Thanks for the link. That looks like a pretty cool project.
and from initial appearances / testing, it runs pretty darn well, too.
We only
Has anyone tried using BackupPC and MooseFS (http://www.moosefs.org/)?
You can prefix the key in /root/.ssh/authorized_keys with something
like the following:
no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding,command=rsync
--server --sender -vlogDtprze.iL --ignore-errors --numeric-ids
--inplace . / ssh-rsa ...
This will force a ssh connection to