, '.' is the target, meaning 'current directory'. The -C option
tells tar to change its working directory to the specified path; it then
backs that directory up as '.' so the archive contains only relative paths.
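A minimal illustration (paths hypothetical):

  tar -cvf /tmp/share.tar -C /mnt/somehost/someshare .
  tar -tvf /tmp/share.tar    # every entry is listed as ./..., never an absolute path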
--
Les Mikesell
lesmikes...@gmail.com
On 8/1/2011 5:25 PM, Rory Toma wrote:
On 8/1/11 3:05 PM, Les Mikesell wrote:
On 8/1/2011 4:47 PM, Rory Toma wrote:
Let me say, the default command threw me for some time:
#$Conf{TarClientCmd} = '$tarPath -c -v -f - -C /mnt/$host/$shareName'
#. ' --totals';
-like interface). If
you can do it from smbclient but the backuppc connection isn't working,
then it may be the way the -N option works. I've forgotten what
version of samba introduced the change that made it fail.
--
Les Mikesell
lesmikes...@gmail.com
to do, just that it
only affects systems with certain versions of samba so it is worth
making sure that the failure is not due to some other issue first.
--
Les Mikesell
lesmikes...@gmail.com
doubt if most people would use ftp anyway.
--
Les Mikesell
lesmikes...@gmail.com
millions it might be
an issue.
--
Les Mikesell
lesmikes...@gmail.com
runs although the bytes/sec. measurement may be low because you
are only transferring changes. In normal use, you'll also probably skew the
days where fulls and incrementals run on different systems to get a mix of fast
and slow runs to cover more systems in your nightly window.
--
Les
in a VM. Worst case is probably a VM with an LVM on a virtual disk
with
sparse allocation (growing as needed).
Anything else with activity on the same physical disk competing for head
position.
--
Les Mikesell
lesmikes...@gmail.com
and haven't changed the location, it may be a permission problem that is
keeping the test hard link from working or perhaps the filesystem you are using
doesn't support hardlinks (vfat or a remote windows share).
--
Les Mikesell
lesmikes...@gmail.com
xferPids 16249
Got remote protocol 1667594309
That usually means there is some output from something in /etc/profile,
/etc/bashrc or ~/.bashrc (a message of the day, etc.). Normal rsync can
tolerate that, but with backuppc, there must be nothing sent before rsync
starts.
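If the client's startup files have to print something for interactive logins, one common fix is to guard the output near the top of ~/.bashrc (a sketch, not the only way):

  case $- in
    *i*) ;;       # interactive shell: let the motd/echo lines below run
    *)   return;; # non-interactive (backuppc's rsync over ssh): stay silent
  esac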
--
Les Mikesell
. Turning on keep-alives in ssh
might help if some equipment in the connection path is dropping the link
due to idle time.
You might use standard rsync to make a local snapshot copy of the
target, restarting as needed, then let backuppc copy that to keep a history.
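For the keep-alive idea, something like this in the backuppc user's ~/.ssh/config (host name and values illustrative):

  Host flaky-client.example.com
      ServerAliveInterval 60
      ServerAliveCountMax 5

And a restartable staging copy that backuppc can then archive might look like:

  rsync -aH --partial --timeout=600 root@flaky-client.example.com:/data/ /srv/staging/flaky-client/data/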
--
Les Mikesell
lesmikes
on the network to
stay connected. You can make it look somewhat like the real target if
you use a ClientNameAlias to connect it to the copy.
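For example, in that host's config file (names hypothetical):

  $Conf{ClientNameAlias} = 'localhost';   # back up the local staging copy under the remote host's name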
--
Les Mikesell
lesmikes...@gmail.com
and having it propagate to the drbd copy
instantly?
--
Les Mikesell
lesmikes...@gmail.com
(.deb/.rpm) earlier than 3.2 you'll need
to use the symlink/mount approach to keep the expected TopDir location
(normally /var/lib/backuppc) set by the package builder.
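For example (device name hypothetical):

  mount /dev/sdb1 /var/lib/backuppc      # big partition mounted directly at TopDir
  # or park the data elsewhere and symlink it into place:
  mount /dev/sdb1 /data/backuppc
  ln -s /data/backuppc /var/lib/backuppc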
--
Les Mikesell
lesmikes...@gmail.com
to the live partition while keeping
snapshots of old copies. I wouldn't have expected that to work. Are
they really layered correctly so the lvm copy-on-write business works?
--
Les Mikesell
lesmikes...@gmail.com
really only want one instance of backuppc running, it
doesn't matter what the nfs-server side thinks about the owner's name.
--
Les Mikesell
lesmikes...@gmail.com
in question, but there are several
other options: http://backuppc.sourceforge.net/faq/BackupPC.html
--
Les Mikesell
lesmikes...@gmail.com
than run
it outside of your blackout window - they won't run before IncPeriod has
elapsed
anyway.
I don't think wakeups are necessary for manually starting a backup unless it is
blocked by something at the time of the request.
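The settings involved, with illustrative values:

  $Conf{IncPeriod}       = 0.97;
  $Conf{BlackoutPeriods} = [ { hourBegin => 7.0, hourEnd => 19.5,
                               weekDays  => [1, 2, 3, 4, 5] } ];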
--
Les Mikesell
lesmikes...@gmail.com
to have all the missing pieces
to save a description of the disk layout and make a bootable iso that will
reconstruct it, but it would take some work to integrate the parts with
backuppc.
--
Les Mikesell
lesmikes...@gmail.com
it
on ;-).
Rsync has some sort of sliding block window to zoom in on changes, not sure
about inode/attribute changes that don't involve the file contents. They might
always be sent anyway. Checksum caching only affects the server side - the
sender has to do the reads anyway.
--
Les Mikesell
, or similar device between them?
Backups with few changes can let the connection time out leaving both ends
waiting for the other.
--
Les Mikesell
lesmikes...@gmail.com
in the process.
--
Les Mikesell
lesmikes...@gmail.com
filesystems you want to back
up. If you do that you do have to be careful to track layout changes
that might move things you want to a newly added filesystem, though.
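e.g. with the rsync method, one share per filesystem (mount points illustrative):

  $Conf{RsyncShareName} = ['/', '/home', '/var'];
  # adding --one-file-system to $Conf{RsyncArgs} keeps each share from crossing mounts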
--
Les Mikesell
lesmikes...@gmail.com
) or when copying huge files with differences
where the server has to uncompress the existing copy and reconstruct it.
After you have completed 2 fulls, you may see a speed increase on
unchanged files if you are using the --checksum-seed option.
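Enabling the cache means adding the option to both directions in config.pl (BackupPC 3.x rsync method; 32761 is the value the documentation uses):

  push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';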
--
Les Mikesell
lesmikes...@gmail.com
78.9 (%CPU)  3.4  178:52.82  BackupPC_dump
I think Linux counts disk I/O in CPU use.
On Thu, Jun 30, 2011 at 03:39:37PM -0500, Les Mikesell wrote:
Running in a VM imposes a lot of overhead. Running LVM on top of a file
based disk image pretty much guarantees that your disk block writes
won't
to
start over quickly, I'd make a new filesystem on your archive partition
(assuming you did mount a separate partition there, which is always a
good idea...) and re-install the program.
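Roughly (device names hypothetical, and this destroys the old archive):

  /etc/init.d/backuppc stop
  umount /var/lib/backuppc
  mkfs -t ext4 /dev/sdb1
  mount /dev/sdb1 /var/lib/backuppc
  # reinstall/re-create the pool directories, then start backuppc again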
--
Les Mikesell
lesmikes...@gmail.com
links you probably want them to run concurrently with
local runs since their disk use will be throttled by the bandwidth limit.
--
Les Mikesell
lesmikes...@gmail.com
their internet
connection, which is not really fast
That still doesn't answer the questions about whether you are mounting
the ftp filesystem as the root of backuppc's archive space, and if so,
has it ever really worked? I would not expect that setup to work at all.
--
Les Mikesell
is the TZ environment for the web server? Not sure about the exact
details but it is normal for time values to be stored as UTC and
converted for local time display according to the process handling it.
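If Apache has mod_env, one way to pin the CGI's zone (zone illustrative):

  SetEnv TZ America/Chicago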
--
Les Mikesell
lesmikes...@gmail.com
attempt. The 'unable to read 4
bytes' message is not very helpful but it means that rsync didn't start
up or connect correctly for one reason or another.
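The usual first test is to run the ssh part by hand as the backuppc user (host name hypothetical):

  su -s /bin/bash backuppc -c 'ssh -x -l root clienthost whoami'
  # should print 'root' and nothing before it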
--
Les Mikesell
lesmikes...@gmail.com
distributions usually have an automounter that will
mount filesystems on demand as they are accessed and unmount after a
timeout. The setup probably varies with the distribution.
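e.g. with autofs (file names and options vary by distribution):

  # /etc/auto.master
  /backup   /etc/auto.backup   --timeout=300
  # /etc/auto.backup
  usbdisk   -fstype=ext4   :/dev/sdc1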
--
Les Mikesell
lesmikes...@gmail.com
frequent wakeups where the actual
scheduling of the runs is controlled by other settings (which are
checked at each wakeup). Is there some reason that is a problem?
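i.e. the default is simply a list of hours at which to check:

  $Conf{WakeupSchedule} = [1..23];   # hourly; what actually runs is decided by the period/blackout settings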
--
Les Mikesell
lesmikes...@gmail.com
interface.
--
Les Mikesell
lesmikes...@gmail.com
. Note that ftp was added late in the game
and not all of the packaged (.deb, .rpm) versions are up to the latest
backuppc version, so you may or may not see it depending on what you
install. But rsync is definitely something you can control per-target.
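Per-target control just means setting it in that host's own config file (path varies by package):

  # e.g. /etc/BackupPC/pc/somehost.pl
  $Conf{XferMethod} = 'rsync';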
--
Les Mikesell
lesmikes...@gmail.com
?
It is the apache configuration that should be requiring authentication so check
that again. If you don't authenticate as an admin user or the owner of one or
more hosts you won't see much in the web interface.
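A typical stanza looks something like this (paths illustrative):

  <Location /backuppc>
      AuthType Basic
      AuthName "BackupPC"
      AuthUserFile /etc/BackupPC/apache.users
      Require valid-user
  </Location>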
--
Les Mikesell
lesmikes...@gmail.com
with encrypting data for tape output? What program(s) work
and can they run in a pipeline while writing the tape without slowing
down too much or is it best to write an intermediate file?
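The sort of pipeline being asked about, purely as a sketch (host, share, key, and tape device all hypothetical):

  BackupPC_tarCreate -h somehost -n -1 -s /someshare . \
    | gpg --encrypt --recipient backup@example.com \
    | dd of=/dev/nst0 bs=1M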
--
Les Mikesell
lesmikes...@gmail.com
one? I'd expect the scheduling
decisions to turn out badly.
--
Les Mikesell
lesmikes...@gmail.com
in the
backuppc hosts entry and make web logins with the same name. If you
don't give them admin access they can only see their own host(s) in the
web interface.
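The hosts file columns are host, dhcp flag, owner, and extra comma-separated users, e.g.:

  # /etc/BackupPC/hosts
  host         dhcp   user    moreUsers
  alice-pc     0      alice
  shared-box   0      bob     carol,dave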
--
Les Mikesell
lesmikes...@gmail.com
, but something local), then rsync
or ftp the resulting files to the offsite location. If you want finer
control of the archive generation, you can script your own with
BackupPC_tarCreate piped through gzip and split.
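A sketch of that script (host and paths hypothetical):

  BackupPC_tarCreate -h somehost -n -1 -s /home . \
    | gzip \
    | split -b 2000m - /stage/somehost-full.tar.gz.
  # then rsync or ftp the pieces in /stage to the offsite box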
--
Les Mikesell
lesmikes...@gmail.com
by the client a long time ago but I
can't remember if the backup was started by server polling or if
something else kicked it off.
--
Les Mikesell
lesmikes...@gmail.com
can copy back to another disk.)
Did you copy in a way that preserves the hard links within the archive?
--
Les Mikesell
lesmikes...@gmail.com
restored. You should also test restoring one so you know what kind of time to
expect when/if it is needed.
--
Les Mikesell
lesmikes...@gmail.com
snapshot to a new filename (perhaps with a timestamp),
there won't be any good way to handle it.
--
Les Mikesell
lesmikes...@gmail.com
the typical use would be to revive the latest copy after some sort
of disaster, I wouldn't rule out wanting older versions too. For
example if you had a security intrusion or an update-gone-wrong, you
might want to back out to something older and known-good.
--
Les Mikesell
lesmikes...@gmail.com
a slightly related
topic: has anyone looked at the recent freeNAS beta to see if its remote
replication would work for a backuppc archive (as in zfs snapshot
incrementals...)?
--
Les Mikesell
lesmikes...@gmail.com
approach would be to make a similar installation under VMware player or
Virtualbox that you could fire up on about any hardware and connect the USB
device to the virtual machine when you need it.
--
Les Mikesell
lesmikes...@gmail.com
which is slow and won't be pooled with
anything else. If the size and rate of change makes this impractical,
there are some more efficient approaches you could try that would make
an intermediate delta-based backup.
--
Les Mikesell
lesmikes...@gmail.com
a software parity raid using various drives. However, it does not
support hard links, only soft!
With the price of disks these days it seems like a waste of time to try
to accommodate drives that aren't what you need.
--
Les Mikesell
lesmikes...@gmail.com
be long and difficult with a data set that large.
Is there any chance the filesystem is corrupted? Or you have some sparse file
that becomes huge when you read through it?
--
Les Mikesell
lesmikes...@gmail.com
control.
--
Les Mikesell
lesmikes...@gmail.com
can go wrong.
If you are prepared for your disks to melt, you probably won't have a
big problem with the relatively unlikely scenario of filesystem corruption.
--
Les Mikesell
lesmikes...@gmail.com
failure.
If I quit using every filesystem type where I have seen data lost, I
probably wouldn't have a computer any more... Just assume that there is
some small risk to every copy of everything. And that risk will change
in unpredictable ways with future OS updates too.
--
Les Mikesell
On 5/24/2011 9:36 AM, Timothy J Massey wrote:
I ended up upgrading that system to 2GB of RAM merely so that I could
finish the fsck.
(Oh, and to a long-ago debate about would more RAM help BackupPC to do
its job: nope. The backups with 2GB took almost exactly the same amount
of time as the
in the location you want.
--
Les Mikesell
lesmikes...@gmail.com
be distributed across several machines
instead of needing space for a full copy of even a single instance of
the whole filesystem on any single machine or drive.
--
Les Mikesell
lesmikes...@gmail.com
/var/lib/backuppc/pc/host_name but they are compressed and the file names are
mangled so it is harder to access them directly. You might want to practice
using the BackupPC_tarCreate command line tool, though.
--
Les Mikesell
lesmikes...@gmail.com
, clustered DB that doesn't need a master node). There is
something called luwak that handles files as streams, but because the
chunking step hashes the key from the chunk contents (and thus
deduplicates with no reference counting) you can't ever delete anything.
--
Les Mikesell
?) have succeeded.
Is there some reason for using a NAS? The only real win would probably
be power consumption compared to slapping some big drives in an older PC.
--
Les Mikesell
lesmikes...@gmail.com
handle hardlinks - but it is hard to tell if it
does it well enough for backuppc. I'd expect the fuse layer to be the
bottleneck in the design - at least if you have several data servers.
--
Les Mikesell
lesmikes...@gmail.com
disk (or raid set) with
the backuppc archive on a separate set. You don't absolutely have to do
that but it will make life easier later when you want to separately
update/change the OS, move to a different box, or swap in larger drives.
--
Les Mikesell
lesmikes...@gmail.com
of
software raid1 you can recover the data from any single disk connected
to any physically compatible interface even if that's all that is left
of the original setup.
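e.g. with Linux md (device names hypothetical):

  mdadm --assemble --run /dev/md0 /dev/sdb1   # start the array degraded from the one surviving member
  mount /dev/md0 /mnt/recovered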
--
Les Mikesell
lesmikes...@gmail.com
On 5/17/2011 3:02 PM, Carl Wilhelm Soderstrom wrote:
On 05/17 02:30 , Les Mikesell wrote:
On 5/17/2011 2:06 PM, Carl Wilhelm Soderstrom wrote:
My advice is to get a 3ware RAID card and whatever disks you like for it.
There's some sharp corners on the management interface; but at least it
*has
for the test so you can see which
identities you are trying and perhaps why they fail. You probably need
to be using rsa2 or dsa keypairs with more recent ssh versions.
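i.e. run the same connection by hand with verbose output (host hypothetical):

  su -s /bin/bash backuppc -c 'ssh -v -l root clienthost whoami'
  # the debug1: lines show each identity offered and why it was refused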
--
Les Mikesell
lesmikes...@gmail.com
type and have an equal or larger amount of space on the
target (you should be able to resize larger after a copy).
--
Les Mikesell
lesmikes...@gmail.com
that the hard way.
--
Les Mikesell
lesmikes...@gmail.com
the install so everything lands in the
right place from the beginning (assuming that's not the only drive in
the box...). And you might consider making the drive a raid with a
'missing' member instead of using it directly. That way you can easily
add a mirror later for reliability.
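e.g. (hypothetical devices):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
  # later, to add the mirror:  mdadm /dev/md0 --add /dev/sdc1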
--
Les
both.
That's 'rsync -aH'. Hardlinks have enough of a performance impact that
the option was intentionally omitted from the ones bundled into 'a'.
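e.g. to copy an archive between disks (paths hypothetical):

  rsync -aH --delete /var/lib/backuppc/ /mnt/newdisk/backuppc/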
--
Les Mikesell
lesmikes...@gmail.com
the archive
goes (/var/lib/backuppc/) and if you get your partition mounted (or
symlinked) there before the install you don't have to move anything later.
--
Les Mikesell
lesmikes...@gmail.com
and for
the filesystems it understands it should only have to copy the used
blocks. I've never used it directly and don't use jfs myself, but I've
copied a lot of disks with clonezilla which uses partclone to do the
work on linux systems.
--
Les Mikesell
lesmikes...@gmail.com
. It understands deletions and old
files in locations under renamed directories where tar doesn't.
--
Les Mikesell
lesmikes...@gmail.com
user a shell in
the /etc/passwd file.
--
Les Mikesell
lesmikes...@gmail.com
Not sure though if sudo is a standard package with freebsd...
Even if it is, it should try to run the shell specified in the passwd
file for the user. Apparently the way backuppc is installed it doesn't
have a valid shell or a simple su would have worked.
--
Les Mikesell
lesmikes...@gmail.com
is that the linux/bsd flavors of su take different options.
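Roughly: the Linux (util-linux) su can be handed a shell explicitly, while the BSD one has no equivalent option, so sudo is the portable route:

  su -s /bin/sh backuppc        # Linux
  sudo -u backuppc /bin/sh      # works on either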
--
Les Mikesell
lesmikes...@gmail.com
On 5/3/2011 4:03 PM, Holger Parplies wrote:
Hi,
Les Mikesell wrote on 2011-05-03 15:14:08 -0500 [Re: [BackupPC-users] Not
able to use the backuppc account in freebsd]:
I'm surprised that sudo doesn't honor the user's shell.
really? It's really not surprising. The semantics of sudo is allow
or hit an OS-imposed limit
(memory/processes/file descriptors, etc.). Can you raise those limits
for the backuppc user?
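On Linux those are usually raised in /etc/security/limits.conf (values illustrative):

  backuppc   soft   nofile   8192
  backuppc   hard   nofile   16384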
--
Les Mikesell
lesmikes...@gmail.com
name or IP.
--
Les Mikesell
lesmikes...@gmail.com
it doesn't work. I'm asking how it works so I can draw my own
conclusions. That is what Open Source means, right?
Well, open source means there is at least one way to find out. But I
usually don't bother unless something goes wrong.
--
Les Mikesell
lesmikes...@gmail.com
clientalias didn't seem to get me
what I needed. And I found no instances of that in the basic
documentation.
Sorry, it is actually $Conf{ClientNameAlias}. You can use dummy
hostnames so you can control the schedule separately but override the
actual target with this setting.
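e.g. two dummy host entries with their own schedules, both aimed at the same machine (file paths vary by install):

  # pc/realhost-daily.pl
  $Conf{ClientNameAlias} = 'realhost.example.com';
  # pc/realhost-weekly.pl
  $Conf{ClientNameAlias} = 'realhost.example.com';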
--
Les Mikesell
copy that you get with a
rotating set of disks.
--
Les Mikesell
lesmikes...@gmail.com
it which partition to add to an already running
array.
--
Les Mikesell
lesmikes...@gmail.com
On 4/27/11 12:44 AM, Jeffrey J. Kosowsky wrote:
Les Mikesell wrote at about 12:08:22 -0500 on Tuesday, April 26, 2011:
On 4/26/2011 11:38 AM, Michael Conner wrote:
I installed BPC a few weeks ago and have been doing testing and setup
since then and have things working pretty well
-with-sshrsyncvss-on-windows-server/
--
Les Mikesell
lesmikes...@gmail.com
(or
raid-weirdness) happens.
--
Les Mikesell
lesmikes...@gmail.com
On 4/27/2011 3:54 PM, Jeffrey J. Kosowsky wrote:
Les Mikesell wrote at about 15:48:29 -0500 on Wednesday, April 27, 2011:
On 4/27/2011 3:18 PM, Jeffrey J. Kosowsky wrote:
Which is *precisely* what I was proposing except that in addition to
failing the device, I suggested
On 4/27/11 7:10 PM, Chris Parsons wrote:
On 28/04/2011 6:52 AM, Les Mikesell wrote:
that wasn't the case the OP was referring to.
I've forgotten the original context, but if it is setting up a new
system you don't have much to lose in the initial sync - and by the time
you do, you should
But, note that even though you don't technically have to stop/unmount
the raid while doing the sync, realistically it doesn't perform well
enough to do backups at the same time. I use a cron job to start the
sync very early in the morning so it will complete before backups would
start.
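e.g. a line along these lines in /etc/crontab (the script is hypothetical; what "start the sync" means depends on how your rotation works):

  # kick off the mirror resync at 04:00 so it finishes before backups start
  0 4 * * *   root   /usr/local/sbin/start-mirror-sync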
--
Les
of
things had similar limits at 2 or 4 gigs that should be gone in current
versions.
--
Les Mikesell
lesmikes...@gmail.com
is *broken*.
Yes, that doesn't seem right, unless perhaps it is while simultaneously
writing backups. The easiest fix might be to rip the drives out of the
badly-performing nfs server and put them in the linux box running backuppc.
--
Les Mikesell
lesmikes...@gmail.com
tree so both the old
mangled name and the expected new one appear (i.e. link the new name to the
existing one; you don't need to figure out the pool location) - but I really
don't know if that is good advice or not.
--
Les Mikesell
lesmikes...@gmail.com
, nmblookup fails; the hosts (external) must
have iptables rules that drop ICMP ping requests.
Is there a way to configure BackupPC to back up these hosts regardless?
See $Conf{PingPath} in http://backuppc.sourceforge.net/faq/BackupPC.html
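The default is just the ping binary, and the FAQ entry describes overriding it:

  $Conf{PingPath} = '/bin/ping';
  # list posts have suggested substituting a command that always succeeds when
  # ICMP is blocked, but whether that satisfies the ping-time parsing depends on the version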
--
Les Mikesell
lesmikes...@gmail.com
boxes. Sometimes
they will block or slow access to the point that the backup times out.
--
Les Mikesell
lesmikes...@gmail.com
cable adapter to
access them elsewhere if you want.
--
Les Mikesell
lesmikes...@gmail.com
command works?
--
Les Mikesell
lesmikes...@gmail.com
it doesn't matter that
much. Unless your backup server is extremely fast, the rsync-in-perl
will throttle things enough to not bother other systems much. If you
are concerned about saturating a WAN link, the rsync --bwlimit option
can be added.
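One hedged way to add it with the rsync-over-ssh method (the BackupPC 3.x default command plus the limit; 1000 is KBytes/sec and purely illustrative):

  $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath --bwlimit=1000 $argList+';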
--
Les Mikesell
lesmikes...@gmail.com
checked on the server to see
if those directories existed and they do not. Why would this be happening?
Perhaps the web server doesn't have read access to the backuppc
directories. Is this on a system with SELinux?
--
Les Mikesell
lesmikes...@gmail.com
is probably going to be limited to something in the same
building. Is this 2nd copy supposed to protect against a building
disaster or is it a staged copy that will somehow end up elsewhere?
--
Les Mikesell
lesmikes...@gmail.com
holding the archive will work as a backup. But in many cases the
best approach is to simply run an independent copy of backuppc from a
different location, connecting to the same targets over a WAN or VPN,
perhaps with the blackout periods skewed to avoid running at the same times.
--
Les
data will
already be compressed and it would give rsync a better chance at finding
matching blocks.
--
Les Mikesell
lesmikes...@gmail.com
--
Les Mikesell
lesmikes...@gmail.com