Hi,
I have a Solaris 10 x86 server with 2 hard disks running on mirrored UFS
configuration.
Currently we are trying to implement an OS backup solution using Networker 7.6.
I can successfully backup the OS to a remote Networker server.
But now the trouble is if I need to perform a full Solaris
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
with dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
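For reference, a minimal sketch of enabling and inspecting dedup on such a
pool (the pool name tank is an assumption, not from this message):

  # enable dedup for new writes to the pool
  zfs set dedup=on tank
  # pool-wide dedup ratio achieved so far
  zpool get dedupratio tank
  # deduplication table (DDT) histogram and in-core footprint
  zdb -DD tank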
One issue I ran into, however, is quite baffling:
With
Hello all,
I'm having a problem that I find difficult to diagnose.
I have an IBM x3550 M3 running Nexenta Core Platform 3.0.1 (134f) with 7x6
disk RAIDZ2 vdevs (see listing at bottom).
Every day a disk fails with "Too many checksum errors", is marked as
degraded, and is rebuilt onto a hot spare. I've
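For reference, a sketch of the commands usually used to localize this kind
of recurring checksum failure on a Solaris-derived system (the pool name
tank is an assumption):

  # which vdev is counting the errors, and which files are affected
  zpool status -v tank
  # FMA ereports for the underlying device-level errors
  fmdump -eV | more
  # per-device soft/hard/transport error counters from the drivers
  iostat -En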
On 01/24/11 09:13 PM, Ddl wrote:
Hi,
I have a Solaris 10 x86 server with 2 hard disks running on mirrored UFS
configuration.
Currently we are trying to implement an OS backup solution using Networker 7.6.
I can successfully backup the OS to a remote Networker server.
But now the trouble is
I have a pool tank, and dir1 is the filesystem on that pool. zfs list
and df -h both show tank/dir1 mounted.
-
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        124K   228G    32K  /tank
tank/dir1    31K   228G    31K  /tank/dir1
#

# df
Hey all,
I have a 10 TB root pool set up like so:
  pool: s78
 state: ONLINE
 scrub: resilver completed after 2h0m with 0 errors on Wed Jan 19 22:04:39 2011
config:

        NAME        STATE     READ WRITE CKSUM
        s78         ONLINE       0     0     0
          mirror    ONLINE       0
On 01/25/11 06:52 AM, Ashley Nicholls wrote:
Hello all,
I'm having a problem that I find difficult to diagnose.
I have an IBM x3550 M3 running Nexenta Core Platform 3.0.1 (134f) with
7x6 disk RAIDZ2 vdevs (see listing at bottom).
Every day a disk fails with "Too many checksum errors", is
On Mon, 2011-01-24 at 13:56 -0800, Phillip V wrote:
Hey all,
I have a 10 TB root pool set up like so:
  pool: s78
 state: ONLINE
 scrub: resilver completed after 2h0m with 0 errors on Wed Jan 19 22:04:39 2011
config:

        NAME        STATE     READ WRITE CKSUM
        s78
There is only one pool and hundreds of ZFS file systems under that
pool. New file systems are getting created on the fly.
Is it possible to automate zfs incremental send/recv in this scenario?
My assumption is no, as incremental send/recv needs a full
snapshot to be sent first before
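For reference, a sketch of the recursive-replication approach usually
suggested for this (the pool name tank, destination pool backup, host
backuphost, and snapshot names are assumptions): zfs send -R packages the
whole dataset tree below one recursive snapshot, so file systems created
on the fly are carried inside the stream as full streams.

  #!/bin/ksh
  # one replication pass; $1 names the snapshot taken by the previous pass
  POOL=tank
  PREV=$1                                   # e.g. repl-20110124
  CUR=repl-`date +%Y%m%d`

  zfs snapshot -r $POOL@$CUR                # snapshot every file system at once
  zfs send -R -I @$PREV $POOL@$CUR | \
      ssh backuphost zfs recv -Fdu backup   # -F also destroys deleted datasets
  zfs destroy -r $POOL@$PREV                # old baseline is no longer needed

With -R plus recv -F, file systems destroyed on the sender are destroyed on
the receiver as well, which also answers the deletion question below.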
On 01/25/11 12:30 PM, Rahul Deb wrote:
There is only one pool and hundreds of ZFS file systems under that
pool. New file systems are getting created on the fly.
Is it possible to automate zfs incremental send/recv in this scenario?
My assumption is no, as incremental send/recv needs a
And does it handle the deletions as well?
Bryan
On Tue, Jan 25, 2011 at 12:34:57PM +1300, Ian Collins wrote:
On 01/25/11 12:30 PM, Rahul Deb wrote:
There is only one pool and hundreds of ZFS file systems under that
pool. New file systems are getting created on the fly.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ddl
But now the trouble is: if I need to perform a full Solaris OS restore, I
need to perform an installation of the Solaris 10 base OS and install the
Networker 7.6 client to call back the data
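As a rough sketch, that recovery sequence would look something like this
(the server name nwserver01 and the media path are assumptions; LGTOclnt
is the NetWorker client package):

  # 1. install a minimal Solaris 10 base OS from media on the new disk(s)
  # 2. add the NetWorker 7.6 client package
  pkgadd -d /cdrom/cdrom0 LGTOclnt
  # 3. point the recover CLI at the NetWorker server and pull the data back
  recover -s nwserver01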
comment below...
On Jan 24, 2011, at 1:58 PM, Rahul Deb wrote:
I have a pool tank, and dir1 is the filesystem on that pool. zfs list
and df -h both show tank/dir1 mounted.
-
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        124K   228G    32K  /tank
Thanks, Richard, for the prompt response.
But the second time, the same command creates dir3 too.
I mean to say, as I said earlier, the first time it gives the mounting error
and does not create dir3:
# zfs create -p tank/dir1/dir2/dir3
cannot mount '/tank/dir1/dir2': directory is not empty
#
but if I
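The usual first step with this error is to see what is making the mount
point non-empty; ZFS can also overlay-mount on top of a non-empty
directory. A minimal sketch using the paths from the error above:

  # what is occupying the mount point?
  ls -la /tank/dir1/dir2
  # the datasets may exist even though the mount failed
  zfs list -r tank/dir1
  # retry the mount after clearing the leftover contents, or
  zfs mount tank/dir1/dir2
  # force an overlay mount on top of the non-empty directory
  zfs mount -O tank/dir1/dir2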
Thanks, Ian, for your response.
So you are saying that if I create a recursive snapshot of the pool, it will
be able to do the incremental send/recv for the file systems created on the
fly?
I was thinking that if the file systems are created on the fly, then there
is no previous snapshot for the newly
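A small demonstration of why the missing earlier snapshot is not a problem
in a replication stream (the names tank, newfs, s1 and s2 are assumptions):
a dataset that lacks the base snapshot is simply embedded as a full stream.

  # baseline recursive snapshot of every existing file system
  zfs snapshot -r tank@s1
  # a file system created on the fly, after s1
  zfs create tank/newfs
  # newfs now has @s2 but no @s1
  zfs snapshot -r tank@s2
  # the -R incremental embeds a full stream for tank/newfs
  zfs send -R -i s1 tank@s2 > /tmp/incr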