Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us:
What function is the system performing when it is so busy?
The work load of the server is SMTP mail server, with associated spam
and virus scanning, and serving maildir email via POP3 and IMAP.
Wrong conclusion. I am not sure what
Ok, think I have the biggest issue. The drives are 4k sector drives,
and I wasn't aware of that. My fault, I should have checked this. I've
had the disks for ages and they are sub-1TB, so I assumed they wouldn't
be 4k drives...
I will obviously have to address this, either by creating a pool
Basically I think yes you need to add all the vdevs you require in the
circumstances you describe.
You just have to consider what ZFS is able to do with the disks that
you give it. If you have 4x mirrors to start with then all writes will
be spread across all disks and you will get nice
It is a 4k sector drive, but I thought zfs recognised those drives and didn't
need any special configuration...?
4k drives are a big problem for ZFS; much has been posted/written
about it. Basically, if the 4k drives report 512 byte blocks, as they
almost all do, then ZFS does not detect the real 4k sector size and
aligns writes for 512-byte sectors, which hurts performance.
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
My question is about the initial seed of the data. Is it possible
to use a portable drive to copy the initial zfs filesystem(s) to the
remote location and then make the subsequent incrementals over the
network? If so, what would I
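One approach that should work (a sketch; 'tank/data', 'backup' and the file names are placeholders) is to dump the full stream to a file on the portable drive, receive it at the remote site, and then send only incrementals over the network:
# On the source host: write the initial full stream to the portable drive
zfs snapshot tank/data@seed
zfs send tank/data@seed > /portable/tank-data-seed.zfs
# At the remote site: restore the seed from the portable drive
zfs receive backup/data < /portable/tank-data-seed.zfs
# From then on, only the incremental deltas cross the network
zfs snapshot tank/data@daily1
zfs send -i @seed tank/data@daily1 | ssh remotehost zfs receive backup/data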
Hi,
I am using FreeBSD 8.2 in production with ZFS. Although I have had
one issue with it in the past, I would still recommend it and I consider
it production ready. That said, if you can wait for FreeBSD 8.3 or 9.0
to come out (a few months away) you will get a better system as these
will
Hi,
see the seeksize script on this URL:
http://prefetch.net/articles/solaris.dtracetopten.html
Not used it but looks neat!
cheers Andy.
Still I wonder what Gartner means by Oracle monetizing on ZFS...
It simply means that Oracle wants to make money from ZFS (as is normal
for technology companies with their own technology). The reason this
might cause uncertainty for ZFS is that maintaining or helping make
the open source
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B
Just to check, did you already try:
zpool import -d /dev/zvol/rdsk/pool/ poolname
?
thanks Andy.
Hi there
Can a ZFS root participate in Live Upgrade, that is does luupgrade
understand a ZFS root?
T
Tabriz Leman wrote:
All,
For those who haven't already gone through the painful manual process
of setting up a ZFS Root, Tim Foster has put together a script. It is
available on his
I currently have a system which has two ZFS storage pools. One of the pools is
coming from a faulty piece of hardware. I would like to bring up our server
mounting the storage pool which is okay and NOT mounting the one from the
hardware with problems. Is there a simple way to NOT
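One simple way (a sketch; 'badpool' and 'goodpool' are placeholder names) is to export the pool on the faulty hardware, which drops it from /etc/zfs/zpool.cache so it is not imported and mounted automatically at boot:
# Export the pool on the failing hardware; it will not be auto-imported at boot
zpool export badpool
# The healthy pool stays in the cache file and mounts as usual
zpool status goodpool
# When the hardware is fixed, bring the pool back
zpool import badpool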
I have run zpool scrub again, and I now see checksum errors again. Wouldn't
the checksum errors have gotten fixed with the first zpool scrub?
Can anyone recommend actions I should do at this point?
Thanks,
David
Thank you to everyone that has replied. It sounds like I have a few options
with regards to upgrading or just waiting and patching the current environment.
David
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to
Solaris+ZFS, because it seems to hold much better promise for data integrity,
which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't
understand why the results
What build/version of Solaris/ZFS are you using?
Solaris 11/06.
bash-3.00# uname -a
SunOS nitrogen 5.10 Generic_118855-33 i86pc i386 i86pc
bash-3.00#
What block size are you using for writes in bonnie++?
I find performance on streaming writes is better w/
larger writes.
I'm afraid I
Hello all,
Spent the last several hours perusing the ZFS forums and some of the blog
entries regarding ZFS. I have a couple of questions and am open to any hints,
tips, or things to watch out for on implementation of my home file server. I'm
building a file server consisting of an Asus P5WD2
The original thought was 3 of the drives as storage, and one of the drives as
parity. So that would yield around 1.4TB of useable storage. I hadn't given
any thought to running 64 bit. This system is being built from the ground up.
I guess in the back of my head I had assumed it would be 32
Sorry about that, the specific processor in question is the Pentium D 930 which
supports 64 bit computing through the Extended Memory 64 Technology. It was my
initial reaction to say I'd go with 32 bit computing because my general
experience with 64-bit is Windows, Linux, and some FreeBSD.
Thanks for the continuing flow of information. I already have all of the
equipment. I'm actually upgrading my main computer to a new Core 2 Duo setup
which is why this hardware is going to the file server. I think I'm going to
try a 64bit install using the four 500GB drives in a RAID-Z
I seem to have got the same core dump, in a different way.
I had a zpool setup on a iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html
But after a reboot the iscsi target was no longer available, so the iscsi
initiator could not provide the
the path)
My sata drive is using the 'ahci' driver, connecting to the
ICH7 chipset on the motherboard.
And I have a scsi drive on a Adaptec card, plugged into a PCI slot.
Thanks
Nigel Smith
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 DEFAULT cyl 2229 alt 2 hd 255
I was wondering if anyone had a script to parse the zpool status -v output
into a more machine readable format?
Thanks,
David
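Not a full script, but something along these lines (an untested sketch) pulls the per-device lines out of the config section into tab-separated fields:
zpool status -v | awk '
  /^config:/ { incfg = 1; next }     # start of the config section
  /^errors:/ { incfg = 0 }           # end of the config section
  incfg && NF >= 5 && $1 != "NAME" {
      # name, state, read, write, cksum
      printf "%s\t%s\t%s\t%s\t%s\n", $1, $2, $3, $4, $5
  }'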
I was in the process of doing a large zfs send | zfs receive when I decided
that I wanted to terminate the zfs send process. I killed it, but the zfs
receive doesn't want to die... In the meantime my zfs list command just hangs.
Here is the tail end of the truss output from a truss zfs
I don't believe LUN expansion is quite yet possible under Solaris 10 (11/06).
I believe this might make it into the next update but I'm not sure on that.
Someone from Sun would need to comment on when this will make it into the
production release of Solaris.
I know this because I was working
Well, the zfs receive process finally died, and now my zfs list works just fine.
If there is a better way to capture what is going on, please let me know and I
can duplicate the hang.
David
You can see the status of bug here:
http://bugs.opensolaris.org/view_bug.do?bug_id=6566207
Unfortunately, it's showing no progress since 20th June.
This fix really needs to be in place for S10u4 and snv_70.
Thanks
Nigel Smith
What are your thoughts or recommendations on having a zpool made up of
raidz groups of different sizes? Are there going to be performance issues?
For example:
pool: testpool1
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ
We have an Oracle 10.2.0.3 installation on a Sun T2000 logical domain. The
virtual disk has a ZFS file system. When we try to create a tablespace we get
these errors:
WARNING: aiowait timed out 1 times
WARNING: aiowait timed out 2 times
WARNING: aiowait timed out 3 times
...
Does
To list your snapshots:
/usr/sbin/zfs list -H -t snapshot -o name
Then you could use that in a for loop:
for i in `/usr/sbin/zfs list -H -t snapshot -o name` ;
do
    echo "Destroying snapshot: $i"
    /usr/sbin/zfs destroy "$i"
done
The above would destroy all your snapshots. You could put a grep on the
zfs list output to limit which snapshots are destroyed, for example:
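As a sketch (the 'tank/home@daily-' pattern is only an illustration; match whatever naming scheme your snapshots actually use):
for i in `/usr/sbin/zfs list -H -t snapshot -o name | grep '^tank/home@daily-'` ;
do
    echo "Destroying snapshot: $i"
    /usr/sbin/zfs destroy "$i"
done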
Yes, I'm not surprised. I thought it would be a RAM problem.
I always recommend a 'memtest' on any new hardware.
Murphy's law predicts that you only have RAM problems
on PCs that you don't test!
Regards
Nigel Smith
Richard, thanks for the pointer to the tests in '/usr/sunvts', as this
is the first I have heard of them. They look quite comprehensive.
I will give them a trial when I have some free time.
Thanks
Nigel Smith
pmemtest - Physical Memory Test
ramtest - Memory DIMMs (RAM) Test
back to this forum, and on the 'Storage-discuss' forum where these sorts
of questions are more usually discussed.
Thanks
Nigel Smith
Please can you provide the source code for your test app.
I would like to see if I can reproduce this 'crash'.
Thanks
Nigel
Upgrade to snv_70 or later.
Regards,
Nigel Smith
/pipermail/onnv-notify/2007-October/012782.html
Regards
Nigel Smith
for a different hard disk controller card:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html
Regards
Nigel Smith
And are you seeing any error messages in '/var/adm/messages'
indicating any failure on the disk controller card?
If so, please post a sample back here to the forum.
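Something like the following (just a sketch; adjust the patterns to your own controller and driver names) pulls out the recent complaints:
# Show recent kernel warnings/errors mentioning the disk or controller
egrep -i 'warning|error|sata|scsi' /var/adm/messages | tail -20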
(PCI-E)
and SiI-3124 (PCI-X) devices.
2. The AHCI driver, which supports the Intel ICH6 and later devices, often
found on motherboards.
4. The NV_SATA driver which supports Nvidia ck804/mcp55 devices.
Regards
Nigel Smith
Presumably the labels are somehow confused,
especially for your USB drives :-(
Regards
Nigel Smith
I would like advice about how to replace a raid 0 lun. The lun is basically a
raid 0 lun which is from a single disk volume group / volume from our Flexline
380 unit. So every disk in the unit is a volume group/volume/lun mapped to the
host. We then let ZFS do the raid.
We have a lun now
Additional information:
It looks like perhaps the original drive is in use, and the hot spare is
assigned but not in use; see the zpool iostat output below:
raidz2      2.76T  4.49T      0      0  29.0K  18.4K
  c10t600A0B80001139967CF945E80E95d0      -      -      0
Yes! That worked to get the spare back to an available state. Thanks!
So that leaves me trying to put together a recommended procedure to
replace a failed lun/disk from our Flexline 380. Does anyone have a
configuration in which they are using a RAID 0 lun, which they need to
Roman
I didn't think that we had Live Upgrade support for a zfs root filesystem yet.
T
Roman Morokutti wrote:
# lucreate -n B85
Analyzing system configuration.
Hi,
after typing
# lucreate -n B85
I get the following error:
No name for current boot environment.
INFORMATION: The
Hi folks,
I use an iSCSI disk mounted onto a Solaris 10 server. I installed a ZFS file
system into s2 of the disk. I exported the disk and cloned it on the iSCSI
target. The clone is a perfect copy of the iSCSI LUN and therefore has the
same zpool name and guid.
My question is: is there
Hello, I am fairly new to Solaris and ZFS. I am testing both out in a sandbox
at work. I am playing with virtual machines running on a Windows front-end that
connects to a zfs back-end for its data needs. As far as I know my two options
are sharesmb and shareiscsi for data sharing. I have a
Justin,
Thanks for the reply
In the environment I currently work in, the powers that be are almost
completely anti-unix. Installing the nfs client on all machines would take
a real good sales pitch. Nonetheless, I am still playing with the client
in our sandbox. As I install this on a test
Hi All,
I'm new to ZFS but I'm intrigued by the possibilities it presents.
I'm told one of the greatest benefits is that, instead of setting
quotas, each user can have their own 'filesystem' under a single pool.
This is obviously great if you've got 10 users but what if you have
10,000? Are
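For reference, the per-user-filesystem approach is normally scripted; a minimal sketch, assuming a pool called 'tank' and a list of user names in users.txt (both placeholders):
while read u ; do
    # one filesystem per user, with an optional per-user quota
    zfs create -o quota=10G tank/home/"$u"
done < users.txt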
bits vs bytes D'oh! again. It's a good job I don't do these calculations
professionally. :-)
It sounds like you might be interested to read up on Eric Schrock's work. I
read today about some of the stuff he's been doing to bring integrated fault
management to Solaris:
http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
His last paragraph is great to see, Sun
File Browser is the name of the program that Solaris opens when you open
Computer on the desktop. It's the default graphical file manager.
It does eventually stop copying with an error, but it takes a good long while
for ZFS to throw up that error, and even when it does, the pool doesn't
Which OS and revision? -- richard
snv_91. I downloaded snv_94 today so I'll be testing with that tomorrow.
A little more information today. I had a feeling that ZFS would continue quite
some time before giving an error, and today I've shown that you can carry on
working with the filesystem for at least half an hour with the disk removed.
I suspect on a system with little load you could carry on
I agree that device drivers should perform the bulk of the fault monitoring,
however I disagree that this absolves ZFS of any responsibility for checking
for errors. The primary goal of ZFS is to be a filesystem and maintain data
integrity, and that entails both reading and writing data to
I was able to reproduce this in b93, but might have a different
interpretation of the conditions. More below... Ross Smith wrote: A little more
Hey Brent,
On the Sun hardware like the Thumper you do get a nice bright blue ready to
remove led as soon as you issue the cfgadm -c unconfigure xxx command. On
other hardware it takes a little more care; I'm labelling our drive bays up
*very* carefully to ensure we always remove the right
Sorry Ian, I was posting on the forum and missed the word disks from my
previous post. I'm still not used to Sun's mutant cross of a message board /
mailing list.
Ross
Hi Matt,
If it's all 3 disks, I wouldn't have thought it likely to be disk errors, and I
don't think it's a ZFS fault as such. You might be better posting the question
in the storage or help forums to see if anybody there can shed more light on
this.
Ross
Just a thought, before I go and wipe this zpool, is there any way to manually
recreate the /etc/zfs/zpool.cache file?
Ross
Ross Smith wrote: Just a thought, before I go and wipe this zpool, is there
any way to manually recreate the /etc/zfs/zpool.cache file?
Do you have a copy in a snapshot? ZFS for root is awesome! -- richard
Hmm... got a bit more information for you to add to that bug I think.
Zpool import also doesn't work if you have mirrored log devices and either one
of them is offline.
I created two ramdisks with:
# ramdiskadm -a rc-pool-zil-1 256m
# ramdiskadm -a rc-pool-zil-2 256m
And added them to the
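Presumably the command that followed was along these lines (a sketch; 'rc-pool' is a guess at the pool name, and the device paths are what ramdiskadm creates):
# zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 /dev/ramdisk/rc-pool-zil-2
(The mirror keyword makes the two ramdisks a single mirrored log vdev.)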
Oh god no, I'm already learning three new operating systems, now is not a good
time to add a fourth.
Ross -- Windows admin now working with Ubuntu, OpenSolaris and ESX
Without fail, cfgadm changes the status from disk to sata-port when I
unplug a device attached to port 6 or 7, but most of the time unplugging
disks 0-5 results in no change in cfgadm, until I also attach disk 6 or 7.
That does seem inconsistent, or at least, it's not what I'd expect.
Yup, you got it, and an 8 disk raid-z2 array should still fly for a home system
:D I'm guessing you're on gigabit there? I don't see you having any problems
hitting the bandwidth limit on it.
Ross
That sounds absolutely perfect Tim, thanks.
Yes, we'll be sending these to other zfs filesystems, although I haven't looked
at the send/receive part of your service yet. What I'd like to do is stage the
send/receive as files on an external disk, and then receive them remotely from
that.
Hi guys,
Bob, my thought was to have this timeout as something that can be optionally
set by the administrator on a per pool basis. I'll admit I was mainly thinking
about reads and hadn't considered the write scenario, but even having thought
about that it's still a feature I'd like. After
Triple mirroring you say? That'd be me then :D
The reason I really want to get ZFS timeouts sorted is that our long term goal
is to mirror that over two servers too, giving us a pool mirrored across two
servers, each of which is actually a zfs iscsi volume hosted on triply mirrored
disks.
Hey Tim,
I'll admit I just quoted the blog without checking, I seem to remember the
sales rep I spoke to recommending putting aside 20-50% of my disk for
snapshots. Compared to ZFS where I don't need to reserve any space it feels
very old fashioned. With ZFS, snapshots just take up as much
Thinking about it, we could make use of this too. The ability to add a
remote iSCSI mirror to any pool without sacrificing local performance
could be a huge benefit.
Oh cool, that's great news. Thanks Eric.
help.
Nick Smith
I'm using 2008-05-07 (latest stable), am I right in assuming that one is ok?
Thanks, that got it working. I'm still only getting 10MB/s, so it's not solved
my problem - I've still got a bottleneck somewhere, but mbuffer is a huge
improvement over standard zfs send / receive. It makes such a difference when
you can actually see what's going on.
Try to separate the two things: (1) Try /dev/zero - mbuffer --- network
--- mbuffer /dev/null
That should give you wirespeed
I tried that already. It still gets just 10-11MB/s from this server.
I can get zfs send / receive and mbuffer working at 30MB/s though from a couple
of test servers
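For reference, that kind of raw network test looks roughly like this (a sketch; host name, port and buffer sizes are placeholders):
# On the receiving box: listen on TCP port 9090 and discard the data
mbuffer -I 9090 -s 128k -m 512M > /dev/null
# On the sending box: push zeros across the wire to measure raw throughput
dd if=/dev/zero bs=128k | mbuffer -s 128k -m 512M -O receiver-host:9090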
Oh dear god. Sorry folks, it looks like the new hotmail really doesn't play
well with the list. Trying again in plain text:
Try to separate the two things:
(1) Try /dev/zero - mbuffer --- network --- mbuffer /dev/null
That should give you wirespeed
I tried that already. It still
to show the iscsi session has dropped out,
and the initiator is auto retrying to connect to the target,
but failing. It may help to get a packet capture at this stage
to try to see why the logon is failing.
Regards
Nigel Smith
Tano, based on the above, I would say you need
unique GUIDs for two separate Targets/LUNs.
Best Regards
Nigel Smith
http://nwsmith.blogspot.com/
hard drives would not help,
as the bios update may cause an identical problem
with each drive.)
Good Luck
Nigel Smith
could check the Solaris iSCSI target works ok under stress
from something other than ESX, like say the Windows iSCSI initiator.
Regards
Nigel Smith
Following Eugene's report, I'm beginning to fear that some sort of regression
has been introduced into the iscsi target code...
Regards
Nigel Smith
the snv_93 and snv_97 iscsi target to work
well with the VMware ESX and Microsoft initiators.
So it is a surprise to see these problems occurring.
Maybe some of the more recent builds, snv_98 or 99, have
'fixes' that have caused the problem...
Regards
Nigel Smith
Hi Tano
I will have a look at your snoop file.
(Tomorrow now, as it's late in the UK!)
I will send you my email address.
Thanks
Nigel Smith
No problem. I didn't use mirrored slogs myself, but that's certainly
a step up for reliability.
It's pretty easy to create a boot script to re-create the ramdisk and
re-attach it to the pool too. So long as you use the same device name
for the ramdisk you can add it each time with a simple
the capture.
You can then use Ethereal or Wireshark to analyze the capture file.
On the 'Analyze' menu, select 'Expert Info'.
This will look through all the packets and will report
any warning or errors it sees.
Regards
Nigel Smith
'prstat' to see if it gives any clues.
Presumably you are using ZFS as the backing store for iSCSI, in
which case, maybe try with a UFS formatted disk to see if that is a factor.
Regards
Nigel Smith
If you're using Solaris, maybe try 'prtvtoc'.
http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view
(Unless someone knows a better way?)
Thanks
Nigel Smith
# prtvtoc /dev/rdsk/c1t1d0
* /dev/rdsk/c1t1d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 1465149168 sectors
* 1465149101 accessible
/2007/660/onepager/
http://bugs.opensolaris.org/view_bug.do?bug_id=5044205
Regards
Nigel Smith
a 'zpool status')
Thanks
Nigel Smith
'status' of your zpool on Server2?
(You have not provided a 'zpool status')
Thanks
Nigel Smith
'smartctl' (fully) working with PATA and
SATA drives on x86 Solaris.
I've done a quick search on PSARC 2007/660, and it was
closed as approved (fast-track) on 11/28/2007.
I did a quick search, but I could not find any code that had been
committed to 'onnv-gate' that references this case.
Regards
Nigel Smith
Hi Tano
Please check out my post on the storage-forum for another idea
to try which may give further clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html
Best Regards
Nigel Smith
method used for this file is 98.
Please can you check it out, and if necessary use a more standard
compression algorithm.
Download File Size was 8,782,584 bytes.
Thanks
Nigel Smith
, that's my conclusion for now.
Maybe you could get some more snoop captures with other clients, and
with a different switch, and do a similar analysis.
Regards
Nigel Smith
is closed source :-(
Regards
Nigel Smith
be interesting to do two separate captures - one on the client
and one on the server, at the same time, as this would show if the
switch was causing disruption. Try to have the clocks on the client
and server synchronised as closely as possible.
Thanks
Nigel Smith
' for the network card, just in case it turns out to be a driver bug.
Regards
Nigel Smith
any good while that is happening.
I think you need to try a different network card in the server.
Regards
Nigel Smith
Snapshots are not replacements for traditional backup/restore features.
If you need the latter, use what is currently available on the market.
-- richard
I'd actually say snapshots do a better job in some circumstances.
Certainly they're being used that way by the desktop team:
If the file still existed, would this be a case of redirecting the
file's top level block (dnode?) to the one from the snapshot? If the
file had been deleted, could you just copy that one block?
Is it that simple, or is there a level of interaction between files
and snapshots that I've