Brandon High writes:
On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai mov...@gmail.com wrote:
What is the upgrade path like from this? For example, currently I
The ashift is set in the pool when it's created and will persist
through the life of that pool. If you set it at pool creation,
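For context: ashift is the base-2 logarithm of the smallest write the vdev will issue, so 512-byte sectors give ashift=9 and 4K sectors give ashift=12 (on a live pool it shows up in the output of zdb for that pool). A minimal sketch of that relationship in plain POSIX shell, with a hypothetical sector size — on real hardware the value comes from the drive:

```shell
# ashift is log2(sector size): 512B -> 9, 4096B -> 12.
# Hypothetical sector size; a real drive reports its own.
sector=4096
ashift=0
s=$sector
while [ "$s" -gt 1 ]; do
  s=$((s / 2))
  ashift=$((ashift + 1))
done
echo "sector=$sector bytes -> ashift=$ashift"
```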
Do you know if these bugs are fixed in Solaris 11 Express?
It says it was fixed in snv_140, and S11E is based on snv_151a, so it
should be in:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6907687
I can confirm it works; iSCSI zpools seem to work very happily now.
Yours
Hi,
I must send a ZFS snapshot from one server to another. The snapshot is
130GB. My question: does ZFS have any limit on the size of a sent stream?
f...@ll
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 01/12/2010 13:36, f...@ll wrote:
I must send a ZFS snapshot from one server to another. The snapshot is
130GB. My question: does ZFS have any limit on the size of a sent stream?
No.
--
Darren J Moffat
f...@ll wrote:
Hi,
I must send a ZFS snapshot from one server to another. The snapshot is
130GB. My question: does ZFS have any limit on the size of a sent stream?
If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv') then no, there is no limit. If you however send
On 2010-12-01 15:19, Menno Lageman wrote:
f...@ll wrote:
Hi,
I must send a ZFS snapshot from one server to another. The snapshot is
130GB. My question: does ZFS have any limit on the size of a sent stream?
If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv')
In my situation it is the first option: I send the snapshot to another server
using zfs send | zfs recv, and the problem is that when the data transfer
completes, after a reboot the zpool has errors or is in the FAULTED state.
The first server is physical; the second is a virtual machine running under
XenServer 5.6.
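One way to narrow down whether the receive itself damaged the pool is to checksum-verify the received data on the second server before rebooting it. A sketch, using hypothetical pool, dataset, and host names (tank, backup, backuphost) rather than the poster's actual setup:

```shell
# On the sender: snapshot and stream it to the receiver (hypothetical names).
zfs snapshot tank/data@xfer
zfs send tank/data@xfer | ssh backuphost zfs receive -F backup/data

# On the receiver: scrub reads and verifies every block's checksum.
zpool scrub backup
zpool status -x backup   # reports healthy only if the scrub found no errors
```

If the pool scrubs clean but still comes up FAULTED after a reboot, the problem is more likely in how the virtual disks are presented to the guest than in the send stream.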
What is
Hello,
I am attempting to move a bunch of ZFS filesystems from one pool to another.
Mostly this is working fine, but one collection of filesystems is causing me
problems, and repeated re-reading of man zfs and the ZFS Administrator's Guide
is not helping. I would really appreciate some
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don Jackson
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010
received
Here is some more info on my system:
This machine is running Solaris 10 U9, with all patches as of 11/10/2010.
The source zpool I am attempting to transfer from was originally created on an
older OpenSolaris (specifically Nevada) release; I think it was build 111.
I did a zpool export on that
kd == Krunal Desai mov...@gmail.com writes:
kd http://support.microsoft.com/kb/whatever
Dude. Seriously?
This is worse than a waste of time. Don't read a URL that starts this
way.
kd Windows 7 (even with SP1) has no support for 4K-sector
kd drives.
NTFS has 4KByte allocation
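Whether NTFS's 4 KByte clusters actually help on such a drive depends on the partition starting 4K-aligned; a misaligned start turns every cluster write into two physical-sector operations. A quick alignment check, with a hypothetical starting LBA (real values come from your partition tool, e.g. fdisk -l or prtvtoc):

```shell
# A partition is 4K-aligned if its byte offset is a multiple of 4096.
# start_lba is in 512-byte units; 2048 is a hypothetical example value.
start_lba=2048
if [ $(( (start_lba * 512) % 4096 )) -eq 0 ]; then
  echo "LBA $start_lba is 4K-aligned"
else
  echo "LBA $start_lba is NOT 4K-aligned"
fi
```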
t == taemun tae...@gmail.com writes:
t I would note that the Seagate 2TB LP has a 0.32% Annualised
t Failure Rate.
bullshit.
On 2 December 2010 16:17, Miles Nordin car...@ivy.net wrote:
t == taemun tae...@gmail.com writes:
t I would note that the Seagate 2TB LP has a 0.32% Annualised
t Failure Rate.
bullshit.
Apologies, that should have read: "Specified Annualised Failure Rate".
On 12/01/10 22:14, Miles Nordin wrote:
Also, did anyone ever clarify whether the slog has an ashift? Or is it
forced to 512? Or derived from whatever vdev will eventually contain the
separately-logged data? I would expect generalized immediate caring
about that, since no slogs except ACARD and