I tried to buy another drive today (750GB or 1TB) to swap out c3t1d0 (750GB)
but could not find one quickly. So I was thinking as a temporary measure to
use my 1.5TB disk instead, as it is free at the moment (it is currently
attached to a sil3114 controller as c6d1p0).
Would it be ok to do a
Darren J Moffat darr...@opensolaris.org writes:
Kjetil Torgrim Homme wrote:
I don't know how tightly interwoven the dedup hash tree and the block
pointer hash tree are, or if it is all possible to disentangle them.
At the moment I'd say very interwoven by design.
conceptually it doesn't
Anil an...@entic.net writes:
If you have another partition with enough space, you could technically
just do:
mv src /some/other/place
mv /some/other/place src
Anyone see a problem with that? Might be the best way to get it
de-duped.
I get uneasy whenever I see mv(1) used to move
Ah!
Ok, I will give this a try tonight! Thanks.
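For what it's worth, the round trip can be sketched with plain mv(1); on ZFS, moving the data off the filesystem and back means the blocks are rewritten on the way back, which is what would let newly enabled dedup apply to them. The paths below are scratch placeholders, not the poster's actual data:

```shell
# mv across filesystems is a copy followed by an unlink, so the data is
# rewritten on return; within a single filesystem it is just a rename.
src=$(mktemp -d)            # stands in for the original location
other=$(mktemp -d)          # stands in for /some/other/place's parent
echo "example data" > "$src/file"
mv "$src" "$other/place"    # move the tree aside
mv "$other/place" "$src"    # move it back; the contents survive intact
cat "$src/file"
```

Note the usual caveats from the thread still apply: this needs enough free space for a full second copy, and an interrupted mv leaves the data split across two locations.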
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Cindy,
Thanks for the info and fixing the web site.
I'm still confused why there are two different things (zpool and zfs) that need
to be upgraded. For example, is there any reason I would want to upgrade the
zpool and NOT upgrade the zfs?
Thanks,
Doug
Hi Doug,
Some features are provided at the pool level and some features are
provided at the file system level so we have two upgrade paths.
I believe the fs versions were originally created to support ZFS
compatibility with other OSes, but I'm not so clear about this.
I can't think of any
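For reference, the two upgrade paths Cindy describes can be listed and run separately; a hedged sketch (pool name hypothetical):

```shell
# Pool-level features and filesystem-level features are versioned
# independently, hence the two commands:
zpool upgrade -v          # list supported pool versions and their features
zfs upgrade -v            # list supported filesystem versions
zpool upgrade mypool      # upgrade the pool format only
zfs upgrade -r mypool     # upgrade the pool's filesystems, recursively
```

Upgrading the pool without the filesystems is harmless; the reverse is not possible if the fs version needs pool support.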
$ zpool create dpool mirror c1t2d0 c1t3d0
$ zfs set mountpoint=none dpool
$ zfs create -o mountpoint=/export/zones dpool/zones
On Solaris 10 Update 8 when creating a zone with zonecfg and setting the
zonepath to /export/zones/test1 and then installing with zoneadm install, the
zfs zonepath file
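The zone-creation steps described would look roughly like this; the zone name test1 is from the post, but the exact subcommand sequence is an assumption from memory:

```shell
# Hedged sketch: create and install the zone under the dpool/zones dataset
zonecfg -z test1 'create; set zonepath=/export/zones/test1; verify; commit'
zoneadm -z test1 install
```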
Hi Tim,
I looked up the sil3114 controller and I found this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6813171
sil3114 sata controller not supported
If you can see this disk with format, then I guess I'm less uneasy, but
due to the hardware support issue, you might try to
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of two devices with dramatically different performance
for sub-4k transfers, has anyone done any ZFS benchmarks between the X25-E and
the F20 that they can share?
I am
A bug is being filed on this by Sun. A Senior Sun Engineer was able to
replicate the problem and the only work around they suggested was to
temporarily mount the parent filesystem on the pool. This applies to Sol 10
Update 8; not sure about anything else.
On Dec 18, 2009, at 9:40 AM, Stuart Anderson wrote:
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of two devices with dramatically different
performance for sub-4k transfers, has anyone done any ZFS
benchmarks
I am seeing this issue posted a lot in the forums:
A zpool add/replace command is run, for example:
zpool add archive spare c2t0d2
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d7s0 is part of active ZFS pool archive. Please see zpool(1M).
(-f just says:
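Before reaching for -f in a situation like the one above, it is worth checking what ZFS thinks is on the device; a hedged sketch (pool and device names from the error message):

```shell
# Is the device already a vdev in this (or any imported) pool?
zpool status archive
# Dump any ZFS labels left on the disk from a previous pool:
zdb -l /dev/dsk/c2t1d7s0
```

If the label is stale (left over from a destroyed or exported pool), -f overrides the check; if the device genuinely belongs to the active pool, forcing would be a mistake.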
Hi Cindy,
I had similar concerns, however I wasn't aware of that bug. Before I bought
this controller I had read a number of people saying that they had problems, and
then other people saying they didn't have problems with the sil3114. I was
originally after a sil3124 (SATA II) but given my future
d == Doug d...@yahoo.com writes:
d is there any reason I would want to upgrade the zpool and NOT
d upgrade the zfs?
in theory/hope zfs send streams depend only on the ZFS version being
sent, not on the kernel build or zpool version. In practice I doubt
it's perfectly true across
Hi Tim,
The p* devices represent the larger Solaris fdisk container, so a possible
scenario is that someone could create a pool that includes a p0 device,
which might point to the same blocks as another partition in that
container that is also included in the pool.
This
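In practice, the overlap risk described above is avoided by giving ZFS the whole disk rather than a p* or s* device; a hedged sketch using the disk from this thread (pool name hypothetical):

```shell
# Whole-disk name: ZFS writes an EFI label and manages the slices itself
zpool create temppool c6d1
# versus c6d1p0 (the whole fdisk container) or c6d1s2 (the backup slice),
# either of which can alias blocks that other partitions/slices also cover
```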
there's actually no device c6d1 in /dev/dsk, only:
t...@opensolaris:/dev/dsk$ ls -l c6d1*
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p0 ->
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:q
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p1 ->
should I use slice 2 instead of p0:
Part       Tag    Flag    Cylinders       Size    Blocks
  0  unassigned    wm     0               0       (0/0/0)  0
  1  unassigned    wm     0               0       (0/0/0)  0
  2      backup    wu     0 - 60796
I had referred to this blog entry:
http://blogs.sun.com/observatory/entry/which_disk_devices_to_use
hmm ok, the replace with the existing drive still in place wasn't the best
option... it's replacing, but very slowly as it's reading from that suspect disk:
pool: storage
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly
On snv_129, a zfs upgrade (*not* a zpool upgrade) from version 3 to version 4
caused the
desktop to freeze - no response to keyboard or mouse events and clock not
updated.
ermine% uname -a
SunOS ermine 5.11 snv_129 i86pc i386 i86pc
ermine% zpool upgrade
This system is currently running ZFS
Ok, I have started my import after using the -k on my kernel line (I did a
test dump using this method just to make sure it works ok, and it does).
I have also added the following to my /etc/system file and rebooted:
set snooping=1
According to this page:
I've taken to creating an unmounted empty filesystem with a
reservation to prevent the zpool from filling up. It gives you
behavior similar to ufs's reserved blocks.
So ... Something like this?
zpool create -m /path/to/mountpoint myzpool c1t0d0
and then... Assuming it's a 500G disk ...
zfs
On Fri, Dec 18, 2009 at 7:44 PM, Edward Ned Harvey
sola...@nedharvey.com wrote:
So ... Something like this?
zpool create -m /path/to/mountpoint myzpool c1t0d0
and then... Assuming it's a 500G disk ...
zfs create -V 50G /path/to/mountpoint/unused
zfs create /path/to/mountpoint/importantdata
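Note that -V creates a zvol; the unmounted-filesystem variant described earlier would instead use a reservation, which is what mimics ufs's reserved blocks. A hedged sketch, reusing the names and 50G figure from the quoted example:

```shell
zpool create -m /path/to/mountpoint myzpool c1t0d0
# An unmounted, empty filesystem whose reservation keeps the pool from
# filling completely:
zfs create -o mountpoint=none -o reservation=50G myzpool/unused
zfs create myzpool/importantdata
# In an emergency, reclaim the space:
#   zfs set reservation=none myzpool/unused
```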
Stacy Maydew wrote:
The commands zpool list and zpool get dedup pool both show a ratio of 1.10.
So thanks for that answer. I'm a bit confused, though: if dedup is applied
per zfs filesystem, not per zpool, why can I only see the dedup ratio on a
per-pool basis rather than for each zfs filesystem?
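The asymmetry shows up in the properties themselves: dedup on/off is a per-dataset setting, but the dedup table (DDT) is shared pool-wide, so the ratio is only accounted per pool. A sketch (pool and filesystem names hypothetical):

```shell
# Per-dataset: whether writes to each filesystem are deduplicated
zfs get dedup pool/fs1 pool/fs2
# Pool-wide: the ratio, since the DDT spans the whole pool
zpool get dedupratio pool
zpool list -o name,size,alloc,dedup pool
```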