Hi!
My home server had some disk outages due to flaky cabling and whatnot, and
started resilvering to a spare disk. During this another disk or two
dropped, and were reinserted into the array. So no devices were actually
lost; each was just intermittently absent for a while.
The situation is
Anyone had any luck getting either OpenSolaris or FreeBSD with
zfs working on
http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-4248034.html
?
The Neo has a lot more oomph than the Atoms, and the box can
handle up to 8 GByte ECC memory.
--
Eugen*
Hi,
fyi
http://lwn.net/Articles/399148/
copyfile()
The reflink() http://lwn.net/Articles/333783/ system call was
originally proposed as a sort of fast copy operation; it would create a
new copy of a file which shared all of the data blocks. If one of the
files were subsequently written to,
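That block-sharing idea did land in userland tooling: GNU cp grew a --reflink option that asks the filesystem to share data blocks instead of copying them, with writes to either file then triggering copy-on-write. A small sketch (assumes GNU coreutils; --reflink=auto falls back to a plain copy where the filesystem, e.g. anything other than btrfs or OCFS2 at the time, lacks block cloning):

```shell
# Create a file, then make a reflink-style copy of it. With
# --reflink=auto the copy shares blocks where supported and silently
# falls back to a normal copy otherwise; --reflink=always would fail
# instead of falling back.
printf 'some data\n' > original.txt
cp --reflink=auto original.txt shared-copy.txt
# On a cloning filesystem, writing to either file now copies only the
# affected blocks (copy-on-write); the rest stay shared.
```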
On Mon, Sep 27, 2010 at 6:23 AM, Robert Milkowski mi...@task.gda.pl wrote:
[snip]
Also see http://www.symantec.com/connect/virtualstoreserver
And
http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/
--
Mike Gerdts
http://mgerdts.blogspot.com/
I just realized that the email I sent to David and the list did not make the
list (at least as jive can see it), so here is what I sent on the 23rd:
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
I am running nexenta CE 3.0.3.
I have a file system that at some point in the last week went from a directory
per 'ls -l' to a special character device. This results in not being able to
get into the file system. Here is my file system, scott2, along with a new file
system I just created,
On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
I am running nexenta CE 3.0.3.
I have a file system that at some point in the last week went from a
directory per 'ls -l' to a special character device. This results in not
being able to get into the file system. Here is my file
Is there a way to find out if a dataset has children or not using zfs
properties or other scriptable method?
I am looking for a more efficient way to delete datasets after they are
finished being used. Right now I use custom property to set delete=1 on a
dataset, and then I have a script that
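For what it's worth, a hedged sketch of one scriptable check: list the dataset at depth 1 and count the lines. It assumes a `zfs list` recent enough to support `-d` (depth) and `-H` (no header); `tank/test` is only a placeholder name:

```shell
# Decide whether a dataset has child datasets. -H drops the header,
# -d 1 limits recursion to direct children, so more than one line of
# output means at least one child exists.
has_children() {
    n=$(zfs list -H -r -d 1 -o name "$1" 2>/dev/null | wc -l)
    [ "$n" -gt 1 ]
}

if has_children tank/test; then
    echo "tank/test has children"
else
    echo "tank/test has no children (or does not exist)"
fi
```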
On 9/27/10 9:56 AM, Victor Latushkin victor.latush...@oracle.com wrote:
On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
I am running nexenta CE 3.0.3.
I have a file system that at some point in the last week went from a
directory per 'ls -l' to a special character device. This
134 it is. This is an OpenSolaris rig that's going to be replaced within the
next 60 days, so I just need to get it to something that won't throw false
checksum errors like the 120-123 builds do and has decent rebuild times.
Future boxes will be NexentaStor.
Thank you guys. :)
-J
On Sun, Sep
Err...I meant Nexenta Core.
-J
On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
134 it is. This is an OpenSolaris rig that's going to be replaced within
the next 60 days, so I just need to get it to something that won't throw
false checksum errors like
On 27/09/2010 18:14, Geoff Nordli wrote:
Is there a way to find out if a dataset has children or not using zfs
properties or other scriptable method?
I am looking for a more efficient way to delete datasets after they are
finished being used. Right now I use custom property to set delete=1
If one was sticking with OpenSolaris for the short term, is something older
than 134 more stable/less buggy? Not using de-dupe.
-J
On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling richard.ell...@gmail.com wrote:
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to
From: Darren J Moffat
Sent: Monday, September 27, 2010 11:03 AM
On 27/09/2010 18:14, Geoff Nordli wrote:
Is there a way to find out if a dataset has children or not using zfs
properties or other scriptable method?
I am looking for a more efficient way to delete datasets after they
are
hi all
I just setup this test box on OI. It has a couple of X25Ms, 80GB and eight 2TB
drives, two of them Hitachi Deskstar 7k2 drives and the other six WD Green. I
have done some tests on this with mirrors to compare the performance and those
tests conclude that the Hitachi drives are 25% or
On Sep 27, 2010, at 11:54 AM, Geoff Nordli wrote:
Are there any properties I can set on the clone side?
Each clone records its origin snapshot in the origin property.
$ zfs get origin syspool/rootfs-nmu-001
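For scripting against that, a sketch of reading the property non-interactively, assuming the standard `-H` (no header) and `-o value` flags; the dataset name is just the example from above, and non-clones report "-":

```shell
# Print the origin snapshot of a dataset, or "-" if it is not a clone
# (or the lookup fails).
origin_of() {
    zfs get -H -o value origin "$1" 2>/dev/null || echo "-"
}

o=$(origin_of syspool/rootfs-nmu-001)
if [ "$o" != "-" ]; then
    echo "clone of $o"
else
    echo "not a clone (or lookup failed)"
fi
```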
From: Richard Elling
Sent: Monday, September 27, 2010 1:01 PM
On Sep 27, 2010, at 11:54 AM, Geoff Nordli wrote:
Are there any properties I can set on the clone side?
Each clone records its origin snapshot in the origin property.
$ zfs get origin syspool/rootfs-nmu-001
NAME
Is this a sector size issue?
I see two of the disks each doing the same amount of work in roughly half the
I/O operations, with each operation taking about twice the time, compared to
each of the remaining six drives.
I know nothing about either drive, but I wonder if one type of drive has twice
the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd1 0.5 140.3 0.3 2426.3 0.0 1.0 7.2 0 14
sd2
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd1 0.5 140.3 0.3 2426.3 0.0 1.0 7.2 0 14
sd2 0.0 138.3 0.0 2476.3 0.0 1.5 10.6 0 18
sd3 0.0 303.9 0.0 2633.8 0.0 0.4 1.3 0 7
sd4 0.5 306.9 0.3 2555.8 0.0 0.4 1.2 0 7
sd5 1.0 308.5 0.5 2579.7
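One quick sanity check of that hypothesis: dividing kw/s by w/s in the output above gives the average write size per operation. A small sketch (field positions assume the `device r/s w/s kr/s kw/s ...` layout shown; the data is pasted from the lines above):

```shell
# Average KB per write (kw/s divided by w/s) for each device line of
# the iostat output; the first two header lines are skipped.
awk 'NR > 2 && $3 > 0 { printf "%s %.1f KB/write\n", $1, $5 / $3 }' <<'EOF'
                    extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd1 0.5 140.3 0.3 2426.3 0.0 1.0 7.2 0 14
sd2 0.0 138.3 0.0 2476.3 0.0 1.5 10.6 0 18
sd3 0.0 303.9 0.0 2633.8 0.0 0.4 1.3 0 7
sd4 0.5 306.9 0.3 2555.8 0.0 0.4 1.2 0 7
EOF
```

sd1 and sd2 come out around 17-18 KB per write versus 8-9 KB for sd3 and sd4 — consistent with half the operations at roughly twice the size each.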
On Mon, Sep 27, 2010 at 4:16 AM, Eugen Leitl eu...@leitl.org wrote:
Anyone had any luck getting either OpenSolaris or FreeBSD with
zfs working on
I looked at it some, and all the hardware should be supported. There
is a half-height PCIe x16 and a x1 slot as well.
-B
--
Brandon High :