So Cindy, Simon (or anyone else)... now that we are over a year past when Simon
wrote his excellent blog introduction, are there updated best practices for
ACLs with CIFS? Or is this blog entry still the best word on the street?
In my case, I am supporting multiple PCs (Workgroup) and Macs;
On Sun, Sep 12, 2010 at 10:07 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
No replies. Does this mean that you should avoid large drives with 4KB
sectors, that is, new drives? Does ZFS not handle new drives?
Solaris 10u9 handles 4k sectors, so it might be in a post-b134 release of
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-user,
single-tasking world (PeeCees). In that world, only one thread writes files
from one application at one time. In those cases, there is a reasonable
expectation that
That sounds strange. What happened? Did you use raidz1?
You can roll your pool back to an earlier snapshot. Have you tried that? Or, you
can rewind the pool to its state as of the last 30 seconds or so, I think.
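A minimal sketch of what that rewind recovery looks like, assuming an
exported pool named tank (the pool name is a placeholder):

# zpool import -nF tank
  (dry run: reports how far back -F would rewind, without doing it)
# zpool import -F tank
  (rewinds to the last consistent txg, discarding the final few seconds
  of writes)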
On Sun, Sep 12, 2010 at 11:24:06AM -0700, Chris Murray wrote:
Absolutely spot on, George. The import with -N took seconds.
Working on the assumption that esx_prod is the one with the problem, I bumped
that to the bottom of the list. Each mount was done in a second:
# zfs mount zp
# zfs
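For anyone trying the same thing, the general shape is something like this
(dataset names other than esx_prod are placeholders):

# zpool import -N zp
  (import the pool without mounting any filesystems)
# zfs mount zp
# zfs mount zp/some_other_fs
  (mount the healthy datasets one at a time)
# zfs mount zp/esx_prod
  (leave the suspect dataset for last)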
I was thinking to delete all zfs snapshots before doing a zfs send/receive to
another new zpool. Then everything would be defragmented, I thought.
(I assume snapshots work this way: I snapshot once and do some changes, say
delete file A and edit file B. When I delete the snapshot, the file A is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
I was thinking to delete all zfs snapshots before doing a zfs send/receive to
another new zpool. Then everything would be defragmented, I thought.
You don't need to delete snaps before
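A recursive replication stream carries the snapshots along with it anyway; a
sketch, assuming a source pool tank and a target newpool (both names are
placeholders):

# zfs snapshot -r tank@move
# zfs send -R tank@move | zfs receive -F newpool
  (-R sends the dataset tree with its snapshots and properties)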
On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-user,
single-tasking world (PeeCees). In that world, only one thread writes files
from one application at one time.
From: Richard Elling [mailto:rich...@nexenta.com]
Regardless of multithreading or multiprocessing, it's absolutely possible to
have contiguous files, and/or file fragmentation. That's not a
characteristic which depends on the threading model.
Possible, yes. Probable, no. Consider
I have a flash archive that is stored in a ZFS snapshot stream. Is there a way
to mount this image so I can read files from it?
On 09/13/10 09:40 AM, Buck Huffman wrote:
I have a flash archive that is stored in a ZFS snapshot stream. Is there a way
to mount this image so I can read files from it?
No, but you can use the flar split command to split the flash archive
into its constituent parts, one of which will be the archive itself.
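Something like this, as a sketch (the path is a placeholder, and the section
file names come from flar(1M) as I recall them, so verify on your system):

# mkdir /var/tmp/flar && cd /var/tmp/flar
# flar split /path/to/image.flar
  (writes one file per section into the current directory)
# cpio -idm < archive
  (unpack the archive section, assuming a cpio-method flar)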
Charles,

Just like UNIX, there are several ways to drill down on the problem. I
would probably start with a live crash dump (savecore -L) when you see
the problem. Another method would be to grab multiple stats commands
during the problem to
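Something along these lines, as a sketch (intervals and output paths are
arbitrary):

# savecore -L
  (live crash dump of the running system, no panic required)
# vmstat 5 | tee /var/tmp/vmstat.out &
# iostat -xnz 5 | tee /var/tmp/iostat.out &
# echo ::arc | mdb -k > /var/tmp/arc.out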
On Mon, September 13, 2010 07:14, Edward Ned Harvey wrote:
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-user,
single-tasking world (PeeCees). In that world, only one thread writes files
from one application at one
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box and the devices are now sdc, sdd, sde,
and sdf. The pool is of course very unhappy that the mirrors are no longer
matched up and one device is
Try exporting and re-importing the zpool, for example:
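A sketch, assuming the pool is named tank; on Linux, importing by ID keeps
future renames from biting you again:

# zpool export tank
# zpool import -d /dev/disk/by-id tank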
On 9/13/2010 1:26 PM, Brian wrote:
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf.
The pool is of course very
To summarize,
A) resilver does not defrag.
B) zfs send/receive to a new zpool means it will be defragged.
Correctly understood?
That seems to have done the trick. I was worried because in the past I've had
problems importing faulted file systems.
At first we blamed de-dupe, but we've disabled that. Next we suspected
the SSD log disks, but we've seen the problem with those removed, as
well.
Did you have dedup enabled and then disabled it? If so, data can (or will) still be
deduplicated on the drives. Currently the only way of de-deduping
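In the meantime, one way to see whether deduplicated blocks are still on
disk (pool name is a placeholder):

# zdb -DD tank
  (prints DDT statistics and a histogram of reference counts)
# zpool list tank
  (the DEDUP column shows the pool-wide dedup ratio)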
On Sep 13, 2010, at 10:54 AM, Orvar Korvar wrote:
To summarize,
A) resilver does not defrag.
B) zfs send/receive to a new zpool means it will be defragged
Define fragmentation?
If you follow the Wikipedia definition of defragmentation, then the
answer is no, zfs send/receive does not
This is snv_128 x86.
::arc
hits                   = 39811943
misses                 = 630634
demand_data_hits       = 29398113
demand_data_misses     = 490754
demand_metadata_hits   = 10413660
demand_metadata_misses = 133461
prefetch_data_hits     = 0
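For reference, those counters come from the kernel's ARC kstats; two ways to
dump them on a live system:

# echo ::arc | mdb -k
# kstat -n arcstats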
Makes sense. My understanding is not good enough to confidently make my own
decisions, and I'm learning as I go. The BPG says:
- The recommended number of disks per group is between 3 and 9. If you
have more disks, use multiple groups
If there was a reason leading up to this statement,
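As a sketch of what that BPG wording means in practice, a 12-disk pool built
as two 6-disk raidz2 groups rather than one 12-wide group (device names are
placeholders):

# zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz2 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0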
On 09/07/10 23:26, Piotr Jasiukajtis wrote:
Hi,
After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system.
Here is what I get.
Any idea why it's not able to import rpool?
I saw this issue also on older builds on different machines.
This sounds (based on the presence
Mattias, what you say makes a lot of sense. When I saw *Both of the above
situations resilver in equal time*, I was like no way! But like you said,
assuming no bus bottlenecks.
This is my exact breakdown (cheap disks on a cheap bus :P):
PCI-E 8X 4-port ESata Raid Controller.
4 x ESata to 5Sata
Ah, I see. But I think your math is a bit out:
62.5e6 IOs @ 100 IOPS
= 625,000 seconds
≈ 10,417 minutes
≈ 173.6 hours
≈ 7 days 6 hours
So 7 days 6 hours. That's long, but I can live with it. This isn't for an
enterprise environment. While the length of time is a worry in terms of
increasing the chance another drive
Run away! Run fast, little NetApp. Don't anger the sleeping giant - Oracle!
David Magda wrote:
Seems that things have been cleared up:
NetApp (NASDAQ: NTAP) today announced that both parties have agreed to
dismiss their pending patent litigation, which began in 2007 between Sun
Microsystems
Hi,
*The PCIe 8x port gives me 4 GB/s, which is 32 Gbps. No problem there. Each
eSATA port guarantees 3 Gbps, therefore a 12 Gbps limit on the controller.*
I was simply listing the bandwidth available at the different stages of the
data cycle. The PCIe port gives me 32 Gbps. The SATA card gives me a
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.
A couple questions. First I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday.
On Fri, Sep 10, 2010 at 08:36:13AM -0700, Steeve Roy wrote:
I am currently preparing a big SAN deployment using ZFS. As I will start with
60 TB of data with a growth rate of 25% per year, I need some online defrag,
data redistribution across drives as the storage pool increases, etc...
When can
I am currently preparing a big SAN deployment using ZFS. As I will start with
60 TB of data with a growth rate of 25% per year, I need some online defrag,
data redistribution across drives as the storage pool increases, etc...
When can we expect to get the bp rewrite feature into ZFS?
Thanks!
Or you can go into udev's persistent rules and set it up such that the
drives always get the correct names. I'd guess you'll probably find them
somewhere under /etc/udev/rules.d or something similar. It will likely
save you trouble in the long run, as they likely are getting shuffled
with
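A sketch of such a rule, with the file name, serial number, and symlink name
all as placeholders; check the real ID_SERIAL values first with something
like udevadm info --query=all --name=/dev/sdd:

# /etc/udev/rules.d/99-zfs-disks.rules (hypothetical file)
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="WD-XXXXXXXX", SYMLINK+="zfs-mirror0"

Then export the pool and re-import it with -d pointing at a directory of
stable names.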
At first we blamed de-dupe, but we've disabled that. Next we
suspected
the SSD log disks, but we've seen the problem with those removed, as
well.
Did you have dedup enabled and then disabled it? If so, data can (or
will) still be deduplicated on the drives. Currently the only way of de-
On Tue, September 7, 2010 15:58, Craig Stevenson wrote:
3. Should I consider using dedup if my server has only 8 GB of RAM? Or
will that not be enough to hold the DDT? In which case, should I add
L2ARC / ZIL, or am I better off just skipping dedup on a home file server?
I would not
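As a rough sanity check, using the commonly cited figure of about 320 bytes
of ARC per DDT entry, and assuming an average 64K block size (which is a
guess; your data may differ a lot):

4 TB unique data / 64 KB per block = 62.5e6 DDT entries
62.5e6 entries * ~320 bytes        = ~20 GB of RAM

So 8 GB of RAM alone is unlikely to hold the DDT for a multi-TB pool, and on
a home file server skipping dedup (or adding L2ARC to hold the table) is the
safer call.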
Hi! I've been scouring the forums and web for admins/users who have deployed ZFS
with compression enabled on Oracle backed by storage array LUNs.
Any problems with CPU/memory overhead?
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brad
Hi! I've been scouring the forums and web for admins/users who have deployed
ZFS with compression enabled on Oracle backed by storage array LUNs.
Any problems with CPU/memory overhead?
I
Can anyone elaborate on the zpool split command? I have not seen any
examples of it in use and I am very curious about it. Say I have 12 disks in a
pool named tank: 6 in a RAIDZ2 + another 6 in a RAIDZ2. All is well, and
I'm not even close to maximum capacity in the pool. Say I want to swap out
6 of
On Sep 13, 2010, at 4:40 PM, Chris Mosetick wrote:
Can anyone elaborate on the zpool split command? I have not seen any
examples of it in use and I am very curious about it. Say I have 12 disks in a pool
named tank: 6 in a RAIDZ2 + another 6 in a RAIDZ2. All is well, and I'm not
even close to
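One caveat worth stating up front: zpool split only works on pools whose
top-level vdevs are all mirrors; it detaches one side of each mirror into a
new pool, so it cannot carve six disks out of the raidz2 layout described
above. For a mirrored pool the sketch looks like this (names are
placeholders):

# zpool split tank tank2
  (detach one half of each mirror into a new pool, tank2)
# zpool import tank2
  (the new pool starts out exported)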
So are there now any methods to achieve the scenario I described to shrink a
pool's size with existing ZFS tools? I don't see a definitive way listed on
the old shrinking thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=8125
Thank you,
-Chris
On Mon, Sep 13, 2010 at 4:55 PM,
On Sep 13, 2010, at 5:51 PM, Chris Mosetick wrote:
So are there now any methods to achieve the scenario I described to shrink a
pool's size with existing ZFS tools? I don't see a definitive way listed on
the old shrinking thread.
Today, there is no way to accomplish what you want without copying the data
to a new, smaller pool and destroying the original.
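The copy-and-destroy route looks roughly like this, assuming a new, smaller
pool named smallpool with placeholder devices:

# zpool create smallpool mirror c2t0d0 c2t1d0
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F smallpool
# zpool destroy tank
  (only after verifying the copy)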
I don't know what happened. I was in the process of copying files onto my new
file server when the copy process from the other machine failed. I turned on
the monitor for the fileserver and found that it had rebooted by itself at some
point (machine fault maybe?) and when I remounted the
Oh and yes, raidz1.
On Sep 12, 2010, at 7:49 PM, Michael Eskowitz wrote:
I recently lost all of the data on my single-parity raidz array. Each of
the drives was encrypted, with the zfs array built within the encrypted
volumes.
I am not exactly sure what happened.
Murphy strikes again!
The files were
Richard Elling wrote:
On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-user,
single-tasking world (PeeCees). In that world, only one thread writes files
from one
On Sep 13, 2010, at 9:41 PM, Haudy Kazemi wrote:
Richard Elling wrote:
On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-user,
single-tasking world (PeeCees). In that