Hi all,
We have some problems with ZFS.
Our configuration: X4100 + dual 3510 JBOD, 2 zpool, Solaris 10U7
# zfs create dr/netapp11bkpVOL34
# cd /dr/netapp11bkpVOL34
# rsync -av --numeric-ids --delete /netapp11/vol/vol34/* .
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34
On Thu, Sep 10, 2009 at 3:27 PM, Ginodandr...@gmail.com wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
#
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34 1.34T 158G 1.34T
On 10 Sep 2009, at 09:38, Fajar A. Nugraha wrote:
On Thu, Sep 10, 2009 at 3:27 PM, Ginodandr...@gmail.com wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
#
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
#
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34  1.34T  158G  1.34T  /dr/netapp11bkpVOL34
Space has not been freed up!
Are there
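When `rm` does not return space on ZFS, the usual first suspects are snapshots or clones still referencing the deleted blocks. A hedged sketch of those checks (dataset name taken from the thread; these are standard zfs(1M) subcommands, shown here as a diagnostic starting point, not the thread's eventual answer):

```shell
# Do any snapshots of the dataset still reference the deleted blocks?
zfs list -t snapshot -r dr/netapp11bkpVOL34

# Do any clones in the pool hang off a snapshot of it?
zfs get -r origin dr

# Compare accounted space against what the filesystem itself references:
zfs get used,referenced dr/netapp11bkpVOL34
```

If no snapshots or clones exist (as in this thread, where the dataset was a plain rsync target), the missing space points at a bug rather than normal snapshot accounting.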
Hi,
I have a question: let's say I have a zvol named vol1 which is a clone of a
snapshot of another zvol (its origin property is tank/my...@mysnap).
If I send this zvol to a different zpool through a zfs send, does it send the
origin too? That is, does an automatic promotion happen, or do I end
On Thu, Sep 10, 2009 at 9:09 AM, Chris Kirby christopher.ki...@sun.com wrote:
On Sep 10, 2009, at 7:07 AM, Brandon Mercer wrote:
On Thu, Sep 10, 2009 at 5:11 AM, casper@sun.com wrote:
Hello all, I'm running 2009.06 and I've got a random kernel panic
that keeps killing my system under
On Thu, Sep 10, 2009 at 8:03 PM, Maurilio Longo
maurilio.lo...@libero.it wrote:
Hi,
I have a question, let's say I have a zvol named vol1 which is a clone of a
snapshot of another zvol (its origin property is tank/my...@mysnap).
If I send this zvol to a different zpool through a zfs send
Neither.
It'll send all necessary data (without having to promote anything) so
that the receiving zvol has a working vol1, and it's not a clone.
Fajar,
thanks for clarifying, this is what I was calling 'promotion'.
It is like a promotion happening on the receiving side.
Maurilio.
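The exchange above can be sketched end to end. This is a hedged illustration of the scenario discussed, not the poster's actual commands; all names and the size are made up:

```shell
zfs create -V 10G tank/base            # the origin zvol
zfs snapshot tank/base@mysnap
zfs clone tank/base@mysnap tank/vol1   # vol1's origin = tank/base@mysnap

# A full (non-incremental) send carries all data vol1 needs:
zfs snapshot tank/vol1@move
zfs send tank/vol1@move | zfs receive otherpool/vol1

# On the receiving pool the copy is self-contained: origin reads "-",
# so no promotion is needed on either side.
zfs get origin otherpool/vol1
```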
On Thu, September 10, 2009 04:27, Gino wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34  1.34T  158G  1.34T  /dr/netapp11bkpVOL34
Space has not been
Actually there is a great chance that you are hitting this bug:
6792701 Removing large holey file does not free space
To check, run:
# zdb - name of your pool/name of your fs
If you find object(s) without a pathname, you are in ...
It should look like this:
...
Object lvl iblk dblk
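A sketch of the check described above. The exact zdb flags were truncated in the original mail; `-dddd` is my assumption (zdb's `-d` option dumps dataset objects, and repeating it increases verbosity to per-object detail):

```shell
# Dump object details for the dataset; leaked objects from bug 6792701
# show up as plain-file objects with an empty/missing path column.
zdb -dddd dr/netapp11bkpVOL34 | less

# The header of the object listing looks like:
#   Object  lvl  iblk  dblk  ...
```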
Francois, you're right!
We just found that it's happening only with files >100GB and on S10U7.
We have no problem with SNV_101a.
gino
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote:
On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote:
Some hours later, here I am again:
scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
Any suggestions?
Let it run for another day.
A pool on a build server I manage takes
On Thu, Sep 10, 2009 at 11:11, Jonathan Edwards
jonathan.edwa...@sun.com wrote:
out of curiosity - do you have a lot of small files in the filesystem?
Most of the space in the filesystem is taken by a few large files, but
most of the files in the filesystem are small. For example, I have my
On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld sommerf...@sun.com wrote:
Any suggestions?
Let it run for another day.
I'll let it keep running as long as it wants this time.
I suspect the combination of frequent time-based snapshots and a pretty
active set of users causes the progress
Eugen Leitl wrote:
Inspired by
http://www.webhostingtalk.com/showpost.php?p=6334764postcount=14
I'm considering taking the Supermicro chassis like
http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm
populating it with 1 TByte WD Caviar Black WD1001FALS with TLER
set to 7
On Sep 10, 2009, at 7:07 AM, Brandon Mercer wrote:
On Thu, Sep 10, 2009 at 5:11 AM, casper@sun.com wrote:
Hello all, I'm running 2009.06 and I've got a random kernel panic
that keeps killing my system under high IO loads. It happens almost
every time I start loading up the writes on at
Why do you need 3x LSI SAS3081E-R? The backplane has an LSI SAS x36 expander, so
you only need 1x 3081E. If you want multipathing, you need the E2 model.
Second, I'd say use Seagate ES.2 1TB SAS disks, especially if you want
multipathing. I believe the E2 only supports SAS disks.
I have a Supermicro 936E1
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state at
High priority.
CR 6859997 has recently been fixed in Nevada. This fix will also be in
Solaris 10 Update 9.
This fix speeds
Hi.
I'm willing to maintain a project hosted on java.net
(https://zfs.dev.java.net/) that aims to provide a Java wrapper to
libzfs. I've already wrapped, although not committed yet, the last
libzfs.h I found on OpenSolaris.org (v. 10342:108f0058f837) and the
first problem I want to address is
I've hit google and it looks like this is still an issue in b122. Does this
look like it will be fixed any time soon? If so, what build will it be fixed
in and is there an ETA for the build to be released?
Thanks.
-brian
--
Coding in C is like sending a 3 year old to do groceries. You gotta
We finally resolved this issue by changing the LSI driver. For details, please refer
to http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
Alex Li wrote:
We finally resolved this issue by change LSI driver. For details, please
refer to here
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Anyone from Sun have any knowledge of when the open source mpt driver will be
less broken? Things improved greatly for
Enrico,
Could you compare and contrast your effort with the existing libzfs_jni?
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
Perhaps it would be worthwhile to try and un-privatize libzfs_jni?
-- richard
On Sep 10, 2009, at 12:20 PM, Enrico Maria
Hello Brian,
On Sep 10, 2009, at 9:21 PM, Brian Hechinger wrote:
I've hit google and it looks like this is still an issue in b122. Does this
look like it will be fixed any time soon? If so, what build will it be fixed
in and is there an ETA for the build to be released?
Adam has
Hi Brian,
I'm tracking this issue and expected resolution, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is
On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling
richard.ell...@gmail.com wrote:
Enrico,
Could you compare and contrast your effort with the existing libzfs_jni?
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/
Where's the source for the java code that uses
Ah, fantastic. Henrik also pointed out that b124 is about a month out?
I wonder if b119 is worth moving to in the meantime?
-brian
On Thu, Sep 10, 2009 at 01:59:23PM -0600, cindy.swearin...@sun.com wrote:
Hi Brian,
I'm tracking this issue and expected resolution, here:
On Thu, 10 Sep 2009, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched state at High
priority.
CR 6859997 has recently been fixed in Nevada. This fix will also be in
Thanks for pointing it out, Richard. I missed libzfs_jni. I'll have a
look at it and see where we're overlapping.
As far as I can see at a quick glance, libzfs_jni includes functionality
we'd like to build upon in the libzfs wrapper (that's why I was studying
the zfs and zpool commands).
Can anyone answer whether we will get ZFS de-duplication before SXCE EOL? If
possible, also answer the same about encryption?
Thanks
On Thu, 10 Sep 2009, Rich Morris wrote:
Excellent. What level of read improvement are you seeing? Is the prefetch
rate improved, or does the fix simply avoid losing the prefetch?
This fix avoids using a prefetch stream when it is no longer valid. BTW, ZFS
prefetch appears to work well
On Sep 10, 2009, at 1:03 PM, Peter Tribble wrote:
On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling
richard.ell...@gmail.com wrote:
Enrico,
Could you compare and contrast your effort with the existing
libzfs_jni?