Hi Edward,
Well, that was exactly my point when I raised this question. If zfs send is
able to identify corrupted files while it transfers a snapshot, why shouldn't
scrub be able to do the same?
zfs send quit with an I/O error, and zpool status -v showed me the file that
indeed had problems. Si
On 7/10/10 03:46 PM, Ramesh Babu wrote:
I am trying to create a zpool using a single Veritas volume. The host goes
down as soon as I issue the zpool create command. It looks like the command is
crashing and bringing the host down. Please let me know what the issue might
be. Below is the command used, tex
The NFS client that we're using always uses O_SYNC, which is why it was
critical for us to use the DDRdrive X1 as the ZIL. I wasn't clear about the
entire system we're using; my apologies. It is:
OpenSolaris SNV_134
Motherboard: SuperMicro X8DAH
RAM: 72GB
CPU: Dual Intel 5503 @ 2.0GHz
ZIL: DDRdrive
I am trying to create a zpool using a single Veritas volume. The host goes
down as soon as I issue the zpool create command. It looks like the command is
crashing and bringing the host down. Please let me know what the issue might
be. Below is the command used; textvol is the Veritas volume and testpool is
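For illustration, the command likely had this shape; the original command was
cut off, and the Veritas disk group name "testdg" is an assumption:
# Sketch only; "testdg" is a hypothetical Veritas disk group name.
$ zpool create testpool /dev/vx/dsk/testdg/textvol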
On Wed, Oct 6 at 22:04, Edward Ned Harvey wrote:
* Because ZFS automatically buffers writes in RAM in order to
aggregate them as previously mentioned, the hardware WB cache is not
beneficial. There is one exception: if you are doing sync writes
to spindle disks, and you don't have a dedicated log d
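For anyone following along, adding a dedicated log device is a single command;
a minimal sketch, with hypothetical pool and device names:
# Sketch: add a dedicated log (slog) device to an existing pool.
# "tank" and "c2t0d0" are hypothetical names.
$ zpool add tank log c2t0d0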
Hi Edward,
these are interesting points. I considered a couple of them when I
started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I conducted a
couple of tests where I configured my RAIDs as JBODs and mapped each drive out
as a separate LUN and
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Now, scrub would reveal corrupted blocks on the devices, but is there a
> way to identify damaged files as well?
I saw a lot of people offering the same knee-jerk reaction t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Ian,
>
> yes, although these vdevs are FC raids themselves, so the risk is… uhm…
> calculated.
Whenever possible, JBOD the storage and let ZFS manage the
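As a minimal sketch of that approach (all names hypothetical), the redundancy
lives in the pool rather than in the array:
# Sketch: ZFS-managed redundancy over JBOD disks; names are hypothetical.
$ zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0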
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tony MacDoodle
>
> Is it possible to add 2 disks to increase the size of the pool below?
>
> NAME STATE READ WRITE CKSUM
> testpool ONLINE 0 0 0
> mirror-0 ONLINE 0 0 0
> c1t2d0 ONLINE 0 0 0
On Wed, 6 Oct 2010, Marty Scholes wrote:
If you think about it, this is far more sane than flushing to disk
every time the write() system call is used.
Yes, it dramatically diminishes the number of copy-on-write writes and
improves the pool layout efficiency. It also saves energy.
Bob
--
B
I think you are seeing ZFS store up the writes, coalesce them, then flush to
disk every 30 seconds.
Unless the writes are synchronous, the ZIL won't be used, but the writes will
be cached instead, then flushed.
If you think about it, this is far more sane than flushing to disk every time
the w
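On builds of that era, the 30-second interval was governed by the
zfs_txg_timeout tunable; a sketch of inspecting the live value (assumes kernel
mdb access):
# Sketch: read the current txg timeout, in seconds, from the running kernel.
$ echo zfs_txg_timeout/D | mdb -k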
On Wed, Oct 06, 2010 at 05:19:25PM -0400, Miles Nordin wrote:
> > "nw" == Nicolas Williams writes:
>
> nw> *You* stated that your proposal wouldn't allow Windows users
> nw> full control over file permissions.
>
> me: I have a proposal
>
> you: op! OP op, wait! DOES YOUR PROPOSAL
> "nw" == Nicolas Williams writes:
nw> *You* stated that your proposal wouldn't allow Windows users
nw> full control over file permissions.
me: I have a proposal
you: op! OP op, wait! DOES YOUR PROPOSAL blah blah WINDOWS blah blah
COMPLETELY AND EXACTLY LIKE THE CURRENT ONE.
> "dd" == David Dyer-Bennet writes:
dd> Richard Elling said ZFS handles the 4k real 512byte fake
dd> drives okay now in default setups
There are two steps to handling it well. One is to align the start of
partitions to 4kB, and apparently on Solaris (thanks to all the
cumbersome par
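A quick way to check that first step (device name hypothetical): prtvtoc
counts in 512-byte sectors, so a slice whose first sector divides evenly by 8
starts on a 4kB boundary.
# Sketch: print the VTOC and check that "First Sector" values divide by 8.
$ prtvtoc /dev/rdsk/c1t0d0s2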
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks
connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2
spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it,
the writes are always very bursty like this:
> "ag" == Andrew Gabriel writes:
ag> Having now read a number of forums about these, there's a
ag> strong feeling WD screwed up by not providing a switch to
ag> disable pseudo 512b access so you can use the 4k native.
this reporting lie is no different from SSDs, which have 2 - 8
> Hi all
>
> I just discovered that WD Black drives are rumored not to
> allow TLER to be enabled.
Yep: http://opensolaris.org/jive/message.jspa?messageID=501159#501159
> Enterprise drives will cost
> about 60% more, and on a large install, that means a
> lot of money...
True, sometimes more than twice
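On drives that honor it, the SCT ERC timeout behind TLER can be queried and
set with smartmontools; whether a given WD Black accepts this is exactly the
open question, and the device path below is hypothetical:
# Sketch: query, then set, the SCT error-recovery timeouts to 7.0 seconds.
$ smartctl -l scterc /dev/rdsk/c1t0d0
$ smartctl -l scterc,70,70 /dev/rdsk/c1t0d0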
On Wed, Oct 06, 2010 at 04:38:02PM -0400, Miles Nordin wrote:
> > "nw" == Nicolas Williams writes:
>
> nw> The current system fails closed
>
> wrong.
>
> $ touch t0
> $ chmod 444 t0
> $ chmod A0+user:$(id -nu):write_data:allow t0
> $ ls -l t0
> -r--r--r--+ 1 carton carton 0
> "nw" == Nicolas Williams writes:
nw> The current system fails closed
wrong.
$ touch t0
$ chmod 444 t0
$ chmod A0+user:$(id -nu):write_data:allow t0
$ ls -l t0
-r--r--r--+ 1 carton carton 0 Oct 6 20:22 t0
now go to an NFSv3 client:
$ ls -l t0
-r--r--r-- 1 carton 405 0 201
Ian,
yes, although these vdevs are FC raids themselves, so the risk is… uhm…
calculated.
Unfortunately, one of the devices seems to have some issues, as stated in my
previous post.
I will, nevertheless, add redundancy to my pool ASAP.
Thanks,
budy
Hi Cindy,
thanks for bringing that to my attention. I checked fmdump and found a lot of
these entries:
Oct 06 2010 17:52:12.862812483 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
class = ereport.io.scsi.cmd.disk.tran
ena = 0x514dc67d57e1
detector = (embedded nvlis
Seems like it's really the case that scrub doesn't account for traffic that
goes onto the zpool while it's scrubbing away.
After some more time, the scrub finished and everything looks good so far.
Thanks,
budy
On Wed, October 6, 2010 14:14, Tony MacDoodle wrote:
> Is it possible to add 2 disks to increase the size of the pool below?
>
> NAME STATE READ WRITE CKSUM
> testpool ONLINE 0 0 0
> mirror-0 ONLINE 0 0 0
> c1t2d0 ONLINE 0 0 0
> c1t3d0 ONLINE 0 0 0
> mirror-1 ONLINE 0 0 0
> c1t4d0 ONLINE 0 0 0
>
On 10/ 6/10 09:52 PM, Stephan Budach wrote:
Hi,
I recently discovered some - or at least one - corrupted file on one of my ZFS
datasets, which caused an I/O error when trying to send a ZFS snapshot to
another host:
zpool status -v obelixData
pool: obelixData
state: ONLINE
status: One or m
On Wed, Oct 6, 2010 at 12:14 PM, Tony MacDoodle wrote:
> Is it possible to add 2 disks to increase the size of the pool below?
Yes. zpool add testpool mirror devname1 devname2
That will add a third mirror vdev to the pool.
> NAME STATE READ WRITE CKSUM
> testpool ONLINE 0 0 0
> mirror-0 ONLI
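Spelled out with hypothetical device names, the operation and a sanity check
would be:
# Sketch: attach a third mirror pair (hypothetical devices), then verify.
$ zpool add testpool mirror c1t6d0 c1t7d0
$ zpool status testpool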
Budy,
Your previous zpool status output shows a non-redundant pool with data
corruption.
You should use the fmdump -eV command to find out the underlying cause
of this corruption.
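For example, to page through the error reports:
# Sketch: dump the FMA error log verbosely and page through it.
$ fmdump -eV | less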
You can review the hardware-level monitoring tools, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Tro
Is it possible to add 2 disks to increase the size of the pool below?
NAME          STATE   READ WRITE CKSUM
testpool      ONLINE     0     0     0
  mirror-0    ONLINE     0     0     0
    c1t2d0    ONLINE     0     0     0
    c1t3d0    ONLINE     0     0     0
  mirror-1    ONLINE     0     0     0
    c1t4d0    ONLINE     0     0     0
    c1t5d0    ONLINE     0     0     0
Well, I think that answers my question then: after a successful scrub, zpool
status -v should list all damaged files on the entire zpool.
I only asked because I read a thread in this forum where one guy had a problem
with different files, even after a successful scrub.
Thanks,
budy
- Original Message -
> On Tue, October 5, 2010 17:20, Richard Elling wrote:
> > On Oct 5, 2010, at 2:06 PM, Michael DeMan wrote:
> >>
> >> On Oct 5, 2010, at 1:47 PM, Roy Sigurd Karlsbakk wrote:
>
> >>> Well, here it's about 60% up and for 150 drives, that makes a wee
> >>> difference...
>
> > TLER (the ability of the drive to timeout a command)
>
> I went and got what detailed documentation I could on a couple of the
> Seagate drives last night, and I couldn't find anything on how they
> behave in that sort of error case. (I believe TLER is a WD-specific
> term, but I didn't
http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking
I'm curious why Nexenta did not perform as well as OpenSolaris. Both OS
versions seem to be the same.
Hi,
I came across this exact same problem when using Intel X25-E Extreme 32GB SSD
disks as ZIL and L2ARC devices in a T5220 server. Since I didn't see a
definitive solution here, I opened a support case with Oracle. They told me to
upgrade the firmware on my SSD disks and LSI Expander, and th
Budy,
> No - not a trick question, but maybe I didn't make myself clear.
> Is there a way to discover such bad files other than trying to actually read
> from them one by one, say using cp or by sending a snapshot elsewhere?
As noted by your original email, ZFS reports on any corruption using t
Yes - that may well be. There was data going onto the device while the scrub
was running; especially large zfs receives had been going on.
It'd be odd if that was the case, though.
Cheers,
budy
Scrub?
On Oct 6, 2010, at 6:48 AM, Stephan Budach wrote:
> No - not a trick question, but maybe I didn't make myself clear.
> Is there a way to discover such bad files other than trying to actually read
> from them one by one, say using cp or by sending a snapshot elsewhere?
>
> I am well awar
On Tue, October 5, 2010 16:47, casper@sun.com wrote:
>
>
>>My immediate reaction to this is "time to avoid WD drives for a while";
>>until things shake out and we know what's what reliably.
>>
>>But, um, what do we know about say the Seagate Barracuda 7200.12 ($70),
>>the SAMSUNG Spinpoint F3
Good idea. It provides options, but it would be nice to be able to set a
low-water mark on what can be taken away from the ARC metadata cache
without having to have something like an SSD.
Dave
On 10/01/10 14:02, Freddie Cash wrote:
On Fri, Oct 1, 2010 at 11:46 AM, David Blasingame Oracle
wrote
On Tue, October 5, 2010 17:20, Richard Elling wrote:
> On Oct 5, 2010, at 2:06 PM, Michael DeMan wrote:
>>
>> On Oct 5, 2010, at 1:47 PM, Roy Sigurd Karlsbakk wrote:
>>> Well, here it's about 60% up and for 150 drives, that makes a wee
>>> difference...
>> Understood on 1.6 times cost, especial
Have you had a lot of activity since the scrub started?
I have noticed what appears to be extra I/O at the end of a scrub when activity
took place during the scrub. It's as if the scrub estimator does not take the
extra activity into account.
Hi,
my machine is an HP ProLiant ML350 G5 with 2 quad-core Xeons, 32GB RAM and an
HP SmartArray E200i RAID controller with 3x160GB and 3x500GB SATA discs
connected to it. Two of the 160GB discs form the mirrored root pool (rpool),
the third serves as a temporary data pool called "tank", and the th
No - not a trick question, but maybe I didn't make myself clear.
Is there a way to discover such bad files other than trying to actually read
from them one by one, say using cp or by sending a snapshot elsewhere?
I am well aware that the file shown in zpool status -v is damaged and I have
alre
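If reading everything really is the only option, a one-line sketch (the mount
point is hypothetical) would be:
# Sketch: force a read of every file so checksum errors surface in
# zpool status -v; /obelixData is a hypothetical mount point.
$ find /obelixData -type f -exec cp {} /dev/null \;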
On 06 October, 2010 - Stephan Budach sent me these 2,1K bytes:
> Hi,
>
> I recently discovered some - or at least one - corrupted file on one of my
> ZFS datasets, which caused an I/O error when trying to send a ZFS snapshot to
> another host:
>
>
> zpool status -v obelixData
> pool: obelixDa
Hi all,
I have issued a scrub on a pool that consists of two independent FC RAIDs. The
scrub has been running for approx. 25 hrs and then showed 100%, but there's
still incredible traffic going on on one of the FC RAIDs, plus zpool status -v
reports that the scrub is still running:
zpool stat
www.solarisinternals.com has always been a community. It never was hosted by
Sun, and it's not hosted by Oracle. True, many of the contributors were Sun
employees, but not so many remain at Oracle. If it's out of date, I suspect
that's because the original contributors are too busy doing other f
casper@sun.com wrote:
On Tue, Oct 5, 2010 at 11:49 PM, wrote:
I'm not sure that that is correct; the drive works on naive clients but I
believe it can reveal its true colors.
The drive reports 512 byte sectors to all hosts. AFAIK there's no way
to make it report 4k sectors.
Hi,
I recently discovered some - or at least one - corrupted file on one of my ZFS
datasets, which caused an I/O error when trying to send a ZFS snapshot to
another host:
zpool status -v obelixData
pool: obelixData
state: ONLINE
status: One or more devices has experienced an error resulting i
On 05/10/2010 20:14, Miles Nordin wrote:
I'm glad it wasn't my project, though. If I were in Darren's place
I'd have signed on to work for an open-source company, spent seven
years of my life working on something, delaying it and pushing hard to
make it a generation beyond other filesystem crypt
>On Tue, Oct 5, 2010 at 11:49 PM, wrote:
>> I'm not sure that that is correct; the drive works on naive clients but I
>> believe it can reveal its true colors.
>
>The drive reports 512 byte sectors to all hosts. AFAIK there's no way
>to make it report 4k sectors.
Too bad because it makes it le
On Tue, Oct 5, 2010 at 11:49 PM, wrote:
> I'm not sure that that is correct; the drive works on naive clients but I
> believe it can reveal its true colors.
The drive reports 512 byte sectors to all hosts. AFAIK there's no way
to make it report 4k sectors.
-B
--
Brandon High : bh...@freaks.co
If you're spending upwards of $30,000 on a storage system, you probably
shouldn't skimp on the most important component. You might as well be
complaining that ECC RAM costs more. Don't be ridiculous. For one, this is a
disk backup system, not a fileserver, and TLER is far from as critical
Can you give us release numbers that confirm that this is 'automatic'? It is
my understanding that the last available public release of OpenSolaris does
not do this.
On Oct 5, 2010, at 8:52 PM, Richard Elling wrote:
> ZFS already aligns the beginning of data areas to 4KB offsets from the lab
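One thing anyone can check on their own build is the ashift recorded for a
pool, which reflects the alignment it was created with; a sketch:
# Sketch: ashift=12 means 4kB (2^12) allocations; ashift=9 means 512B.
$ zdb | grep ashift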