Brandon,
Thanks for replying to the message.
I believe that this is more related to the variable stripe size of RAIDZ
than the fdisk MBR. I say this because the disk works without any issues in
a mirror configuration or as standalone reaching 80 MB/s burst transfer
rates.
In RAIDZ, however, the
Hi there,
My zpool tank has been chugging along nicely, but after a failed attempt at
offlining a misbehaving drive I've got a weird situation.
pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the
...and let the resilver complete.
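As a sketch of what that amounts to, assuming the offlined device is c7t3d0 (the
device name is only illustrative):

  zpool online tank c7t3d0   # bring the offlined disk back into the pool
  zpool status tank          # watch the resilver until it reports completion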
-- richard
Hi !
pool: zfs_raid
state: ONLINE
scrub: resilver completed after 16h34m with 0 errors on Fri May 21 05:39:42
2010
config:
        NAME        STATE     READ WRITE CKSUM
        zfs_raid    ONLINE       0     0     0
raidz1
Now, I just have to do the same drive replacement for
the 2 other failing drives...
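As a sketch of that replacement step (the old/new device names below are
illustrative only):

  zpool replace zfs_raid c1t2d0 c1t6d0   # start resilvering onto the new disk
  zpool status zfs_raid                  # check resilver progress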
For information, current iostat results :
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
On the PCIe side, I noticed there's a new card coming from LSI that claims
150,000 4k random writes. Unfortunately this might end up being an OEM-only
card.
I also notice on the ddrdrive site that they now have an opensolaris driver and
are offering it in a beta program.
Hi,
I know that ZFS is aware of I/O errors, and can alert on or disable a crappy disk.
However, ZFS didn't notice these service time problems at all.
I think it would be a good idea to integrate service time triggers into ZFS!
What do you think?
Best regards !
Philippe
If you do not care about this NFS problem (or the others) then maybe
you can just disable the ZIL. It is a matter of working through step
1. Working through STEP 1 might be ``doesn't affect us. Disable
ZIL.'' Or it might be ``get slog with supercap''. STEP 1 will never
be ``plug in OCZ
Hi,
Actually, it seems to be a common problem with WD EARS drives (advanced format)!
Please, see this other OpenSolaris thread :
https://opensolaris.org/jive/thread.jspa?threadID=126637
It is worth investigating !
I quote :
Just replaced it back, and here is the iostat for the new EARS drive:
On May 20, 2010, at 7:17 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 21 May 2010, at 00.53, Ross Walker wrote:
On May 20, 2010, at 6:25 PM, Travis Tabbal tra...@tabbal.net wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was
AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile NAND
grid. Whether it respects or ignores the cache flush seems irrelevant.
There has been previous discussion about this:
http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/35702
I'm pretty sure that all
On Thu, May 20, 2010 19:44, Freddie Cash wrote:
And you can always patch OpenSSH with HPN, thus enabling the NONE
cipher,
which disables encryption for the data transfer (authentication is always
encrypted). And twiddle the internal buffers that OpenSSH uses to improve
transfer rates,
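For reference, a sketch of what that looks like with an HPN-patched OpenSSH on
both ends (the option names are HPN-specific, and the hosts/datasets are
illustrative):

  zfs send tank/fs@snap | \
      ssh -o NoneEnabled=yes -o NoneSwitch=yes otherhost 'zfs recv -F backup/fs'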
SNIP a whole lot of ZIL/SLOG discussion
Hi guys.
yep I know about the ZIL, and SSD Slogs.
While setting Nexenta up it offered to disable the ZIL entirely. For
now I left it on. In the end (hopefully for only specific filesystems -
once that feature is released) I'll end up disabling the ZIL
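For reference, on builds of that era the ZIL could only be disabled globally,
not per filesystem; a sketch (the tunable assumes the 2010-era OpenSolaris
kernel):

  # /etc/system -- persistent, takes effect after reboot, affects ALL pools
  set zfs:zil_disable = 1

  # or on a live system, until the next reboot
  echo zil_disable/W0t1 | mdb -kw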
On Fri, 21 May 2010, Miika Vesti wrote:
AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile NAND
grid. Whether it respects or ignores the cache flush seems irrelevant.
There has been previous discussion about this:
For the record, in case anyone else experiences this behaviour: I tried
various things which failed, and finally, as a last-ditch effort, upgraded my
FreeBSD, giving me zpool v14 rather than v13 - and now it's resilvering as it
should.
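A sketch of that upgrade step, in case anyone hits the same thing (the pool
name is illustrative):

  zpool upgrade -v      # list the pool versions this kernel supports
  zpool upgrade tank    # upgrade the pool to the newest supported version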
Michael
On Monday 17 May 2010 09:26:23 Michael Donaghy
On Fri, May 21, 2010 10:19, Bob Friesenhahn wrote:
On Fri, 21 May 2010, Miika Vesti wrote:
AFAIK OCZ Vertex 2 does not use volatile DRAM cache but non-volatile
NAND
grid. Whether it respects or ignores the cache flush seems irrelevant.
There has been previous discussion about this:
This is interesting. I thought all Vertex 2 SSDs were good choices for ZIL
but this does not seem to be the case.
According to http://www.legitreviews.com/article/1208/1/ Vertex 2 LE,
Vertex 2 Pro and Vertex 2 EX are SF-1500 based but Vertex 2 (without any
suffix) is SF-1200 based.
Here is
Andreas,
Does the pool tank actually have 6 disks (c7t0-c7t5), with c7t3d0 now
masking c7t5d0, or is it a 5-disk configuration with c7t5 repeated twice?
If it is the first case (c7t0-c7t5), then I would check how these
devices are connected before attempting to replace the c7t3d0 disk.
What does
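A sketch of the kind of check meant here, using standard Solaris tools (the
pool name is from the thread):

  format                  # list the disks the OS actually sees
  cfgadm -al              # show how the controller ports map to cXtYd0 names
  zpool status -v tank    # confirm which device names the pool is using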
On Fri, May 21, 2010 at 7:12 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, May 20, 2010 19:44, Freddie Cash wrote:
And you can always patch OpenSSH with HPN, thus enabling the NONE
cipher,
which disables encryption for the data transfer (authentication is always
encrypted). And twiddle
On Fri, May 21, 2010 at 10:59 AM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 7:12 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, May 20, 2010 19:44, Freddie Cash wrote:
And you can always patch OpenSSH with HPN, thus enabling the NONE
cipher,
which disables
On Fri, May 21, 2010 12:59, Brandon High wrote:
On Fri, May 21, 2010 at 7:12 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, May 20, 2010 19:44, Freddie Cash wrote:
And you can always patch OpenSSH with HPN, thus enabling the NONE
cipher,
which disables encryption for the data transfer
On Thu, May 20, 2010 at 2:23 PM, Miika Vesti miika.ve...@trivore.com wrote:
I'm pretty sure that all SandForce-based SSDs don't use DRAM as their
cache, but take a hunk of flash to use as scratch space instead. Which
means that they'll be OK for ZIL use.
I've read conflicting reports that the
dd == David Dyer-Bennet d...@dd-b.net writes:
dd> Just how DOES one know something for a certainty, anyway?
Science.
Do a test like Lutz did on the X25-M G2; see the list archives for 2010-01-10.
On Fri, May 21, 2010 at 11:28 AM, David Dyer-Bennet d...@dd-b.net wrote:
I thought I remembered a none cipher, but couldn't find it the other
year and decided I must have been wrong. I did use ssh-1, so maybe I
really WAS remembering after all.
It may have been in ssh2 as well, or at least
Now, if someone would make a Battery FOB that gives a broken SSD 60
seconds of power, then we could use consumer SSDs in servers
again with real value instead of CYA value.
You know- it would probably be sufficient to provide the SSD with _just_ a big
capacitor bank. If the host lost
I seem to be getting decent speed with arcfour (this was what i was using to
begin with)
Thanks for all the help... this honestly was just me being stupid... looking
back on yesterday, i can't even remember what i was doing wrong now... i was
REALLY tired when i asked this question.
On Fri, May
For years I have been running a zpool using a Fibre Channel array with
no problems. I would scrub every so often and dump huge amounts of
data (tens or hundreds of GB) around and it never had a problem
outside of one confirmed (by the array) disk failure.
I upgraded to sol10x86 05/09 last year
On Thu, May 20, 2010 at 8:46 PM, Don d...@blacksun.org wrote:
I'm kind of flabbergasted that no one has simply stuck a capacitor on a more
reasonable drive. I guess the market just isn't big enough- but I find that
hard to believe.
I just spoke with a co-worker about doing something about
I just spoke with a co-worker about doing something about it.
He says he can design a small in-line UPS that will deliver 20-30
seconds of 3.3V, 5V, and 12V to the SATA power connector for about $50
in parts. It would be even less if only one voltage was needed. That
should be enough for
On 05/22/10 12:31 PM, Don wrote:
I just spoke with a co-worker about doing something about it.
He says he can design a small in-line UPS that will deliver 20-30
seconds of 3.3V, 5V, and 12V to the SATA power connector for about $50
in parts. It would be even less if only one voltage was needed.
Hello Cindy,
Does the pool tank actually have 6 disks (c7t0-c7t5), with c7t3d0 now
masking c7t5d0, or is it a 5-disk configuration with c7t5 repeated twice?
There are 6 disks connected to the onboard Intel SATA controller (this is a
home NAS). There are another four that represent 2x
here ya go (sorry for the late reply)
wonsl...@wonslung-raidz2:~$ kstat -m cpu_info -c misc
module: cpu_info                        instance: 0
name:   cpu_info0                       class:    misc
        brand                           AMD Opteron(tm) Processor 6128
        cache_id
Something i've been meaning to ask
I'm transferring some data from my older server to my newer one. The older
server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives in raidz2 (3
vdevs, 2 with 7 drives, one with 6) connected to 3 AOC-SAT2-MV8 cards spread
as evenly across them as i
On Fri, May 21, 2010 at 5:31 PM, Don d...@blacksun.org wrote:
Oh I wasn't kidding when I said I was going to have to try this with my home
server. I actually do some circuit board design and this would be an amusing
project. All you probably need is 5v- I'll look into it.
The SATA power
On 05/22/10 12:54 PM, Thomas Burgess wrote:
Something i've been meaning to ask
I'm transferring some data from my older server to my newer one. the
older server has a socket 775 intel Q9550 8 gb ddr2 800 20 1TB drives
in raidz2 (3 vdevs, 2 with 7 drives one with 6) connected to 3
On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com wrote:
shouldn't the newer server have LESS load?
Please forgive my ubernoobness.
Depends on what it's doing!
Load average is really how many processes are waiting to run, so it's
not always a useful metric. If there are
On Fri, May 21, 2010 at 5:39 PM, Andreas Iannou
andreas_wants_the_w...@hotmail.com wrote:
Can you still export a pool when a disk is offline?
You can try booting from a live CD and doing 'zpool import -f', then
export it. That may sort things out.
You may also need to remove
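As a sketch of those live-CD steps (the pool name is from the thread):

  zpool import            # list importable pools and their device state
  zpool import -f tank    # force the import even though it was not exported
  zpool export tank       # cleanly export, then reboot into the installed system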
is 3 zfs recv's random?
On Fri, May 21, 2010 at 10:03 PM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com
wrote:
shouldn't the newer server have LESS load?
Please forgive my ubernoobness.
Depends on what it's doing!
Load average
yeah, i'm aware of the performance aspects. I use these servers mostly as
hd video servers for my house...they don't need to perform amazingly. I
originally went with the setup on the old server because of everything i had
read about performance with wide stripes...in all honesty it performed
On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess wonsl...@gmail.com wrote:
is 3 zfs recv's random?
It might be. What do a few reports of 'iostat -xcn 30' look like?
-B
--
Brandon High : bh...@freaks.com
I can't tell you for sure
For some reason the server lost power and it's taking forever to come back
up.
(i'm really not sure what happened)
anyways, this leads me to my next couple questions:
Is there any way to resume a zfs send/recv
Why is it taking so long for the server to come
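For context: send streams of that era could not be resumed mid-transfer, so the
usual workaround was to restart from the last snapshot that completed on the
receiving side. A sketch with illustrative dataset and host names:

  # on the receiver, find the last snapshot that arrived intact
  zfs list -t snapshot -r backup/fs

  # on the sender, restart with an incremental from that snapshot
  zfs send -i tank/fs@done tank/fs@latest | ssh receiver 'zfs recv -F backup/fs'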
On 05/22/10 04:44 PM, Thomas Burgess wrote:
I can't tell you for sure
For some reason the server lost power and it's taking forever to come
back up.
(i'm really not sure what happened)
anyways, this leads me to my next couple questions:
Is there any way to resume a zfs send/recv
yah, it seems that rsync is faster for what i need anyways... at least right
now...
On Sat, May 22, 2010 at 1:07 AM, Ian Collins i...@ianshome.com wrote:
On 05/22/10 04:44 PM, Thomas Burgess wrote:
I can't tell you for sure
For some reason the server lost power and it's taking forever
                            extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0  983.9   0   0 fd0
    3.0    5.5  152.2   67.8  0.0  0.0    5.0    1.2   1   1 c8t1d0
   41.3    3.4 1288.3   69.1  0.1  0.1    2.7
The SATA power connector supplies 3.3, 5 and 12v. A complete
solution will have all three. Most drives use just the 5v, so you can
probably ignore 3.3v and 12v.
I'm not interested in building something that's going to work for every
possible drive config- just my config :) Both the Intel X25-e
On 05/22/10 05:22 PM, Thomas Burgess wrote:
yah, it seems that rsync is faster for what i need anywaysat least
right now...
ZFS send/receive should run at wire speed for a Gig-E link.
Ian.
well it wasn't.
it was running pretty slow.
i had one really big filesystem... with rsync i'm able to do multiple
streams and it's moving much faster
On Sat, May 22, 2010 at 1:45 AM, Ian Collins i...@ianshome.com wrote:
On 05/22/10 05:22 PM, Thomas Burgess wrote:
yah, it seems that rsync