Hi all,
Yesterday I had to remove a zpool device due to controller errors (I
tried to replace the hard disk, but checksum errors occurred again), so I
connected a fresh hard disk to another controller port.
Now I have the problem that zpool status looks as follows:
r...@storage:~# zpool status
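For reference, a minimal sketch of the replacement procedure described above, assuming the pool is named tank and using the hypothetical device names c2t0d0 (the failing disk) and c3t0d0 (the fresh disk on the new controller port):

# zpool replace tank c2t0d0 c3t0d0   (starts a resilver onto the new disk; the old one is detached when it completes)
# zpool status tank                  (monitor the resilver progress)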
Hi list,
I have a serious issue with my zpool.
My zpool consists of 4 vdevs which are assembled into 2 mirrors.
One of these mirrors got degraded because of too many errors on each vdev
of the mirror.
Yes, both vdevs of the mirror got degraded.
According to Murphy's law I don't have a backup as
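As a first diagnostic step, a minimal sketch assuming the pool is named tank (how much is recoverable depends on which blocks are affected on each side of the mirror):

# zpool status -v tank   (lists any files with unrecoverable errors)
# zpool clear tank       (resets the error counters so new errors can be told apart from old ones)
# zpool scrub tank       (re-reads and verifies every block; errors still readable from the other side of the mirror get repaired)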
I accidentally replied only to Cindy, but I wanted to reply to the list.
I don't want to take up too much of Cindy's time... maybe one of the list members can help
me as well.
-Original Message-
From: Matthias Appel
Sent: Tuesday, December 8, 2009 03:34
To: 'cindy.swearin...@sun.com
NSA might choose in the future.
I just found this link on the Backblaze blog and I hope you will find it
as amusing as I do:
http://blog.backblaze.com/2009/11/12/nsa-might-want-some-backblaze-pods/
--
Give a man a fish and you feed him for a day; give him a freshly-charged
Electric Eel and
I also consider myself a noob when it comes to ZFS, but I already built
myself a ZFS filer, and maybe I can
enlighten you by sharing my advanced-noob-who-has-read-a-lot-about-ZFS
thoughts on ZFS.
A few examples of "duh?" questions:
- How can I effect OCE (online capacity expansion) with ZFS? (see the sketch below) The traditional 'back up all the
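For what it's worth, a minimal sketch of online capacity expansion in ZFS, assuming a pool named tank and hypothetical device names; the pool grows immediately and stays online, with no backup/restore cycle:

# zpool add tank mirror c1t4d0 c1t5d0   (adds another mirror; capacity grows while the pool is in use)
# zpool list tank                       (shows the new size right away)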
People here dream of using it for the ZFS intent log, but it is clear
that this was not Sun's initial focus for the product.
At the moment I'm considering using a Gigabyte iRAM as a ZIL device.
(see
http://cgi.ebay.com/Gigabyte-IRAM-I-Ram-GC-RAMDISK-SSD-4GB-PCI-card-SATA_W0Q
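A minimal sketch of attaching such a device as a separate log, assuming the pool is named tank and the iRAM shows up as the hypothetical device c4t0d0:

# zpool add tank log c4t0d0   (dedicates the device to the ZFS intent log)
# zpool status tank           (the device then appears under a separate 'logs' section)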
Hi,
at the moment I am running a pool consisting of 4 vdevs (Seagate Enterprise
SATA disks) assembled into 2 mirrors.
Now I want to add two more drives to extend the capacity to 1.5 times the
old capacity.
As these mirrors will be striped in the pool, I want to know what will
happen to the
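A minimal sketch of that expansion, assuming the pool is named tank and the two new drives show up as the hypothetical devices c1t5d0 and c1t6d0:

# zpool add tank mirror c1t5d0 c1t6d0   (adds a third mirror, striped alongside the existing two)
# zpool status tank                     (shows the new three-mirror layout)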
Is anyone else tired of seeing the word redundancy? (:-)
Only in a perfect world (tm) ;-)
IMHO there is no such thing as too much redundancy.
In the real world the possibilities of redundancy are only limited by money,
be it online redundancy (mirror/RAIDZx), offline redundancy (tape
Redundancy costs in terms of both time and money. Redundant hardware
which fails or feels upset requires time to administer and repair.
This is why there is indeed such a thing as too much redundancy.
Yes, that's true, but all I wanted to say is: if there is an infinite amount of money
there can be
You will see more IOPS/bandwidth, but if your existing disks are very
full, then more traffic may be sent to the new disks, which results in
less benefit.
OK, does that mean that, over time, data will be distributed across all mirrors?
(assuming all blocks get written again at some point)
I think a useful
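A quick way to watch that redistribution, assuming a pool named tank (ZFS directs proportionally more new writes to the vdevs with the most free space, so the imbalance shrinks as data is rewritten):

# zpool iostat -v tank 5   (per-vdev capacity and I/O statistics, sampled every 5 seconds)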
So, yes, SSD and HDD are different, but latency is still important.
On an SSD, though, write performance is much more unpredictable than on an HDD.
If you want to write to an SSD, you have to erase the used blocks (assuming
this is not a brand-new SSD) before you can write to them.
This takes
From: Bruno Sousa [mailto:bso...@epinfante.com]
Sent: Tuesday, October 20, 2009 22:20
To: Matthias Appel
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Adding another mirror to storage pool
Hi,
Something like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id