On Jan 17, 2011, at 8:22 PM, Repetski, Stephen wrote:
On Mon, Jan 17, 2011 at 22:08, Ian Collins i...@ianshome.com wrote:
On 01/18/11 04:00 PM, Repetski, Stephen wrote:
Hi All,
I believe this has been asked before, but I wasn’t able to find too much
information about the subject. Long
Hi
I have a pool with a raidz2 vdev.
Today I accidentally added a single drive to the pool.
I now have a pool that partially has no redundancy as this vdev is a single
drive.
Is there a way to remove the vdev and replace it with a new raidz2 vdev?
If not what can I do to do damage control and
With two drives it makes more sense to use a mirror than a raidz configuration.
You will have the same amount of space, and mirroring gives you better
performance, as far as I know.
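For example, a sketch of the two-drive mirror layout (pool name and device names here are hypothetical, not from the original poster's system):

```shell
# Sketch only: 'tank' and the c0t*d0 device names are hypothetical.
# A two-way mirror of two drives gives the same usable space as a
# two-disk raidz, but reads can be served from either side.
zpool create tank mirror c0t0d0 c0t1d0
zpool status tank
```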
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
I second that.
This is exactly what happened to me.
There is a bug (ID 4852783) that is in State 6-Fix Understood but it is
unchanged since February 2010.
You can also make 250 GB slices (partitions) and create RAIDZ 3x250GB and
mirror 2x1750GB (one or more).
Mirror has better performance for write operations; raidz should be faster for
reads.
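A sketch of the suggested split layout (slice names are hypothetical, and the slices themselves would be created beforehand with format(1M)): a 3 x 250 GB raidz from one slice of each 2 TB drive plus the 250 GB disk, and a mirror of the remaining ~1750 GB slices of the two 2 TB drives.

```shell
# Sketch only: pool and slice names are hypothetical.
# raidz across three 250 GB slices (one per 2 TB drive, plus the 250 GB disk)
zpool create fastpool raidz c0t0d0s0 c0t1d0s0 c0t2d0s0
# mirror across the remaining ~1750 GB slices of the two 2 TB drives
zpool create bigpool mirror c0t0d0s1 c0t1d0s1
```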
Regards
--
Piotr Tarnowski /DrFugazi/
http://www.drfugazi.eu.org/
Hi,
I would like to know about which threads will be preempted by which on my
OpenSolaris machine.
Therefore, I ran a multithreaded program, myprogram, with 32 threads on my
24-core Solaris machine. I made sure that each thread of my program has the same
priority (priority zero), so that we can
Hi guys, sorry in advance if this is somewhat a lowly question. I've recently
built a ZFS test box based on NexentaStor with 4x Samsung 2 TB drives connected
via SATA-II in a raidz1 configuration, with dedup enabled, compression off, and
pool version 23. From running bonnie++ I get the following
I was copying a filesystem using zfs send | zfs receive and inadvertently
unplugged the power to the USB disk that was the destination. Much to my
horror this caused the system to panic. I recovered fine on rebooting, but it
*really* unnerved me.
I can't find anything about this online. I
Hello, I'm going to build a home server. The system is deployed on an 8 GB USB
flash drive. I have two identical 2 TB HDDs and one 250 GB drive. Could you
please recommend a ZFS configuration for my set of hard drives?
1)
pool1: mirror 2tb x 2
pool2: 250 GB (or maybe add this drive to pool1?)
2)
pool1:
I've successfully installed NexentaStor 3.0.4 on this microserver using PXE.
Works like a charm.
On 01/15/11 11:32 PM, Gal Buki wrote:
Hi
I have a pool with a raidz2 vdev.
Today I accidentally added a single drive to the pool.
I now have a pool that partially has no redundancy as this vdev is a single
drive.
Is there a way to remove the vdev
Not at the moment, as far as I know.
and
Hi all
This is just an off-the-cuff idea at the moment, but I would like to sound
it out.
Consider the situation where someone has a large amount of off-site data
storage (of the order of 100s of TB or more). They have a slow network link
to this storage.
My idea is that this could be used to
...If this is a general rule, maybe it will be worth considering using
SHA512 truncated to 256 bits to get more speed...
Doesn't it need more investigation whether truncating a 512-bit hash to 256 bits
gives security equivalent to a plain 256-bit hash? Maybe truncation will
introduce some bias?
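As a quick illustration of what truncation means here (note that simply truncating SHA-512 is not the same as the standardized SHA-512/256, which uses different initialization vectors), assuming a system with coreutils' sha512sum:

```shell
# Truncate a SHA-512 digest to 256 bits (the first 64 hex characters).
# NB: this is plain truncation, NOT the standardized SHA-512/256,
# which uses different initialization vectors.
full=$(printf '%s' 'abc' | sha512sum | cut -c1-128)
truncated=$(printf '%s' "$full" | cut -c1-64)
echo "$truncated"
```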
On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
Hi guys, sorry in advance if this is somewhat a lowly question, I've recently
built a zfs test box based on nexentastor with 4x samsung 2tb drives
connected via SATA-II in a raidz1 configuration with dedup enabled
compression off and
Totally Off Topic:
Very interesting. Did you publish any papers on this? Where do you work? It
seems like a very fun place to work!
BTW, I thought about this. What do you say?
Assume I want to compress data and I succeed in doing so. And then I transfer
the compressed data. So all the information
Big subject!
You haven't said what your 32 threads are doing, or how you gave them
the same priority, or what scheduler class they are running in.
However, you only have 24 VCPUs, and (I assume) 32 active threads, so
Solaris will try to share resources evenly, and yes, it will preempt one
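One way to check the scheduling class and priority of each LWP on a Solaris box (a sketch; myprogram is the program from the question, and the pid used with priocntl is hypothetical):

```shell
# List per-LWP scheduling class and priority for myprogram's threads
ps -eLo pid,lwp,class,pri,args | grep myprogram

# Or watch per-thread activity live
prstat -L -p $(pgrep myprogram)

# Force all LWPs of a process into the TS class at user priority 0
# (pid 1234 is hypothetical)
priocntl -s -c TS -p 0 -i pid 1234
```

These are Solaris-specific administrative commands, so treat them as a sketch rather than something portable.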
I've since turned off dedup and added another 3 drives, and results have
improved to around 148388 K/sec on average. Would turning on compression make
things more CPU-bound and improve performance further?
On 18 Jan 2011, at 15:07, Richard Elling wrote:
On Jan 15, 2011, at 4:21 PM, Michael
On Tue, Jan 18, 2011 at 07:16:04AM -0800, Orvar Korvar wrote:
BTW, I thought about this. What do you say?
Assume I want to compress data and I succeed in doing so. And then I
transfer the compressed data. So all the information I transferred is
the compressed data. But, then you don't count
On Mon, Jan 17, 2011 at 02:19:23AM -0800, Trusty Twelve wrote:
I've successfully installed NexentaStor 3.0.4 on this microserver using PXE.
Works like a charm.
I've got 5 of them today, and for some reason NexentaCore 3.0.1 b134
was unable to write to disks (whether internal USB or the 4x
On Tue, 2011-01-18 at 15:11 +, Michael Armstrong wrote:
I've since turned off dedup, added another 3 drives and results have improved
to around 148388K/sec on average, would turning on compression make things
more CPU bound and improve performance further?
On 18 Jan 2011, at 15:07,
Thanks everyone. I think over time I'm going to update the system to include an
SSD for sure. Memory may come later, though. Thanks for everyone's responses.
Erik Trimble erik.trim...@oracle.com wrote:
On Tue, 2011-01-18 at 15:11 +, Michael Armstrong wrote:
I've since turned off dedup, added
You can't really do that.
Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes
RAM to maintain a cache table of what's in the L2ARC. Using 2GB of RAM
with an SSD-based L2ARC (even without Dedup) likely won't help you too
much vs not having the SSD.
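A rough back-of-envelope sketch of that bookkeeping overhead (the ~200-byte per-record header and the 8 KiB average block size are assumptions for illustration; the exact figures vary by ZFS release and workload):

```shell
# Estimate RAM consumed indexing an L2ARC (all figures are assumptions).
l2arc_bytes=$((80 * 1024 * 1024 * 1024))  # 80 GiB SSD devoted to L2ARC
recordsize=$((8 * 1024))                  # 8 KiB average cached block
hdr_bytes=200                             # assumed ARC header per cached block
records=$((l2arc_bytes / recordsize))
ram_mib=$((records * hdr_bytes / 1024 / 1024))
echo "approx ${ram_mib} MiB of RAM just to index the L2ARC"
```

With these assumed numbers the index alone would eat roughly 2 GB of RAM, which is why a small-memory box sees little benefit from a large L2ARC.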
If you're going to turn
On Mon, Jan 17, 2011 at 6:19 AM, Piotr Tarnowski
drfug...@drfugazi.eu.org wrote:
You can also make 250 GB slices (partitions) and create RAIDZ 3x250GB and
mirror 2x1750GB (one or more).
This configuration doesn't make a lot of sense for redundancy, since
it doesn't provide any. It will have
Ah OK, I won't be using dedup anyway, I just wanted to try it. I'll be adding
more RAM though; I guess you can't have too much. Thanks
Erik Trimble erik.trim...@oracle.com wrote:
You can't really do that.
Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes
RAM to maintain a cache
Sorry if this is well known... I tried a bunch of Google searches, but didn't
get anywhere useful. The closest I came was
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/028090.html but
that doesn't answer my question, below, regarding zfs mirror recovery.
Details of our needs follow.
We
On 1/18/2011 2:46 PM, Philip Brown wrote:
My specific question is: how easily does ZFS handle *temporary* SAN
disconnects, to one side of the mirror?
What if the outage is only 60 seconds?
3 minutes?
10 minutes?
an hour?
Depends on the multipath drivers and the failure mode. For example, if
On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
On 1/18/2011 2:46 PM, Philip Brown wrote:
My specific question is: how easily does ZFS handle *temporary* SAN
disconnects, to one side of the mirror?
What if the outage is only 60 seconds?
3 minutes?
10 minutes?
an hour?
Erik Trimble wrote:
On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
On 1/18/2011 2:46 PM, Philip Brown wrote:
My specific question is: how easily does ZFS handle *temporary* SAN
disconnects, to one side of the mirror?
What if the outage is only 60 seconds?
3 minutes?
10 minutes?
an
On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon
wrote:
ZFS's ability to handle short-term interruptions depends heavily on the
underlying device driver.
If the device driver reports the device as dead/missing/etc. at any point, then
ZFS is going to require a zpool replace action before
I've installed nexentastor on 8GB usb stick without any problems, so try
nexentastor instead of nexentacore...
On Tue, 2011-01-18 at 13:34 -0800, Philip Brown wrote:
On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon
wrote:
ZFS's ability to handle short-term interruptions depends heavily on the
underlying device driver.
If the device driver reports the device as dead/missing/etc. at any
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Trusty Twelve
Hello, I'm going to build home server. System is deployed on 8 GB USB flash
drive. I have two identical 2 TB HDD and 250 GB one. Could you please
recommend me ZFS configuration
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
Consider the situation where someone has a large amount of off-site data
storage (of the order of 100s of TB or more). They have a slow network link
to this storage.
My idea
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
As far as what the resync does: ZFS does smart resilvering, in that
it compares what the good side of the mirror has against what the
bad side has, and only copies the