We've migrated from an old Samba installation to a new box with
OpenIndiana, and it works well, but... It seems Windows now honours
the executable bit, so .exe files for installing packages are no
longer directly executable. While it is positive that Windows
honours this bit, it
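One common knob on the Samba side (a sketch, not from the thread; the share name and path below are hypothetical) is to include the execute bits in the create mask for the share holding installers, so new files stay runnable from Windows:

```ini
; Hypothetical smb.conf fragment: apply the execute bits to newly
; created files on this share.
[software]
    path = /export/software
    create mask = 0755
```

Granting execute broadly does defeat part of the point of honouring the bit, so this is a convenience/safety trade-off rather than a fix.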
- Original Message -
From: Brian Wilson brian.wil...@doit.wisc.edu
To: zfs-discuss@opensolaris.org
Cc:
Sent: Thursday, August 4, 2011 2:57:26 PM
Subject: Re: [zfs-discuss] Wrong rpool used after reinstall!
I'm curious - would it work to boot from a live CD, go to shell, and
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
swhitef...@yahoo.com wrote:
# zpool import -f tank
http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
I encourage you to open a support case and ask for an escalation on CR
7056738.
--
Mike Gerdts
Hi Mike,
Suppose I want to build a 100-drive storage system. I'm wondering if there
are any disadvantages to setting up 20 arrays of HW RAID0 (5 drives each),
then putting ZFS on these 20 virtual drives and configuring them as RAIDZ?
I understand people always say ZFS doesn't prefer HW RAID. Under
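For concreteness, assuming the controller exposes each 5-drive RAID0 set as a single LUN (device names below are hypothetical), the proposed pool would be built roughly like this:

```shell
# 20 hardware RAID0 LUNs, 5 drives each, combined into one 20-way
# RAIDZ vdev. Any single drive failure takes out its whole 5-drive
# LUN, so the pool survives at most one failed drive at a time.
zpool create tank raidz \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
    c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 \
    c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0
```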
Over provisioning does not directly increase flash performance, but allows
for greater reliability as the drive ages by improving garbage collection
efforts and reducing write amplification. This article doesn't provide any
sources, but it explains the concept at a very basic level -
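The trend is easy to reproduce with a toy model (entirely illustrative, not from the article: a simplistic flash translation layer doing uniform random overwrites with greedy garbage collection). More spare area means reclaimed blocks hold fewer still-valid pages, so less live data gets copied during garbage collection and write amplification drops:

```python
import random

def write_amplification(spare_frac, user_pages=2048, pages_per_block=32,
                        n_writes=60_000, seed=0):
    """Toy FTL: uniform random overwrites + greedy garbage collection.

    Returns (flash page writes) / (host page writes), measured after the
    drive has been filled once. Real firmware is far more sophisticated;
    this only illustrates the trend.
    """
    rng = random.Random(seed)
    n_blocks = int(user_pages * (1 + spare_frac)) // pages_per_block
    valid = [set() for _ in range(n_blocks)]  # live logical pages per block
    fill = [0] * n_blocks                     # pages programmed since erase
    where = {}                                # logical page -> block index
    free = list(range(1, n_blocks))
    open_blk = 0
    flash_writes = 0

    def host_write(lp):
        nonlocal open_blk, flash_writes
        old = where.get(lp)
        if old is not None:
            valid[old].discard(lp)            # invalidate the old copy
        if fill[open_blk] == pages_per_block: # open block is full
            if free:
                open_blk = free.pop()
            else:                             # greedy GC: fewest live pages
                victim = min(range(n_blocks), key=lambda b: len(valid[b]))
                movers = len(valid[victim])
                flash_writes += movers        # relocating live data costs writes
                fill[victim] = movers         # erase, then copy survivors back
                open_blk = victim
        valid[open_blk].add(lp)
        where[lp] = open_blk
        fill[open_blk] += 1
        flash_writes += 1

    for lp in range(user_pages):              # prime: fill the drive once
        host_write(lp)
    flash_writes = 0                          # measure steady state only
    for _ in range(n_writes):
        host_write(rng.randrange(user_pages))
    return flash_writes / n_writes
```

Running it with roughly the Intel 320's factory spare (7%) versus a 20% over-provisioned configuration shows the smaller spare area paying a visibly higher write-amplification penalty.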
Did you 4k align your partition table and is ashift=12?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
David Wragg wrote:
I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?
For a zpool called export...
# zdb export | grep ashift
ashift: 12
^C
#
As far as I know (although I don't have any WD's), all the current 4k
sector-size
maybe try the following:
1) boot the s10u8 CD into single-user mode (when booting from the CD,
choose Solaris, then choose single user mode (6))
2) when asked to mount rpool, just say no
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool
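The steps above as a single transcript (pool names from the thread; -R mounts each pool under an alternate root so neither interferes with the running system):

```shell
# After booting the s10u8 CD into single-user mode and declining
# to mount rpool:
mkdir /tmp/mnt1 /tmp/mnt2
zpool import -f -R /tmp/mnt1 tank
zpool import -f -R /tmp/mnt2 rpool
```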
On 8/15/2011 9:12 AM,
Help - I've got a bad disk in a zpool and need to replace it. I've got an
extra drive that's not being used, although it's still marked as if it's
in a pool. So I need to get the "xvm" pool destroyed, c0t5d0 marked as
available, and replace c0t3d0 with c0t5d0.
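A sketch of what that sequence implies ("tank" is a placeholder for the pool that contains c0t3d0; destroying xvm erases anything on c0t5d0, so double-check it first):

```shell
zpool destroy xvm                   # frees c0t5d0 from its old pool
zpool replace tank c0t3d0 c0t5d0    # resilver onto the spare drive
zpool status tank                   # watch the resilver progress
```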
D'oh. I shouldn't answer questions first thing Monday morning.
I think you should test this configuration with and without the
underlying hardware RAID.
If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised with a RAIDZ configuration
built on the h/w raid array in
Hi. Thanks I have tried this on update 8 and Sol 11 Express.
The import always results in a kernel panic as shown in the picture.
I did not try an alternate mountpoint though. Would it make that much
difference?
- Original Message -
From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
On Fri, 12 Aug 2011, Tom Tang wrote:
Suppose I want to build a 100-drive storage system, wondering if
there are any disadvantages to setting up 20 arrays of HW RAID0 (5
drives each), then putting ZFS on these 20 virtual drives
and configuring them as RAIDZ?
The main concern would
Hi Doug,
The vms pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
If not, you'll need to restore from backup,
On Fri, Aug 12, 2011 at 06:53:22PM -0700, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
For ZIL, I
suppose we could get the 300GB drive and overcommit to 95%!
What kind of benefit does that
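One way to get that effect on Solaris (a sketch; the device name is hypothetical) is to size the log slice far below the drive's capacity in format(1M) and leave the remainder unpartitioned, so the controller can treat it as extra spare area:

```shell
# s0 was sized to only a few GB in format(1M); the untouched rest of
# the SSD serves as additional over-provisioning.
zpool add tank log c4t1d0s0
```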
On 8/15/2011 11:25 AM, Stu Whitefish wrote:
Hi. Thanks I have tried this on update 8 and Sol 11 Express.
The import always results in a kernel panic as shown in the picture.
I did not try an alternate mountpoint though. Would it make that much
difference?
try it
- Original Message
On Aug 11, 2011, at 1:16 PM, Ray Van Dolson wrote:
On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote:
On 08/12/11 08:00 AM, Ray Van Dolson wrote:
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
Unfortunately this panics the same exact way. Thanks for the suggestion though.
- Original Message -
From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. laot...@gmail.com
To: zfs-discuss@opensolaris.org
Cc:
Sent: Monday, August 15, 2011 3:06:20 PM
Subject: Re: [zfs-discuss] Kernel panic on
imho, not a good idea: if any two hdds (in different raid0 sets) fail, the
zpool is dead. If possible, just make each hdd its own single-drive raid0
LUN and use zfs to do the mirroring; raidz or raidz2 would be the last choice.
Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D
On Aug 12, 2011, at 21:34, Tom Tang thomps...@supermicro.com wrote:
Suppose
iirc if you use just the two hdds, you can import the zpool.
Can you try import -R with only the two hdds attached at a time?
Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D
On Aug 15, 2011, at 13:42, Stu Whitefish swhitef...@yahoo.com wrote:
Unfortunately this panics the same exact way. Thanks for the suggestion
On Mon, August 15, 2011 12:25, Ray Van Dolson wrote:
Perhaps this is it. Pulled the recommendation from Intel's Solid-State
Drive 320 Series in Server Storage Applications whitepaper.
Section 4.1:
[...]
On the Intel SSD 320 Series, the spare capacity reserved at the
factory is 7% to
I am catching up here and wanted to see if I correctly understand the
chain of events...
1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
determine these disks are fine
3. copy data to save to
I'm sorry, I don't understand this suggestion.
The pool that won't import is a mirror on two drives.
- Original Message -
From: LaoTsao laot...@gmail.com
To: Stu Whitefish swhitef...@yahoo.com
Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
Sent: Monday, August 15, 2011
In message 1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com, Stu
Whitefish writes:
I'm sorry, I don't understand this suggestion.
The pool that won't import is a mirror on two drives.
Disconnect all but the two mirrored drives that you must import
and try to import from a S11X LiveUSB.
Hi Paul,
1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
I don't remember at this point which disks were which, but I believe it was 0
and 1 because during the first install there were only 2 drives in the box
because I had only 2 drives.
2. add two more
Given I can boot to single user mode and elect not to import or mount any
pools, and that later I can issue an import against only the pool I need, I
don't understand how this can help.
Still, given that nothing else seems to help I will try this and get back to
you tomorrow.
Thanks,
Jim
Hello Stu Whitefish and List,
On August, 15 2011, 21:17 Stu Whitefish wrote in [1]:
7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
kernel panic, even when booted from different OS versions
Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
from Oracle)
On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson rvandol...@esri.com wrote:
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote:
On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson rvandol...@esri.com wrote:
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by
From: Ray Van Dolson [mailto:rvandol...@esri.com]
Sent: Monday, August 15, 2011 12:26 PM
On the Intel SSD 320 Series, the spare capacity reserved at the
factory is 7% to 11% (depending on the SKU) of the full NAND
capacity. For better random write performance and endurance, the
I am creating a custom Solaris 11 Express CD used for disaster recovery.
I have included the necessary files on the system to run zfs commands
without error (no apparent missing libraries or drivers). However, when
I create a zvol, the device in /devices and the link to
/dev/zvol/dsk/rpool do
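If the truncated report is heading where it appears to be (the /devices node and /dev/zvol links never show up on the custom CD, which is an assumption on my part), it may be worth forcing devfsadm to reload the zfs link generators:

```shell
devfsadm -i zfs             # load the zfs driver, create its /devices nodes
devfsadm -v                 # rebuild /dev links, listing what it creates
ls -l /dev/zvol/dsk/rpool
```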