Thanks.
But that doesn't seem to be the case.
I just recalled a thread on this list which said an SMI label and an EFI label
cannot coexist on one disk.
Is it correct?
Let me describe my case.
I have a 160GB HDD -- say, c0t0d0. I use the OpenSolaris installer to carve out a 100GB
slice -- c0t0d0s0 -- for the rpool.
And I want
On Jun 1, 2010, at 4:35 AM, Fred Liu wrote:
On Mon, 31 May 2010, Sandon Van Ness wrote:
With sequential writes, I don't see how parity writing would be any
different from when I just created a 20-disk zpool which is doing the
same writes every 5 seconds; the only difference is it isn't maxing
out CPU usage when doing the writes and
We have a couple of 48-slot, 1TB-drive bays from PAC Data. The drive bays support
RAID 0, 1, and 5 -- no JBOD unfortunately -- with a Fibre Channel interface. What would be
the best way to configure these drives? We were thinking of creating 96 single-drive 1TB
RAID-0 stripes so that we can let ZFS handle
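A sketch of what the ZFS side of that approach might look like, assuming the 96 single-drive LUNs show up as c3t0d0 through c3t95d0 (hypothetical device names): group them into raidz2 vdevs so that ZFS, not the array, provides redundancy and self-healing.

```shell
# Hypothetical: build a pool from raidz2 vdevs of 8 single-drive LUNs each,
# so ZFS handles parity and checksummed self-healing instead of the array.
zpool create tank \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
  raidz2 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
# ...and so on for the remaining LUNs, 12 raidz2 vdevs in total.
```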
On Tue, 1 Jun 2010, GARRISON, TRAVIS J. wrote:
Hi--
The purpose of the ZFS dump volume is to provide space for a
system crash dump. You can choose not to have one, I suppose,
but you wouldn't be able to collect valuable system info.
Thanks,
Cindy
On 05/30/10 11:28, me wrote:
Reinstalling grub helped.
What is the purpose of dump slice?
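For reference, a sketch of how the dump device can be inspected (and resized, if it is taking too much space) on a ZFS root system; the volume name assumes a default rpool layout:

```shell
# Show the current crash-dump configuration: dump device, savecore directory.
dumpadm
# On a ZFS root the dump device is normally a zvol such as rpool/dump:
zfs list rpool/dump
# The dump volume is an ordinary zvol, so it can be resized if needed:
zfs set volsize=1G rpool/dump
```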
Is it currently possible (Solaris 10 u8) to encrypt a ZFS pool?
Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Fri, May 28, 2010 11:04, Thanassis Tsiodras wrote:
I've read on the web that copies=2 affects only the files copied *after* I
have changed the setting
That is correct.
Rewriting existing datasets is a feature desired for future versions (it would enable
a LOT of things, including shrinking pools and
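To illustrate the behavior described above (dataset and file names are hypothetical): after changing copies, only newly written blocks are stored twice, so pre-existing files must be rewritten to gain the extra copy.

```shell
# copies applies only to blocks written after the setting is changed.
zfs get copies tank/data        # default is 1
zfs set copies=2 tank/data
# Files that already existed still have a single copy of each block.
# To get two copies of an old file, rewrite it, e.g.:
cp /tank/data/old.file /tank/data/old.file.tmp && \
  mv /tank/data/old.file.tmp /tank/data/old.file
```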
Hi--
I'm no user property expert, but I have some syntax for you to try to
resolve these problems. See below.
Maybe better ways exist, but one way to correct datapool's inheritance of
com.sun:auto-snapshot:dailyy is to set it to false, inherit the false
setting, and then reset the correct property at
On Mon, May 31, 2010 at 5:44 AM, wojciech wojciech.g...@cimr.cam.ac.uk wrote:
I wanted to do the same with system users,
but I didn't notice that I ran inherit -r on datapool instead of
datapool/system, so all users under datapool/system now inherit settings
from datapool.
zfs inherit -r
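A sketch of the recovery sequence suggested above, assuming the property involved is com.sun:auto-snapshot:daily and the intended value at datapool/system is true (both are assumptions from context):

```shell
# Pin an explicit value at the top so children stop inheriting the wrong one:
zfs set com.sun:auto-snapshot:daily=false datapool
# Clear any stray local settings below datapool/system so they inherit again:
zfs inherit -r com.sun:auto-snapshot:daily datapool/system
# Restore the intended value at the level where it belongs:
zfs set com.sun:auto-snapshot:daily=true datapool/system
```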
On Tue, Jun 1, 2010 at 8:43 AM, GARRISON, TRAVIS J. garri...@otc.edu wrote:
Anyone have any ideas?
Have you contacted the vendor? There may be a way to enable JBOD mode,
perhaps with new firmware.
-B
--
Brandon High : bh...@freaks.com
On Tue, Jun 1, 2010 at 9:45 AM, Tony MacDoodle tpsdoo...@gmail.com wrote:
Is it currently possible (Solaris 10 u8) to encrypt a ZFS pool?
No. The feature is not in current snv builds yet, either.
-B
--
Brandon High : bh...@freaks.com
On 6/1/10 4:35 AM -0700 Fred Liu wrote:
I just recalled a thread on this list which said an SMI label and an EFI label
cannot coexist on one disk. Is it correct?
Correct. But that was not your original question.
Let me describe my case.
I have a 160GB HDD -- saying c0t0d0. I use OpenSolaris installer
-Original Message-
From: Frank Cusack [mailto:frank+lists/z...@linetwo.net]
Sent: Wednesday, June 02, 2010 2:38 AM
To: Fred Liu
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] SMI lable and EFI label in one disk?
On 6/1/10 4:35 AM -0700 Fred Liu wrote:
I just recalled a
Hello All,
We are currently testing a NFS+Sun Cluster solution with ZFS in our
environment. Currently we have 2 HP DL360s each with a 2-port LSI SAS 9200-8e
controller (mpt_sas driver) connected to a Xyratex OneStor SP1224s 24-bay SAS
tray. The Xyratex SAS tray has 2 ports on the controller
I have but unfortunately they don't support JBOD with this chassis. We now pay
much better attention before we purchase something. :)
-Original Message-
From: Brandon High [mailto:bh...@freaks.com]
Sent: Tuesday, June 01, 2010 1:00 PM
To: GARRISON, TRAVIS J.
Cc:
Silly question - you're not trying to have the ZFS pool imported on
both hosts at the same time, are you? Maybe I misread; I had a hard
time following the full description of exactly which configuration caused
the SCSI resets.
On Jun 1, 2010, at 2:22 PM, Steve Jost wrote:
Hello All,
We are
On Tue, Jun 1, 2010 at 11:54 AM, Fred Liu fred_...@issi.com wrote:
That is true. Are there any details on the internals behind this limitation? How can I achieve my goal?
You can't do it using the Caiman installer that comes with the osol dev builds.
There are a few ways that you can do it now that the system is
On Jun 1, 2010, at 2:43 PM, Steve D. Jost wrote:
Definitely not a silly question. And no, we create the pool on
node1 then set up the cluster resources. Once setup, sun cluster
manages importing/exporting the pool into only the active cluster
node. Sorry for the lack of clarity..
not much sleep has been had recently.
When connecting to
That is cool.
Many thanks.
It seems the Solaris 10 U8 installer is more flexible in this respect and
can achieve my goal directly.
Thanks.
Fred
-Original Message-
From: Brandon High [mailto:bh...@freaks.com]
Sent: Wednesday, June 02, 2010 3:40 AM
To: Fred Liu
Cc: Frank Cusack;
Thanks.
Fred
On 2/06/10 11:39 AM, Fred Liu wrote:
Thanks.
No.
If you must disable MPxIO, then you do so after installation,
using the stmsboot command.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
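A sketch of the stmsboot usage mentioned above (run as root; a reboot is required for the change to take effect):

```shell
# Disable MPxIO for supported HBA ports; stmsboot updates /etc/vfstab and
# the dump configuration to use the new device names, then asks to reboot.
stmsboot -d
# After the reboot, disks appear under the short c#t#d# names again.
# To see the mapping between MPxIO and non-MPxIO device names:
stmsboot -L
```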
Yes. But the output of the zpool commands still uses the MPxIO naming convention,
and the format command cannot find any disks.
Thanks.
Fred
-Original Message-
From: James C. McPherson [mailto:j...@opensolaris.org]
Sent: Wednesday, June 02, 2010 9:58
To: Fred Liu
Cc: zfs-discuss@opensolaris.org
Subject:
On 2/06/10 12:01 PM, Fred Liu wrote:
Yes. But the output of zpool commands still uses MPxIO naming convention
and format command cannot find any disks.
_But_ ?
What is the problem with ZFS using the device naming system
that the system provides it with?
Do you mean that you cannot see any
Hello,
I currently have a RAID-1 array set up on Windows 7 with a pair of 1.5TB drives.
I don't have enough space on any other drives to make a backup of all this data,
and I really don't want to copy my ~1.1TB of files over the network anyway.
What I want to do is get a third 1.5TB drive and
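One common migration path for this situation, sketched under the assumption that the third drive appears as c2t2d0 and one freed original drive as c2t3d0 (hypothetical names): build a single-disk pool on the new drive, copy the data over locally, then attach a freed drive to form a ZFS mirror.

```shell
# Create a single-disk pool on the new 1.5TB drive:
zpool create tank c2t2d0
# ...copy the ~1.1TB of data into /tank locally...
# Break the Windows mirror, free one original drive, and attach it so
# ZFS resilvers it into a two-way mirror of the existing disk:
zpool attach tank c2t2d0 c2t3d0
zpool status tank   # watch resilver progress
```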
The Intel SSD is not a dual ported SAS device. This device must be supported by
the SAS expander in your external chassis.
Did you use an AAMUX interposer card for the SATA device between the connector
of the chassis and the SATA drive?
Andreas
--
This message posted from opensolaris.org
In fact, there is no technical problem with the MPxIO names.
It only matters that storage admins have to remember them.
I think there is no way to give short aliases to these long, tedious MPxIO names.
And I have only one HBA card, so I don't really need multipathing.
The simple cxtxdx names would be
Hi,
It would be of great help to understand how the actually allocated disk
space keeps getting reduced as shown below (I understand that different
filesystem/volume-management implementations would have different logic
for setting aside some space reserved for metadata, etc.). If the following
can be