hi
by default the disk partition s2 covers the whole disk
this is fine for ufs, and has been for a LONG time.
Now zfs does not like this overlap, so you just need to run format and then
delete s2,
or use s2 and delete all the other partitions
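for example, something like this (c0t1d0 is just a placeholder disk; the slice editing itself happens at format's interactive prompts):

# prtvtoc /dev/rdsk/c0t1d0s2     <- show the current slice table and spot the overlap
# format c0t1d0                  <- then: partition -> pick the slice to drop -> set its size to 0 -> label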
(by default when you run format/fdisk it creates s2 as the whole disk and s7
for
hi
you did not answer the question: what is the RAM of the server? how many
sockets and cores etc?
what is the block size of zfs?
what is the cache RAM of your SAN array?
what is the block size/stripe size of your RAID in the SAN array? raid 5 or
what?
what is your test program and how (from what
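on the Solaris side, commands like these can answer a few of them (pool name is just an example):

# prtconf | grep Memory          <- installed RAM
# psrinfo -pv                    <- sockets and cores
# zfs get recordsize tank        <- zfs block (record) size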
just note that you can have a different zpool name but with the same old
mount point, for export purposes
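for example (pool and disk names are placeholders):

# zpool create -m /export newpool c0t2d0      <- new pool name, old mount point
or, for an existing pool:
# zfs set mountpoint=/export newpool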
-LT
On 3/8/2012 8:40 AM, Paul Kraus wrote:
Lots of suggestions (not included here), but ...
With the exception of Cindy's suggestion of using 4 disks and
mirroring (zpool attach two new
IMHO, there is no easy way out for you:
1)tape backup and restore
2)find a larger USB SATA disk, copy the data over, then restore later
after the raidz1 setup
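for option 2, a rough sketch with zfs send/receive (disk and snapshot names are examples; the single-disk pool is called export as in the original post):

# zpool create usbpool c5t0d0                          <- temporary pool on the USB disk
# zfs snapshot -r export@move
# zfs send -R export@move | zfs receive -d usbpool
then destroy the old pool, create the raidz1 pool, and run the send/receive back the other way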
-LT
On 3/7/2012 4:38 PM, Bob Doolittle wrote:
Hi,
I had a single-disk zpool (export) and was given two new disks for
expanded storage.
what is your main application for ZFS? e.g. just NFS or iSCSI for home
dirs or VMs? or Windows clients?
Is performance important? or is space more important?
what is the memory of your server?
do you want to use ZIL or L2ARC?
what is your backup or DR plan?
You need to answer all these questions
it seems that s11 shadow migration can help:-)
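a rough sketch of shadow migration on s11, assuming the old data sits at /export/old (all names are placeholders):

# zfs create -o shadow=file:///export/old rpool/export/new
the data is pulled over in the background as the new dataset is accessed; shadowstat shows the progress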
On 1/7/2012 9:50 AM, Jim Klimov wrote:
Hello all,
I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get
maybe one can do the following (assume c0t0d0 and c0t1d0)
1)split rpool mirror: zpool split rpool newpool c0t1d0s0
1b)zpool destroy newpool
2)partition the 2nd hdd c0t1d0 into two slices (s0 and s1)
3)zpool create rpool2 c0t1d0s1
4)use lucreate -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0
5)lustatus
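a rough sketch of the tail end of the procedure, assuming step 3 created rpool2 (lucreate -p takes the target pool name, and the BE names here are only examples):

# lucreate -c old-be -n new-zfsbe -p rpool2
# lustatus
# luactivate new-zfsbe
# init 6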
correction
On 1/6/2012 3:34 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
maybe one can do the following (assume c0t0d0 and c0t1d0)
1)split rpool mirror: zpool split rpool newpool c0t1d0s0
1b)zpool destroy newpool
2)partition the 2nd hdd c0t1d0 into two slices (s0 and s1)
3)zpool create rpool2
AFAIK, most ZFS-based storage appliances have moved to SAS with 7200 rpm or
15k rpm drives
most SSDs are SATA and are connected to the on-board SATA IO chips
On 12/19/2011 9:59 AM, tono wrote:
Thanks for the suggestions, especially all the HP info and build
pictures.
Two things crossed my mind on the
what is the output of zpool status for pool1 and pool2?
it seems that you have a mixed configuration in pool3, with a plain disk and a mirror
On 12/18/2011 9:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks
please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
ZIL (SSD)
maybe try the ZFS simulator SW
regards
On 12/12/2011 2:28 PM, Albert Chin wrote:
We're preparing to purchase an X4170M2 as an upgrade for our existing
X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for
4c@2.4ghz
On 12/12/2011 2:44 PM, Albert Chin wrote:
On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
wrote:
please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
ZIL (SSD)
maybe try the ZFS simulator SW
Good point. Thanks.
regards
On 12/12/2011
On 12/12/2011 3:02 PM, Gary Driggs wrote:
On Dec 12, 2011, at 11:42 AM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:
please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and ZIL(SSD)
Do those appliances also use the F20 PCIe flash cards?
no, these controllers need the slots
FYI
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-size-zfs-dedup-1354231.html
never too late:-(
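the gist of that article, in command form (pool name is an example):

# zdb -S tank      <- simulated dedup statistics, without turning dedup on
then budget roughly 320 bytes of RAM (or L2ARC) per allocated block for the dedup table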
On 12/1/2011 5:19 PM, Freddie Cash wrote:
The system has 6GB of RAM and a 10GB swap partition. I added a 30GB
swap file but this hasn't helped.
ZFS doesn't use
did you see this link
http://www.solarisinternals.com/wiki/index.php/ZFS_forensics_scrollback_script
may be out of date already
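IIRC the script digs out older uberblocks; the underlying zdb building blocks look like this (device/pool names are examples):

# zdb -l /dev/rdsk/c0t0d0s0     <- dump the vdev labels
# zdb -uuu rpool                <- dump the active uberblock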
regards
On 11/23/2011 11:14 AM, Gary Driggs wrote:
Is zdb still the only way to dive in to the file system? I've seen the
extensive work by Max Bruning on this but
AFAIK, there is no change in open source policy for Oracle Solaris
On 11/9/2011 10:34 PM, Fred Liu wrote:
... so when will zfs-related improvement make it to solaris-
derivatives :D ?
I am also very curious about Oracle's policy about source code. ;-)
Fred
for the ZFS appliance
as a file server over NFS or SMB (CIFS), sd_max_throttle does not come into play
for FC or iSCSI it may come into play
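when it does come into play, the tunable goes in /etc/system on the host, e.g. (32 is only an illustration; use the value your array vendor recommends):

set sd:sd_max_throttle=32
set ssd:ssd_max_throttle=32     <- ssd driver variant for FC-attached LUNs on SPARC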
regards
On 11/3/2011 5:29 PM, Gary wrote:
Hi folks,
I'm reading through some I/O performance tuning documents and am
finding some older references to sd_max_throttle kernel/project
http://download.oracle.com/docs/cd/E22471_01/html/820-4167/application_integration__microsoft.html#application_integration__microsoft__sun_storage_7000_provider_for_microsoft_vs
On 9/15/2011 9:19 AM, S Joshi wrote:
By iirc do you mean 'if i remember correctly' or is there a company
called iirc?
maybe try the following
1)boot the s10u8 cd into single user mode (when booting from cdrom, choose Solaris
then choose single user mode (6))
2)when asked to mount rpool just say no
3)mkdir /tmp/mnt1 /tmp/mnt2
4)zpool import -f -R /tmp/mnt1 tank
5)zpool import -f -R /tmp/mnt2 rpool
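then a quick sanity check and a clean export before rebooting (a sketch; adjust the pool names):

# zpool status tank rpool       <- confirm both pools came in healthy
# zfs list -r tank              <- datasets show up under /tmp/mnt1 because of -R
# zpool export tank rpool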
On 8/15/2011 9:12 AM,
On 8/15/2011 11:25 AM, Stu Whitefish wrote:
Hi. Thanks I have tried this on update 8 and Sol 11 Express.
The import always results in a kernel panic as shown in the picture.
I did not try an alternate mountpoint though. Would it make that much
difference?
try it
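for example (tank is just an example pool name; the readonly option only exists on newer releases):

# zpool import -f -R /a tank
# zpool import -f -R /a -o readonly=on tank     <- read-only, if your bits support it, to lower the risk of another panic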
- Original Message
hi
most modern servers have a separate ILOM that supports ipmitool, which can
talk to the HDDs
what is your server? does it have a separate remote management port?
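for example, querying the service processor over the network (host, user and the grep pattern are placeholders; which sensors/FRUs show up depends on the platform):

# ipmitool -I lanplus -H ilom-host -U root sdr list | grep -i hdd     <- disk-slot sensors on many Sun boxes
# ipmitool -I lanplus -H ilom-host -U root fru print                  <- FRU inventory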
On 8/10/2011 8:36 AM, Lanky Doodle wrote:
Hiya,
Now I have figured out how to read disks using dd to make LEDs blink, I want to
write a
hi
I tried to import a zpool at a different mount root
it hangs forever
how to recover?
can one kill the import job?
1 S root     5     0  0  0 SD ?  0 ?  Jun 27    ?  8:58  zpool-rootpool
1 S root 16786     0  0  0 SD ?  0 ?  16:11:15  ?  0:00
yes, good idea. another thing to keep in mind:
technology changes so fast that by the time you want a replacement, maybe the
HDD does not exist any more,
or the supplier has changed, so the drives are not exactly like your
original drives
On 5/28/2011 6:05 PM, Michael DeMan wrote:
Always pre-purchase