[zfs-discuss] new labelfix needed

2010-08-31 Thread Benjamin Brumaire
Hi,

labelfix (http://www.opensolaris.org/jive/thread.jspa?messageID=229969) has 
already saved a lot of data, as it makes detached devices importable.

A quick test today shows labelfix won't work anymore:

#uname -a
SunOS bigmama 5.11 snv_127 i86pc i386 i86pc Solaris
#./labelfix /dev/rdsk/c0d1s4
ld.so.1: labelfix: fatal: relocation error: file labelfix: symbol zio_checksum: 
referenced symbol not found
Killed
#
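For whoever picks this up: the error suggests the labelfix binary references 
a zio_checksum symbol that libzpool on snv_127 apparently no longer exports, 
so it likely just needs a rebuild against current sources.  A quick way to 
confirm (my own sketch; the library path is a guess):

#ldd -r ./labelfix | grep 'symbol not found'
#nm /usr/lib/libzpool.so.1 | grep zio_checksum

The first command lists every relocation the runtime linker can't resolve; 
the second shows whether the library still exports the symbol at all.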

Could anyone with the right skills have a look at it?
As this feature didn't make it into ZFS itself, it would be nice to have it again.

bbr


Re: [zfs-discuss] new labelfix needed

2010-08-31 Thread Mark J Musante

On Mon, 30 Aug 2010, Benjamin Brumaire wrote:

As this feature didn't make it into ZFS itself, it would be nice to have it 
again.


Better to spend time fixing the problem that requires a 'labelfix' as a 
workaround, surely.  What's causing the need to fix vdev labels?



Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Ray Van Dolson
On Mon, Aug 30, 2010 at 10:11:32PM -0700, Christopher George wrote:
  I was wondering if anyone had any benchmarks showing this alignment 
  mattered on the latest SSDs. My guess is no, but I have no data.
 
 I don't believe there can be any doubt that a Flash based SSD (tier1 
 or not) is negatively affected by partition misalignment.  It is 
 intrinsic to the required asymmetric erase/program dual operation and 
 the resultant RMW penalty on any unaligned write.  This is detailed in 
 the following vendor benchmarking guidelines (SF-1500 controller):
 
 http://www.smartm.com/files/salesLiterature/storage/AN001_Benchmark_XceedIOPSSATA_Apr2010_.pdf
 
 Highlight from the link: "Proper partition alignment is one of the most 
 critical attributes that can greatly boost the I/O performance of an 
 SSD due to reduced read-modify-write operations."
 
 It should be noted that the above only applies to Flash based SSDs; an 
 NVRAM based SSD does *not* suffer the same fate, as its performance 
 neither depends on nor varies with partition (mis)alignment.

Here's an article with some benchmarks:

  http://wikis.sun.com/pages/viewpage.action?pageId=186241353

Seems to really impact IOPS.
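
If anyone wants to reproduce this locally, a crude check (my own sketch; 
device names are made up, and this writes raw to the slices, so use scratch 
disks only) is to lay out one slice 4k-aligned and one deliberately 
misaligned, then time an identical write load against each:

#ptime dd if=/dev/zero of=/dev/rdsk/c0t1d0s0 bs=4k count=100000
#ptime dd if=/dev/zero of=/dev/rdsk/c0t1d0s1 bs=4k count=100000

With s0 starting on an aligned cylinder and s1 one cylinder off, the 
misaligned slice should show noticeably lower throughput on a Flash SSD.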

Ray


Re: [zfs-discuss] Intermittent ZFS hang

2010-08-31 Thread David Blasingame Oracle

Charles,

Just like anything else in UNIX, there are several ways to drill down on the 
problem.  I would probably start with a live crash dump (savecore -L) when 
you see the problem.  Another method would be to grab multiple stats 
commands during the problem so you can drill down later.  I would probably 
use this method if the problem lasts for a while, and then drill down with 
DTrace based on what I saw.  Which method to use will depend on your skill 
and on what you see when looking at the problem.
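
A minimal sketch of that second approach (intervals and log path are my 
own choices), left running until the hang is captured:

while true; do
    date
    iostat -xn 5 2
    mpstat 5 2
    echo ::arc | mdb -k
    echo ::spa | mdb -k
    echo ::zio_state | mdb -k
    sleep 30
done > /var/tmp/zfs-hang-stats.log 2>&1

Each pass timestamps the output, grabs two 5-second iostat and mpstat 
samples, and dumps ZFS kernel state via mdb, so you can line the numbers 
up against the hang window afterwards.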


Dave

On 08/30/10 16:15, Charles J. Knipe wrote:

David,

Thanks for your reply.  Answers to your questions are below.

  

Is it just ZFS hanging (or what appears to be slowing down or
blocking), or does the whole system hang?



Only the ZFS storage is affected.  Any attempt to write to it blocks until the 
issue passes.  Other than that the system behaves normally.  As far as I 
remember, I have not tried writing to the root pool while this is going on; 
I'll have to check that next time.  I suspect the problem is limited to a 
single pool.

  

What does iostat show during the time period of the slowdown?
What does mpstat show during the time of the slowdown?

You can look at the metadata statistics by running the following:
echo ::arc | mdb -k
When looking at a ZFS problem, I usually like to gather:
echo ::spa | mdb -k
echo ::zio_state | mdb -k



I plan to dump information from all of these sources next time I can catch 
it in the act.  Any other diag commands you think might be useful?

  

I suspect you could drill down more with dtrace or
lockstat to see
where the slowdown is happening.



I'm brand new to DTrace.  I'm doing some reading now toward being in a position 
to ask intelligent questions.

-Charles
  





Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Bob Friesenhahn

On Mon, 30 Aug 2010, Christopher George wrote:


It should be noted that the above only applies to Flash based SSDs; an
NVRAM based SSD does *not* suffer the same fate, as its performance
neither depends on nor varies with partition (mis)alignment.


What is an NVRAM based SSD?  It seems to me that you are misusing the 
term NVRAM.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Christopher George
 What is an NVRAM based SSD? 

It is simply an SSD (Solid State Drive) that does not use Flash but 
power-protected (non-volatile) DRAM as its primary storage medium.

http://en.wikipedia.org/wiki/Solid-state_drive

I consider the DDRdrive X1 to be an NVRAM based SSD even 
though we delineate the storage media used depending on host 
power state.  The X1 exclusively uses DRAM for all IO 
processing (host is on) and then Flash for permanent non-volatility 
(host is off).

Thanks,

Christopher George
Founder/CTO
www.ddrdrive.com


Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Brandon High
On Mon, Aug 30, 2010 at 3:05 PM, Ray Van Dolson rvandol...@esri.com wrote:
 I want to fix (as much as is possible) a misalignment issue with an
 X-25E that I am using both for the OS and as a slog device.

It's pretty easy to get the alignment right.

fdisk uses a default geometry of 63 sectors per track and 255 heads, which 
isn't easy to change.  This makes each cylinder 63 * 255 * 512 bytes.  You 
want ( $cylinder_offset * 63 * 255 * 512 ) / $block_alignment_size to divide 
evenly.  For 4k alignment, the smallest offset that works is 8 cylinders.

With fdisk, create a SOLARIS2 partition that uses the entire disk. The 
partition will run from cylinder 1 to whatever. Cylinder 0 is used for the 
MBR, so the partition is automatically un-aligned.

When you create slices in format, the MBR cylinder isn't visible, so you 
have to subtract 1 from the offset: your first slice should start on 
cylinder 7, and each additional slice should start on a cylinder that is a 
multiple of 8 minus 1, e.g. 63, 1999, etc.

It doesn't matter if the end of a slice is unaligned, other than that an 
aligned end makes aligning the next slice easier.
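
A quick sanity check of the arithmetic (my own sketch): a slice created in 
format at cylinder C actually starts at absolute cylinder C+1, i.e. at 
sector (C+1) * 63 * 255, and it's 4k-aligned when that starting sector is 
divisible by 8 (eight 512-byte sectors per 4k block):

#echo '8 * 63 * 255' | bc
128520
#echo '128520 % 8' | bc
0
#echo '9 * 63 * 255 % 8' | bc
1

So format cylinder 7 (absolute cylinder 8) lands exactly on a 4k boundary, 
while format cylinder 8 (absolute 9) is one 512-byte sector off.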

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Terrible ZFS performance on a Dell 1850 w/ PERC 4e/Si (Sol10U6)

2010-08-31 Thread Robert Loper
I ran into this issue on my Dell 1850s also.  You'll need to go into the BIOS,
change the PERC controller mode from RAID to SCSI, and reboot.  When the
system comes back up it will warn about switching RAID controller modes and
possible data loss (have backups!).  Then, if you boot from DVD/Jumpstart,
you should see 2 disks and can just build a 2-disk ZFS mirror for rpool.

Hope this helps...

 - Robert Loper

-- Forwarded message --
From: Andrei Ghimus ghi...@gmail.com
To: zfs-discuss@opensolaris.org
Date: Mon, 30 Aug 2010 11:05:27 PDT
Subject: Re: [zfs-discuss] Terrible ZFS performance on a Dell 1850 w/ PERC
4e/Si (Sol10U6)
I have the same problem you do: ZFS performance under Solaris 10 u8 is
horrible.

When you say passthrough mode, do you mean non-RAID configuration?
And if so, could you tell me how you configured it?

The best I can manage is to configure each physical drive as a single-drive 
RAID 0 array and then export it as a logical drive.

All tips/suggestions are appreciated.


Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Haudy Kazemi

Christopher George wrote:

 What is an NVRAM based SSD?

 It is simply an SSD (Solid State Drive) that does not use Flash but
 power-protected (non-volatile) DRAM as its primary storage medium.

 http://en.wikipedia.org/wiki/Solid-state_drive

 I consider the DDRdrive X1 to be an NVRAM based SSD even
 though we delineate the storage media used depending on host
 power state.  The X1 exclusively uses DRAM for all IO
 processing (host is on) and then Flash for permanent non-volatility
 (host is off).



NVRAM = non-volatile random access memory.  It is a general category.
EEPROM = electrically-erasable programmable read-only memory.  It is a 
specific type of NVRAM.
Flash memory = memory used in flash devices, commonly NOR or NAND 
based.  It is a specific type of EEPROM, which in turn is a specific 
type of NVRAM.


http://en.wikipedia.org/wiki/Non-volatile_random_access_memory
http://en.wikipedia.org/wiki/EEPROM
http://en.wikipedia.org/wiki/Flash_memory

He means a DRAM based SSD with NVRAM (flash) backup, as opposed to SSDs 
that use NVRAM (flash) directly.  This class of SSD may use DDR DIMMs or 
may be integrated.  Almost all such devices that retain their data upon 
power loss are technically NVRAM based.  (An exception would be a hard 
drive based device that uses a DRAM cache equal to its hard drive storage 
capacity.)  It is effectively what you would get from a regular flash based 
SSD with an internal RAM cache equal in size to the nonvolatile storage, 
plus enough energy storage to write out the whole cache upon power loss.


I doubt there would be any additional performance beyond what you could 
see from a RAMDISK carved from main memory (if anything, theoretical 
performance would be lower because of lower bus bandwidths).  It does, 
however, effectively solve the problems posed by motherboard physical RAM 
limits and by unexpected power loss due to failed power supplies or UPSes.