Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Thanks for the help, but I cannot get it to work.

j...@opensolaris:~# zpool import
  pool: vault
id: 8738898173956136656
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        vault        UNAVAIL  missing device
          raidz1-0   ONLINE
            c11d0    ONLINE
            c12d0    ONLINE
            c12d1    ONLINE
            c10d1    ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
j...@opensolaris:~# zpool clear vault
cannot open 'vault': no such pool
j...@opensolaris:~# zpool add vault log /dev/dsk/c10d0p1
cannot open 'vault': no such pool

j...@opensolaris:~# zpool import vault
cannot import 'vault': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.

It seems to me that the 'missing' log device is the problem. The first time I 
did a 'zpool import' it was aware of the log device, but the disks were missing 
(because they had been moved to a different controller). Now that the disks are 
back, the log is not showing up anymore.

I read http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6707530 and 
if I understand correctly it is related to my problem, but it should have 
been fixed in build 96? I am on build 133.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Jason Barr
Hello,

I want to slice these 3 disks into 2 partitions each and configure 1 Raid0 and 
1 Raidz1 on these 3.

What exactly has to be done? I know how to use format and fdisk, but not 
exactly how for this setup.

In case of disk failure: how do I replace the faulty one?

Thank you for a great product

Jason
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
I cannot import - that is the problem. :-(

I have read the discussions you referred to (and quite a few more), and also 
about the logfix program. I also found a discussion where 'zpool import -FX' 
solved a similar problem, so I tried that, but no luck.

Now I have read so many discussions and blog posts I am getting dizzy. :-)

To summarize:

I moved the disks without exporting first
I tried to import
I moved them back
I cannot import the pool because of missing log
The log is there, the disks are back but I still cannot import

According to bug 6707530 this should have been fixed in b96. Since I am on 
b133 it shouldn't affect me, or am I wrong?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
I don't think that is the problem (but I am not sure). It seems like the 
problem is that the ZIL is missing. It is there, but not recognized.

I used fdisk to create a 4GB partition on an SSD, and then added it to the pool 
with the command 'zpool add vault log /dev/dsk/c10d0p1'.

When I try to import the pool, it says the log is missing. When I try to add the 
log to the pool, it says there is no such pool (since it isn't imported yet). 
Catch-22? :-)
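
For reference, a minimal sketch of that sequence plus a quick label check (pool 
name 'vault' and the SSD device c10d0 are from this thread; the fdisk step is 
approximate, not the exact invocation):

# create a ~4GB primary fdisk partition on the SSD, then add it as a log
fdisk /dev/rdsk/c10d0p0
zpool add vault log /dev/dsk/c10d0p1
# check whether ZFS can still see a vdev label on the partition
zdb -l /dev/dsk/c10d0p1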
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Ian Collins

On 05/15/10 09:43 PM, Jason Barr wrote:

Hello,

I want to slice these 3 disks into 2 partitions each and configure 1 Raid0 and 
1 Raidz1 on these 3.

   

Let's get the obvious question out of the way first: why?

If you intend one two-way mirror and one raidz, you will either have to 
waste one slice, or have two slices from the same drive on the raidz, 
which isn't a good idea.


What is your goal?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Roy Sigurd Karlsbakk
- Jan Hellevik opensola...@janhellevik.com skrev:

 I don't think that is the problem (but I am not sure). It seems like
 te problem is that the ZIL is missing. It is there, but not
 recognized.
 
 I used fdisk to create a 4GB partition of a SSD, and then added it to
 the pool with the command 'zpool add vault log /dev/dsk/c10d0p1'.
 
 When I try to import the pool is says the log is missing. When I try
 to add the log to the pool it says there is no such pool (since it
 isn't imported yet). Catch22? :-)

Which version of OpenSolaris/zpool is this? There was a problem with earlier 
osol (up to snv_129 or so, I don't remember exactly) where it failed to import 
a pool if the ZIL was missing - you effectively lost the whole pool. This was 
fixed in later (development) versions of OpenSolaris. I have still seen some 
reports of problems with this in later versions, but I don't have the links 
handy - google for it :)

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-15 Thread Marc Bevand
I have done quite some research over the past few years on the best (i.e. 
simple, robust, inexpensive, and performant) SATA/SAS controllers for ZFS, 
especially in terms of throughput analysis (many of them are designed with an 
insufficient PCIe link width). I have seen many questions on this list about 
which one to buy, so I thought I would share my knowledge: 
http://blog.zorinaq.com/?e=10 Very briefly:

- The best 16-port one is probably the LSI SAS2116, 6Gbps, PCIe (gen2) x8. 
Because it is quite pricey, it's probably better to buy 2 8-port controllers.
- The best 8-port is the LSI SAS2008 (faster, more expensive) or SAS1068E 
(150MB/s/port should be sufficient).
- The best 2-port is the Marvell 88SE9128 or 88SE9125 or 88SE9120, because 
PCIe gen2 allows a throughput of at least 300MB/s on the PCIe link with 
Max_Payload_Size=128. And this one is particularly cheap ($35). AFAIK this is 
the _only_ controller on the entire market that allows 2 drives to not 
bottleneck an x1 link.

I hope this helps ZFS users here!

-mrb

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-15 Thread Pasi Kärkkäinen
On Sat, May 15, 2010 at 11:01:00AM +, Marc Bevand wrote:
 I have done quite some research over the past few years on the best (ie. 
 simple, robust, inexpensive, and performant) SATA/SAS controllers for ZFS. 
 Especially in terms of throughput analysis (many of them are designed with an 
 insufficient PCIe link width). I have seen many questions on this list about 
 which one to buy, so I thought I would share my knowledge: 
 http://blog.zorinaq.com/?e=10 Very briefly:
 
 - The best 16-port one is probably the LSI SAS2116, 6Gbps, PCIe (gen2) x8. 
 Because it is quite pricey, it's probably better to buy 2 8-port controllers.
 - The best 8-port is the LSI SAS2008 (faster, more expensive) or SAS1068E 
 (150MB/s/port should be sufficient).
 - The best 2-port is the Marvell 88SE9128 or 88SE9125 or 88SE9120 because of 
 PCIe gen2 allowing a throughput of at least 300MB/s on the PCIe link with 
 Max_Payload_Size=128. And this one is particularly cheap ($35). AFAIK this is 
 the _only_ controller of the entire market allowing 2 drives to not 
 bottleneck 
 an x1 link.
 
 I hope this helps ZFS users here!
 

Excellent post! It'll definitely help many.

Thanks!

-- Pasi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Jason Barr
Hi,

something like this

Disk #  Slice 1 Slice 2
1   raid5   raid0
2   raid5   raid0
3   raid5   raid0

I want to have some fast scratch space (raid0) and some protected (raidz)

Greetings

J
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Roy Sigurd Karlsbakk
- Jason Barr forum4...@arcor.de skrev:

 Hi,
 
 something like this
 
 Disk #Slice 1 Slice 2
 1 raid5   raid0
 2 raid5   raid0
 3 raid5   raid0
 
 I want to have some fast scratch space (raid0) and some protected
 (raidz)

Should be doable - make sure the raid0 is on the first slice, though. Drives 
have more sectors on the outer cylinders than on the inner ones, and are 
therefore fastest at the beginning of the drive. The difference between the 
outer rim and the inner is roughly 2:1, although space-wise the curve is not 
linear, since most of the data is stored in the outer parts (due to more 
sectors per cylinder there).
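
A minimal sketch of that layout once the slices exist (all device and pool 
names below are hypothetical examples; slice 0 is assumed to be the first, 
outer slice):

# striped scratch pool on the first (outer, faster) slice of each disk
zpool create scratch c0t0d0s0 c0t1d0s0 c0t2d0s0
# raidz1 pool for the protected data on the second slice of each disk
zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1
# a failed disk is later replaced slice by slice, e.g.:
#   zpool replace scratch c0t1d0s0 c0t3d0s0
#   zpool replace tank    c0t1d0s1 c0t3d0s1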

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
snv_133 and zpool version 22. At least my rpool is version 22.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Richard Elling
On May 15, 2010, at 2:53 AM, Jan Hellevik wrote:

 I don't think that is the problem (but I am not sure). It seems like te 
 problem is that the ZIL is missing. It is there, but not recognized.
 
 I used fdisk to create a 4GB partition of a SSD, and then added it to the 
 pool with the command 'zpool add vault log /dev/dsk/c10d0p1'.

Ah, this is critical information! By default, ZFS import does not
look for fdisk partitions.  Hence, your log device is not found.
Since the pool is exported, there is no entry in /etc/zfs/zpool.cache
to give ZFS a hint to look at the fdisk partition.

First, you need to find the partition, because it might have moved
to a new controller. For this example, let's assume the new disk
pathname is c33d0.

1. verify that you can read the ZFS label on the partition
zdb -l /dev/dsk/c33d0p1
   you should see 4 labels

2. create a symlink ending with s0 to the partition.
ln -s /dev/dsk/c33d0p1 /dev/dsk/c33d0p1s0

3. see if ZFS can find the log device
zpool import

4. if that doesn't work, let us know and we can do the same trick
   using another name (than c33d0p1s0) or another way, using 
   the -d option to zpool import.

 When I try to import the pool is says the log is missing. When I try to add 
 the log to the pool it says there is no such pool (since it isn't imported 
 yet). Catch22? :-)

By default, Solaris only looks at fdisk partitions with a Solaris2 ID and only
one fdisk partition per disk.
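
A rough sketch of the -d variant mentioned in step 4, in case the s0 symlink 
alone isn't enough (the directory name is an arbitrary example; zpool import 
accepts multiple -d directories to search):

mkdir /tmp/vaultdev
ln -s /dev/dsk/c33d0p1 /tmp/vaultdev/c33d0p1
# search both the link directory and the normal device directory
zpool import -d /tmp/vaultdev -d /dev/dsk vault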
 -- richard

-- 
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-15 Thread Brian
Very helpful.  I just started to set up my system and have run into a problem 
where my SATA ports 7/8 aren't really SATA ports - they are behind an 
unsupported RAID controller - so I am in the market for a compatible controller.

Very helpful post.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Thanks! Not home right now, but I will try that as soon as I get home.

Message was edited by: janh
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Roy Sigurd Karlsbakk
- Richard Elling richard.ell...@gmail.com skrev:

 On May 15, 2010, at 2:53 AM, Jan Hellevik wrote:
 
  I don't think that is the problem (but I am not sure). It seems like
 te problem is that the ZIL is missing. It is there, but not
 recognized.
  
  I used fdisk to create a 4GB partition of a SSD, and then added it
 to the pool with the command 'zpool add vault log /dev/dsk/c10d0p1'.
 
 Ah, this is critical information! By default, ZFS import does not
 look for fdisk partitions.  Hence, your log device is not found.
 Since the pool is exported, there is no entry in /etc/zfs/zpool.cache
 to give ZFS a hint to look at the fdisk partition.

Will ZFS look for all these devices in case of failure?

r...@urd:~$ zpool status
  pool: dpool
 state: ONLINE
 scrub: scrub completed after 63h23m with 0 errors on Tue May  4 08:23:58 2010
config:

        NAME         STATE     READ WRITE CKSUM
        dpool        ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c7t2d0   ONLINE       0     0     0
            c7t3d0   ONLINE       0     0     0
            c7t4d0   ONLINE       0     0     0
            c7t5d0   ONLINE       0     0     0
            c7t6d0   ONLINE       0     0     0
            c7t7d0   ONLINE       0     0     0
            c8t0d0   ONLINE       0     0     0
          raidz2-1   ONLINE       0     0     0
            c8t1d0   ONLINE       0     0     0
            c8t2d0   ONLINE       0     0     0
            c8t3d0   ONLINE       0     0     0
            c8t4d0   ONLINE       0     0     0
            c8t5d0   ONLINE       0     0     0
            c8t6d0   ONLINE       0     0     0
            c8t7d0   ONLINE       0     0     0
          raidz2-2   ONLINE       0     0     0
            c9t0d0   ONLINE       0     0     0
            c9t1d0   ONLINE       0     0     0
            c9t2d0   ONLINE       0     0     0
            c9t3d0   ONLINE       0     0     0
            c9t4d0   ONLINE       0     0     0  52K repaired
            c9t5d0   ONLINE       0     0     0
            c9t6d0   ONLINE       0     0     0
        logs
          mirror-3   ONLINE       0     0     0
            c10d1s0  ONLINE       0     0     0
            c11d0s0  ONLINE       0     0     0
        cache
          c10d1s1    ONLINE       0     0     0
          c11d0s1    ONLINE       0     0     0
        spares
          c9t7d0     AVAIL


-- 
Vennlige hilsener

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS home server (Brandon High)

2010-05-15 Thread Annika

I'm also about to set up a small home server. This little box
http://www.fractal-design.com/?view=product&prod=39
is able to house six 3.5" HDDs and also has one 2.5" bay, e.g. for an SSD. 
Fine.


I need to know which SATA controller cards (both PCI and PCI-E) are 
supported in OpenSolaris, and I'd also be grateful for tips on which ones to 
use in a non-pro environment.


My problem: I can't find the HCL for OpenSolaris. Thanks in advance for 
a link.

Date: Fri, 14 May 2010 15:16:10 -0700
From: Brandon High bh...@freaks.com
To: ZFS discuss zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS home server
Message-ID:
aanlktinukj0ihv3i6lj2xrkvjia8ww4586vu_i8kj...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

I've been thinking about building a small NAS box for my father in law
to back his home systems up to. He mistakenly bought a Drobo, but his
Macs refuse to use it as a Time Machine target, even with the afp
protocol.

I came across a review of the ASUS TS Mini, which comes with an Atom
N280, 1GB RAM, 2 drive bays (one with a Seagate 7200.12 500gb drive),
and lots of external ports. Photos in some review show an RGB port
inside. Since it was cheap, I ordered one to play with.

It's turned out to be a great small NAS and case. It's 9.5" high,
3.75" wide, and 8" deep. Power is from an external brick. The top is
held on with thumb screws, which once removed let you pull out the
drive cage. The bottom cover is held on by some philips screws. This
also gives you access to the single DDR2 SO-DIMM slot. There are also
solder pads for a second memory slot and for a PCIe 1x slot. If you're
handy with a soldering iron, you could double your memory.

Taking the back cover off lets you get at the VGA. You need to use a
Torx-9 driver and remove the 8 or so screws, then loosen the
motherboard to take it out. Once out, you can see that the RGB port
can be trimmed out of the back plate with a razor or dremel.

The two internal drives are connected to the ICH7-M southbridge. It
looks like a sideways PCIe 1x slot on the motherboard, but it's the
sata and power connectors for the internal drives, so don't think you
can plug a different card in.

The two external eSATA ports are provided via a Marvell 88SE6121 PCIe
SATA controller, which supports PMP. There are also 6 USB ports on the
back. All of this is supported by OpenSolaris.

When booting with a monitor and keyboard attached, you can hit DEL to
get into the BIOS and change any settings. There's nothing that
prevents you from replacing the provided Windows Home Server.

I've currently got the system running NexentaStor Community, booting
off of a 4GB USB drive. Large writes (eg: DVD iso) go at about 20MB/s
over GigE, and reads are about 40MB/s.

It's not the fanciest or fastest system, but I think it'll work fine
as an iSCSI target for Time Machine. And my FIL can even use the Drobo
as external USB drives if he wants.

-B

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Jason Barr
Ok,

I got it working; however, I set up two partitions on each disk using fdisk 
inside of format.

What's the difference compared to slices? (I checked with gparted.)

Bye
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
It did not work. I did not find labels on p1, but I did find them on p0.

j...@opensolaris:~# zdb -l /dev/dsk/c10d0p1

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3

j...@opensolaris:~# zdb -l /dev/dsk/c10d0p0

LABEL 0

version: 22
state: 4
guid: 9172477941882675499

LABEL 1

version: 22
state: 4
guid: 9172477941882675499

LABEL 2

version: 22
state: 4
guid: 9172477941882675499

LABEL 3

version: 22
state: 4
guid: 9172477941882675499

j...@opensolaris:~# ln -s  /dev/dsk/c10d0p0 /dev/dsk/c10d0p0s0

j...@opensolaris:~# zpool import
  pool: vault
id: 8738898173956136656
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        vault        UNAVAIL  missing device
          raidz1-0   ONLINE
            c11d0    ONLINE
            c12d0    ONLINE
            c12d1    ONLINE
            c10d1    ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
j...@opensolaris:~#
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Roy Sigurd Karlsbakk
- Jason Barr forum4...@arcor.de skrev:

 Ok,
 
 I got it working: however I set up two partitions on each disk using
 fdisk inside of format
 
 what's the difference to slices (I checked with gparted)

Both are supported, but from another post in here recently, it seems fdisk 
partitions aren't automatically recognized when moved to another port, while 
slices are. I think you should change that to slices.
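
For reference, a short illustration of the naming difference being discussed 
(device names are made-up examples):

#   /dev/dsk/c0t0d0p1   - fdisk (MBR) partition 1 on the disk
#   /dev/dsk/c0t0d0s0   - slice 0 in the Solaris label, which is what
#                         'zpool import' scans by default
# so a pool built on slices is created with the sN names, e.g.:
zpool create scratch c0t0d0s0 c0t1d0s0 c0t2d0s0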

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS home server (Brandon High)

2010-05-15 Thread Roy Sigurd Karlsbakk
- Annika annika...@telia.com  skrev:

 I'm also about to set up a small home server. This little box
 http://www.fractal-design.com/?view=productprod=39
 is able housing six 3,5 hdd's and also has one 2,5 bay, eg for an
 ssd.
 Fine.

 I need to know which SATA controller cards (both PCI and PCI-E) are
 supported in OS, also I'd be grateful for tips on which ones to use in
 a
 non-pro environment.

See http://www.sun.com/bigadmin/hcl/data/os/ for supported hardware. There was 
also a post in here yesterday, or perhaps earlier today, about the choice of 
SAS/SATA controllers. Most will do in a home server environment, though. 
The AOC-SAT2-MV8 is a great controller, but it runs on PCI-X, which isn't very 
compatible with PCI Express.

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Haudy Kazemi
Can you recreate the problem with a second pool on a second set of 
drives, like I described in my earlier post?  Right now it seems like 
your problem is mostly due to the missing log device.  I'm wondering if 
that missing log device is what messed up the initial move to the other 
controller, or if the other controller did something to the disks when 
it saw them.



Jan Hellevik wrote:

I don't think that is the problem (but I am not sure). It seems like te problem 
is that the ZIL is missing. It is there, but not recognized.

I used fdisk to create a 4GB partition of a SSD, and then added it to the pool 
with the command 'zpool add vault log /dev/dsk/c10d0p1'.

When I try to import the pool is says the log is missing. When I try to add the 
log to the pool it says there is no such pool (since it isn't imported yet). 
Catch22? :-)
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-15 Thread John Balestrini
Howdy All,

I've a bit of a strange problem here. I have a filesystem with one snapshot 
that simply refuses to be destroyed. The snapshots just prior to it and just 
after it were destroyed without problem. While running the zfs destroy command 
on this particular snapshot, the server becomes more-or-less hung. It's 
pingable but will not open a new shell (local or via ssh); however, existing 
shells will occasionally provide some interaction, but at an obscenely slow 
rate, if at all. Also, this snapshot is unusually large, so I've allowed the 
server to chew on it overnight without success.

It is set up as a 3-disk raidz configuration. Here's an abbreviated listing 
(the pool1/st...@20100421.0800 snapshot is the one that is misbehaving):

ba...@~$ zfs list -t filesystem,snapshot -r pool1/Staff
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
pool1/Staff                                            741G   760G  44.6K  /export/Staff
pool1/st...@20100421.0800                              366G      -   366G  -
pool1/st...@zfs-auto-snap:daily-2010-05-02-21:57          0      -  44.6K  -
pool1/st...@zfs-auto-snap:monthly-2010-05-02-21:58        0      -  44.6K  -
pool1/st...@zfs-auto-snap:daily-2010-05-03-00:00          0      -  44.6K  -
.
. (Lines Removed)
.
pool1/st...@archive_1.20100514.2218                       0      -  44.6K  -
pool1/st...@zfs-auto-snap:daily-2010-05-15-00:00          0      -  44.6K  -


Has anyone seen this type of problem? Any ideas?

Thanks,

-- John 











___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3 Data Disks: partitioning for 1 Raid0 and 1 Raidz1

2010-05-15 Thread Eric D. Mudama

On Sat, May 15 at  2:43, Jason Barr wrote:

Hello,

I want to slice these 3 disks into 2 partitions each and configure 1 Raid0 and 
1 Raidz1 on these 3.

What exactly has to be done? Using format and fdisk I know but not exactly how 
for this setup.

In case of disk failure: how do I replace the faulty one?


I think this is a bad idea.  Spreading multiple pools across the
partitions of the same set of drives will mean accesses to both pools
will have lots of extra seeks going from one portion of each drive to
the other.

If you are trying to get redundancy on a system without many disks,
I'd just mirror the root pool and put your data in there as well.  At
least that way, you won't be seeking across partitions.

Another option is to have a single boot drive, and a mirror of drives
for your data pool.  That's effectively how we do it at work, since
our SLA for system recovery allows a reinstall of the OS, and the
amount of custom configuration is minimal in our rpool.
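
A small sketch of the mirrored-rpool route, plus the basic replacement command 
the original question asked about (device names are hypothetical; a disk 
attached to a root pool also needs boot blocks installed, e.g. with installgrub, 
which is omitted here):

# attach a second disk to the existing root pool to form a mirror
zpool attach rpool c0t0d0s0 c0t1d0s0
# replacing a failed disk later is likewise a single command, e.g.:
zpool replace rpool c0t1d0s0 c0t2d0s0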


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Jan Hellevik
Yes, I can try to do that. I do not have any more of this brand of disk, but I 
guess that does not matter. It will have to wait until tomorrow (I have an 
appointment in a few minutes, and it is getting late here in Norway), but I 
will try first thing tomorrow. I guess a pool on a single drive will do the 
trick? I can create the log as a partition on yet another drive just as I did 
with the SSD (do not want to mess with it just yet). Thanks for helping!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Haudy Kazemi

Jan Hellevik wrote:

Yes, I can try to do that. I do not have any more of this brand of disk, but I 
guess that does not matter. It will have to wait until tomorrow (I have an 
appointment in a few minutes, and it is getting late here in Norway), but I 
will try first thing tomorrow. I guess a pool on a single drive will do the 
trick? I can create the log as a partition on yet another drive just as I did 
with the SSD (do not want to mess with it just yet). Thanks for helping!
  


In this case the specific brand and model of drive probably does not matter.

The most accurate test will be to setup a test pool as similar as 
possible to the damaged pool, i.e. a 4 disk RAIDZ1 with a log on a 
partition of a 5th disk.


A single drive pool might do the trick for testing, but it has no 
redundancy.  The smallest pool with redundancy is a mirror, thus the 
suggestion to use a mirror.  If you have enough spare/small/old drives 
that are compatible with the second controller, use them to model your 
damaged pool.  For this test it doesn't really matter if these are 4 GB 
or 40 GB or 400 GB drives.



Try the following things in order.  Keep a copy of the terminal commands 
you use and the command responses you get.


1.) Wipe (e.g. dban/dd/zero wipe) the disks that will make up the test pool, 
and create the test pool (a sketch of these commands follows after step 3).  
Copy some test data to the pool, like an OpenSolaris ISO file.
Try migrating the disks to the second controller the same way you did 
with your damaged pool.  Use the exact same steps in the same order.  
See your notes/earlier posts while doing this to make sure you remember 
them exactly.
If that works (a forced import will likely be needed), then you might 
have had a one-time error, or a hard-to-reproduce error, or maybe did a 
minor step slightly differently from how you remembered doing it with 
the damaged pool.

If that fails, then you may have a repeatable test case.

2.) Wipe (e.g. dban/dd/zero wipe) disks that made up the test pool, and 
recreate the test pool. Copy some test data to the pool, like an 
OpenSolaris ISO file.
Try migrating the disks the recommended way, using export, powering 
everything off, and then import.
If that works (without needing a forced import), then skipping the export 
was likely a trigger.
If that fails, it seems like the second controller is doing something to 
the disks.  Look at the controller BIOS settings for something relevant 
and see if there are any firmware updates available.


3.) If you have a third (different model) controller (or a another 
computer running the same Solaris version with a different controller), 
repeat step 2 with it.  If step 2 failed but this works, that's more 
evidence the second controller is up to something.
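
A minimal sketch of the step 1 test pool, mirroring the damaged pool's shape of 
a 4-disk raidz1 with a log on a partition of a 5th disk (all device names are 
hypothetical; adjust to whatever spare disks are actually used):

# 4-disk raidz1 test pool with a log on an fdisk partition of a 5th disk
zpool create testpool raidz c20d0 c21d0 c22d0 c23d0 log c24d0p1
cp /path/to/osol.iso /testpool/       # some test data
# step 2 additionally exports before the move; step 1 deliberately skips this:
#   zpool export testpool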


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-15 Thread Roy Sigurd Karlsbakk

 Has anyone seen this type of problem? Any ideas?

Yeah, I've seen the same. I tried to remove a dataset, and it hung on one 
snapshot. This was a test server, so I ended up recreating the pool instead of 
trying to report a bug about it (hours of time saved, since the debugging 
features for ZFS all involve serial consoles, which did not work for me, and 
debugging takes a lot of time).

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-15 Thread Ian Collins

On 05/16/10 06:52 AM, John Balestrini wrote:

Howdy All,

I've a bit of a strange problem here. I have a filesystem with one snapshot 
that simply refuses to be destroyed. The snapshots just prior to it and just 
after it were destroyed without problem. While running the zfs destroy command 
on this particular snapshot, the server becomes more-or-less hung. It's 
pingable but will not open a new shell (local or via ssh) however existing 
shells will occasionally provide some interaction, but at an obscenely slow 
rate, if at all. Also, since this snapshot is unusually large in size so I've 
allowed the server to chew on it over night without success.

It set up as a 3-disk raidz configuration. Here's a abbreviated listing of it 
(the pool1/st...@20100421.0800 snapshot is the one that is misbehaving):

   

Is dedup enabled on that pool?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] dedup status

2010-05-15 Thread Roy Sigurd Karlsbakk
Hi all

I've been doing a lot of testing with dedup and concluded it's not really ready 
for production. If something fails, it can render the pool unusable for hours 
or maybe days, perhaps due to single-threaded stuff in zfs. There is also very 
little data available in the docs (beyond what I've learned from this list) 
on how much memory one should have for deduping an x TiB dataset.

Does anyone know what the status is for dedup now? In 134 it doesn't work very 
well, but is it better in ON140 etc?

Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er 
et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
relevante synonymer på norsk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-15 Thread Thomas Burgess
Well, I just wanted to let everyone know that preliminary results are good.
The livecd booted, and all the important things seem to be recognized. It sees
all 16 GB of RAM I installed and all 8 cores of my Opteron 6128.

The only real shocker is how loud the Norco RPC-4220 fans are (I have
another machine with a Norco 4020 case, so I assumed the fans would be
similar... this was a BAD assumption). This thing sounds like a hair dryer.

Anyways, I'm running the install now, so we'll see how that goes. It did take
about 10 minutes to find a disk during the installer, but if I remember
right, this happened on other machines as well.


On Thu, May 13, 2010 at 9:56 AM, Thomas Burgess wonsl...@gmail.com wrote:

 I ordered it.  It should be here monday or tuesday.  When i get everything
 built and installed, i'll report back.  I'm very excited.  I am not
 expecting problems now that i've talked to supermicro about it.  Solaris 10
 runs for them so i would imagine opensolaris should be fine too.

 On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar 
 knatte_fnatte_tja...@yahoo.com wrote:

 Great! Please report here so we can read about your impressions.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-15 Thread Dennis Clarke
- Original Message -
From: Thomas Burgess wonsl...@gmail.com
Date: Saturday, May 15, 2010 8:09 pm
Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
To: Orvar Korvar knatte_fnatte_tja...@yahoo.com
Cc: zfs-discuss@opensolaris.org


 Well i just wanted to let everyone know that preliminary results are good.
  The livecd booted, all important things seem to be recognized. It 
 sees all
 16 gb of ram i installed and all 8 cores of my opteron 6128
 
 The only real shocker is how loud the norco RPC-4220 fans are (i have
 another machine with a norco 4020 case so i assumed the fans would be
 similar.this was a BAD assumption)  This thing sounds like a hair 
 dryer
 
 Anyways, I'm running the install now so we'll see how that goes. It 
 did take
 about 10 minutes to find a disk durring the installer, but if i remember
 right, this happened on other machines as well.
 

Once you have the install done, could you post (somewhere) what you see during 
a single-user-mode boot with options -srv?

I would like to see all the gory details.

Also, could you run cpustat -h?

At the bottom, according to usr/src/uts/intel/pcbe/opteron_pcbe.c, you should 
see:

See BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h Processors 
(AMD publication 31116)

The following registers should be listed:

#define AMD_FAMILY_10h_generic_events                                  \
        { PAPI_tlb_dm,  DC_dtlb_L1_miss_L2_miss,        0x7 },         \
        { PAPI_tlb_im,  IC_itlb_L1_miss_L2_miss,        0x3 },         \
        { PAPI_l3_dcr,  L3_read_req,                    0xf1 },        \
        { PAPI_l3_icr,  L3_read_req,                    0xf2 },        \
        { PAPI_l3_tcr,  L3_read_req,                    0xf7 },        \
        { PAPI_l3_stm,  L3_miss,                        0xf4 },        \
        { PAPI_l3_ldm,  L3_miss,                        0xf3 },        \
        { PAPI_l3_tcm,  L3_miss,                        0xf7 }


You should NOT see anything like this : 

r...@aequitas:/root# uname -a
SunOS aequitas 5.11 snv_139 i86pc i386 i86pc Solaris
r...@aequitas:/root# cpustat -h
cpustat: cannot access performance counters - Operation not applicable


... as well as psrinfo -pv, please?


When I get my HP Proliant with the 6174 procs I'll be sure to post whatever I 
see. 

Dennis 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-15 Thread John Balestrini
Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that 
the large ratio was tied to that particular snapshot.

basie@/root# zpool list pool1
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -

So, is it possible to turn dedup off? More importantly, what happens when I try?

Thanks

-- John




On May 15, 2010, at 3:40 PM, Ian Collins wrote:

 On 05/16/10 06:52 AM, John Balestrini wrote:
 Howdy All,
 
 I've a bit of a strange problem here. I have a filesystem with one snapshot 
 that simply refuses to be destroyed. The snapshots just prior to it and just 
 after it were destroyed without problem. While running the zfs destroy 
 command on this particular snapshot, the server becomes more-or-less hung. 
 It's pingable but will not open a new shell (local or via ssh) however 
 existing shells will occasionally provide some interaction, but at an 
 obscenely slow rate, if at all. Also, since this snapshot is unusually large 
 in size so I've allowed the server to chew on it over night without success.
 
 It set up as a 3-disk raidz configuration. Here's a abbreviated listing of 
 it (the pool1/st...@20100421.0800 snapshot is the one that is misbehaving):
 
   
 Is dedup enabled on that pool?
 
 -- 
 Ian.
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to Destroy One Particular Snapshot

2010-05-15 Thread Ian Collins

On 05/16/10 12:40 PM, John Balestrini wrote:

Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that 
the large ratio was tied to that particular snapshot.

basie@/root# zpool list pool1
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1  2.72T  1.55T  1.17T    57%  1.50x  ONLINE  -

So, it is possible to turn dedup off? More importantly, what happens when I try?

   
How is your pool configured?  If you don't have plenty of RAM or a cache 
device, it may take an awfully long time to delete a large snapshot.

Run 'zpool iostat <your pool> 30' and see what you get.  If you see a 
lot of read activity, expect a long wait!


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS home server (Brandon High)

2010-05-15 Thread Thomas Burgess
The Intel SASUC8I is a pretty good deal: around 150 dollars for 8 SAS/SATA
channels. This card is identical to the LSI SAS3081E-R for a lot less
money. It doesn't come with cables, but this leaves you free to buy the
type you need (in my case, I needed SFF-8087 to SFF-8087 cables; some people
will need SFF-8087 to 4x SATA breakout cables). Either way, cables run 12-20
dollars each (and each card needs 2), so you can tack that on to the
price. These cards also work well with expanders.

They are based on the LSI 1068E chip.


On Sat, May 15, 2010 at 1:41 PM, Roy Sigurd Karlsbakk r...@karlsbakk.netwrote:

 - Annika annika...@telia.com  skrev:

  I'm also about to set up a small home server. This little box
  http://www.fractal-design.com/?view=productprod=39
  is able housing six 3,5 hdd's and also has one 2,5 bay, eg for an
  ssd.
  Fine.
 
  I need to know which SATA controller cards (both PCI and PCI-E) are
  supported in OS, also I'd be grateful for tips on which ones to use in
  a
  non-pro environment.

 See http://www.sun.com/bigadmin/hcl/data/os/ for supported hardware. There
 was also a post in her yesterday or perhaps earlier today about the choice
 of SAS/SATA controllers. Most will do in a home server environment, though.
 AOC-SAT2-MV8 are great controllers, but run on PCI-X, which isn't very
 compatible with PCI Express

 Best regards

 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det
 er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av
 idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate
 og relevante synonymer på norsk.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup status

2010-05-15 Thread Erik Trimble

Roy Sigurd Karlsbakk wrote:

Hi all

I've been doing a lot of testing with dedup and concluded it's not really ready 
for production. If something fails, it can render the pool unusable for hours 
or maybe days, perhaps due to single-threaded stuff in zfs. There is also very 
little data available in the docs (beyond what I've learned from this list) 
on how much memory one should have for deduping an x TiB dataset.
  
I think it was Richard a month or so ago who had a good post about 
how much space the dedup table entry would be (it was in some 
discussion where I asked about it).  I can't remember what it was (a 
hundred bytes?) per DDT entry, but one has to remember that each entry 
is for a slab, which can vary in size (512 bytes to 128k).  So, there's 
no good generic formula for X bytes in RAM per Y TB of space.  You can 
compute a rough guess if you know what kind of data and the general 
usage pattern is for the pool (basically, you need to take a stab at 
how big you think the average slab size is).  Also, remember that if 
you have a /very/ good dedup ratio, then you will have a smaller DDT for 
a given X-size pool, vs a pool with poor dedup ratios.

Unfortunately, there's no magic bullet, though if you can dig up 
Richard's post, you should be able to take a guess, and not be off by more 
than 2x or so.
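
To make that concrete, here is a back-of-the-envelope sketch (the ~320 bytes 
per in-core DDT entry and the 64 KiB average block size are illustrative 
assumptions, not figures from this thread; real numbers depend on your data):

#   1 TiB of unique data / 64 KiB average block   = ~16.8 million DDT entries
#   16.8 million entries * ~320 bytes per entry   = roughly 5 GB of DDT per TiB
# Halve the average block size and the DDT roughly doubles; a high dedup
# ratio (fewer unique blocks) shrinks it.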

Also, remember you only need to hold the DDT in L2ARC, not in actual 
RAM, so buy that SSD, young man!


As far as failures go, well, I can't speak to that specifically. Though, do 
realize that not having sufficient L2ARC/RAM to hold the DDT does mean 
that you spend an awful lot of time reading pool metadata, which 
really hurts performance (not to mention it can cripple deletes of any 
sort...)






Does anyone know how the status is for dedup now? In 134 it doesn't work very 
well, but is it better in ON140 etc?

  
Honestly, I don't see it being much different over the last couple of 
builds.  The limitations are still there, but given those, I've 
found it works well.




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss