[zfs-discuss] moving rpool in laptop to spare SSD drive.

2010-09-19 Thread Steve Arkley
Hello folks,

I ordered a bunch of 128GB SSDs the other day, put two in a PC, another in a 
windoz laptop, and I thought I'd put one in my OpenSolaris laptop; should be 
straightforward, or so I thought.

The problem I seem to be running into is that the partition the rpool is on is 
130GB, while the SSD, once sliced up, is only about 120GB.

I pulled the main disk from the laptop and put it in a caddy, put the new SSD in 
the drive bay, and booted from CD-ROM.

I imported the rpool and created an altpool on the SSD drive.

zpool list shows both pools:
altpool size 119G avail 119G
rpool size 130G used 70G

I created a snapshot of the rpool and tried to send it to the other disk, but it 
fails with 'file too large':

zfs send -R rp...@backup  altpool
warning: cannot send 'rpool/bu...@backup': file too large.

Is there any way to get the data over onto the other drive at all?

Thanks Steve.


Re: [zfs-discuss] resilver that never finishes

2010-09-19 Thread Markus Kovero
Hi, 

 The drives and the chassis are fine; what I am questioning is how can it 
 be resilvering more data to a device than the capacity of the device?

If the data on the pool has changed during the resilver, the resilver counter 
does not account for the new work, so it will show 100% (still resilvering) for 
however long it takes to catch up.

Yours
Markus Kovero



Re: [zfs-discuss] resilver that never finishes

2010-09-19 Thread Tomas Ögren
On 19 September, 2010 - Markus Kovero sent me these 0,5K bytes:

 Hi, 
 
  The drives and the chassis are fine; what I am questioning is how can it 
  be resilvering more data to a device than the capacity of the device?
 
 If the data on the pool has changed during the resilver, the resilver
 counter does not account for the new work, so it will show 100% (still
 resilvering) for however long it takes to catch up.

I believe this was fixed recently, by displaying how many blocks it has
checked vs how many to check...

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] resilver that never finishes

2010-09-19 Thread Tom Bird

On 18/09/10 15:25, George Wilson wrote:

Tom Bird wrote:



In my case, other than an hourly snapshot, the data is not
significantly changing.

It'd be nice to see a response other than "you're doing it wrong"; rebuilding
5x the data on a drive relative to its capacity is clearly erratic behaviour,
and I am curious as to what is actually happening.



It sounds like you're hitting '6891824 7410 NAS head continually
resilvering following HDD replacement'. If you stop taking and
destroying snapshots you should see the resilver finish.


George, I think you've won the prize.  I suspended the snapshots last 
night, and this morning one pool had completed, with one left to go.
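For anyone hitting the same thing: assuming the periodic snapshots come from the 
stock OpenSolaris zfs-auto-snapshot SMF services (rather than a custom cron job), 
suspending and later resuming them is roughly:

  # pause the Time Slider auto-snapshot instances while the resilver runs
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
  # ...then re-enable them once 'zpool status' shows the resilver has finished
  svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly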


Thanks,

Tom


Re: [zfs-discuss] moving rpool in laptop to spare SSD drive.

2010-09-19 Thread Mark Farmer
Hi Steve,

A couple of options.

Create a new boot environment on the SSD; this will copy the data over.

Or 

zfs send -R rp...@backup | zfs recv altpool

I'd use the alt boot environment, rather than the send and receive.
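If you take the boot environment route, a rough sketch (the BE name newBE, the 
pool name altpool and the disk slice c1t0d0s0 below are only placeholders for 
whatever your system actually uses):

  beadm create -p altpool newBE       # copy the current BE onto the SSD pool
  beadm activate newBE                # make it the default boot environment
  # the SSD also needs boot blocks before it will boot on its own, e.g.
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0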

Cheers,
-Mark.



On 19/09/2010, at 5:37 PM, Steve Arkley wrote:

 The problem I seem to be running into is that the partition the rpool is on 
 is 130GB, while the SSD, once sliced up, is only about 120GB.
 
 [...]
 
 zfs send -R rp...@backup  altpool
 warning: cannot send 'rpool/bu...@backup': file too large.
 
 Is there any way to get the data over onto the other drive at all?




Mark Farmer | Sales Consultant
Phone: +61730317106 | Mobile: +61414999143
Oracle Systems
ORACLE Australia | 300 Ann St | Brisbane


[zfs-discuss] Howto reclaim space under legacy mountpoint?

2010-09-19 Thread Gary Gendel
I moved my home directories to a new disk and then mounted the disk using a 
legacy mount point over /export/home.  Here is the output of the zfs list:

NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               55.8G  11.1G    83K  /rpool
rpool/ROOT          21.1G  11.1G    19K  legacy
rpool/ROOT/snv-134  21.1G  11.1G  14.3G  /
rpool/dump          1.97G  11.1G  1.97G  -
rpool/export        30.8G  11.1G    23K  /export
rpool/export/home   30.8G  11.1G  29.3G  legacy
rpool/swap          1.97G  12.9G   144M  -
users               32.8G   881G  31.1G  /export/home

The question is how to remove the files from the original rpool/export/home (the 
copy that is no longer mounted)?  I am a bit nervous to do a:

zfs destroy rpool/export/home

Is that the correct and safe methodology?

Thanks,
Gary


Re: [zfs-discuss] Zpool import Problem ZFS-8000-EY

2010-09-19 Thread rene
OK, the pool has died and I have created a new one :-)

regards ré


Re: [zfs-discuss] moving rpool in laptop to spare SSD drive.

2010-09-19 Thread Steve Arkley
Doh!

Why didn't I think of that?  Cheers Mark; sometimes the most obvious options get 
completely passed by.  Alt boot environment it is.

Thanks Steve.


Re: [zfs-discuss] Howto reclaim space under legacy mountpoint?

2010-09-19 Thread Edho P Arief
On Sun, Sep 19, 2010 at 7:42 PM, Gary Gendel g...@genashor.com wrote:
 I moved my home directories to a new disk and then mounted the disk using a 
 legacy mount point over /export/home.  Here is the output of the zfs list:

 [...]
 
 The question is how to remove the files from the original rpool/export/home 
 (the copy that is no longer mounted)?  I am a bit nervous to do a:


Set its mountpoint to somewhere temporary, look at what is there, and decide for yourself.
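
Something along these lines, for example (the temporary path /mnt/oldhome is just 
a placeholder, and add -r to the destroy if the dataset still has snapshots):

  zfs set mountpoint=/mnt/oldhome rpool/export/home
  ls /mnt/oldhome                     # make sure this really is the stale copy
  zfs destroy rpool/export/home       # reclaims the space once you are sure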

-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org


[zfs-discuss] Please warn a home user against OpenSolaris under VirtualBox under WinXP ; )

2010-09-19 Thread Nils
Pardon in advance my n00b ignorance. (Yes, I have googled a *lot* before 
asking.)

I am considering VirtualBoxing away one physical machine at home, running WinXP 
as the host (yes, as atrocious as it may seem; explanation below [1]) and an 
OpenSolaris guest as the file server, with OpenSolaris (why? [2]) having raw access 
to all my SATA drives (the OSes live on the IDE channel). This is for home use, as a 
media center: roughly 4 TB of files basically written once and for all, with only 
rare later additions or overwrites.

If this is a bad idea, I'd like to be warned before I spend another two weeks 
trying to learn, reinstall (initial bare-metal testing OK), swear & curse, 
populate, and swear & curse.

* Raw access. 
- ZFS is best off accessing physical devices, right? That means raw access, or 
is even that insufficient?
- I am a bit scared of Windows having access to the critical hard drives ;)  
And raw access is considered something for experts, which I ain't. And there are 
concerns like drive cache flushing (see the comments in 
http://blogs.sun.com/mock/entry/stupid_virtualbox_zfs_tricks ) -- solvable, yes, 
but I am not confident I know all the warnings. (A sketch of the intended 
raw-disk setup is below.)
- FTR, I will in any case have a few-months-old backup at work, so I am not too 
worried about sudden data loss due to my own screwing up.
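
Concretely, the plan would be a raw VMDK per physical disk, roughly as below (the 
.vmdk name and PhysicalDrive number are placeholders, to be checked in Disk 
Management first):

  VBoxManage internalcommands createrawvmdk -filename sata-disk1.vmdk -rawdisk \\.\PhysicalDrive1
  (then attach sata-disk1.vmdk to the OpenSolaris guest's storage controller)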

* Memory. 
- As said, there will be hardly any write operations, and OpenSolaris seems to run 
perfectly well on this box (bare metal) with even less than 1GB under no load, but 
I haven't even tried to populate it with data -- how much memory does it need just 
to idle on a 5TB RAID-Z?
- This mobo clocks down the RAM if more than 2 slots are used.


[1] Why XP as _host_? 
- I will sometimes run XP on bare metal for performance -- I can then simply 
pause the VM.
- Running my pre-installed XP under VirtualBox looks like a nightmare, judging 
from some googling.
- Stability? If XP crashes, I don't need to access my media files until XP is 
up and running again. It would of course be quicker to restart in a VM, but hey, 
server uptime is really not an issue.
(- Better network card support.)
- I might keep an OpenSolaris installation running on bare metal for those 
occasional heavier tasks, and stick to my laptop for those couple of days.

[2] Why OpenSolaris at all?
- Would like ZFS for RAID management (5-disk RAID-Z) and snapshotting 
(basically *every* time a file is overwritten!).


Re: [zfs-discuss] Please warn a home user against OpenSolaris under VirtualBox under WinXP ; )

2010-09-19 Thread R.G. Keen
I have another question to add to the two you already asked and answered.

Why not two separate machines, one for XP, one for ZFS/RAID?  At today's 
network speeds, hooking a cable between the two would give you all the data-access 
speed to the files in the RAID that you want. A suitable ZFS machine could sit 
in another room if you want the quiet for home theater.

The idea of having one set of hardware run everything is clever, and 
virtualizing everything is neat, efficient - and complicated.

Now that Oracle has gutted OpenSolaris, I'm forced to once again articulate 
why I got into OpenSolaris about a year ago: I wanted ZFS, and was willing to 
go learn OpenSolaris to get it. ZFS was a powerful motivator. But I can get 
what I need there with just a ZFS back-end machine, and last-generation 
hardware is very, very cheap. For me, the un-complication of using two hardware 
setups is well worth it.


Re: [zfs-discuss] ZFS checksum errors (ZFS-8000-8A)

2010-09-19 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
 
 It is very unusual to obtain the same number of errors (probably same
 errors) from two devices in a pair.  This should indicate a common
 symptom such as a memory error (does your system have ECC?),
 controller glitch, or a shared power supply issue.

Bob's right.  I didn't notice that both sides of the mirror have precisely
56 checksum errors.  Ignore what I said about adding a 3rd disk to the
mirror.  It won't help.  The 3rd mirror would have only been useful if the
block corruption on these 2 disks weren't the same blocks.

I think you have to acknowledge the fact that you have corrupt data.  And
you should run some memory diagnostics on your system to see if you can
detect some failing memory.  The cause is not necessarily memory, as Bob
pointed out, but a typical way to produce the result you're seeing is this:
ZFS calculates a checksum of a block it's about to write to disk, and of course
that checksum is held in RAM. Unfortunately, if it's held in corrupt RAM, then
when it's written to disk the checksum will mismatch. And the faulty checksum
gets written to both sides of the mirror. It is discovered later during your
scrub. There is no un-corrupt copy of the data that ZFS thought it wrote.

At least it's detected by ZFS.  Without checksumming, that error would pass
undetected.
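
For completeness, once the corruption is acknowledged the usual follow-up is
something like this (the pool name tank is just a placeholder):

  zpool status -v tank     # list the files with permanent (unrecoverable) errors
  # restore those files from backup, then clear the counters and re-verify:
  zpool clear tank
  zpool scrub tank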



Re: [zfs-discuss] space_map again nuked!!

2010-09-19 Thread Victor Latushkin
On Sep 18, 2010, at 11:37 AM, Stephan Ferraro wrote:

 I'm really angry at ZFS:

Emotions rarely help to get to the root cause...

 My server no longer boots because the ZFS spacemap is again corrupt.
 I just replaced the whole spacemap by recreating a new zpool from scratch and 
 copying back the data with zfs send | zfs receive.
 Did it copy a corrupt spacemap?!

Definitely not. It suggests that you may have an issue with hardware, for 
example with memory, or CPU and/or cache, or some other component.

regards
Victor


Re: [zfs-discuss] How to fsck the spacemap?

2010-09-19 Thread Victor Latushkin

On Sep 19, 2010, at 9:06 PM, Stephan Ferraro wrote:

 Am 19.09.2010 um 18:59 schrieb Victor Latushkin:
 
 On Sep 19, 2010, at 12:08 AM, Stephan Ferraro wrote:
 
 Is there a way to fsck the spacemap?
 Does scrub help with this?
 
 No, because the issues that you see are internal inconsistencies of unclear 
 nature.
 
 Though, as the actual issue varies from one instance to another, this is likely 
 some random corruption.
 
 
 The error repeated in an endless loop because the server rebooted in an endless 
 loop, and each time ZFS came up it showed this ASSERTION error for the corrupt 
 spacemap, every time at the same memory location.

Once it is there you'll trip various assertions each time the affected space map 
is loaded. By random corruption I mean that it is different each time you 
encounter it again after fixing it.

victor


Re: [zfs-discuss] ZFS checksum errors (ZFS-8000-8A)

2010-09-19 Thread Heinrich
I have registered ECC memory in the system. I will run some memory diagnostics 
as well, but mentioning the power supply got me thinking: around the time of the 
errors we had a storm and the lights dimmed in my house quite a few times. It was 
not enough of a drop to shut the system down, but perhaps it had something to do 
with it. Hopefully it is as simple as that. A UPS is now on my list.

I took Bob's advice, added more disks, and created another pool, since I do not 
trust the old pool.
I used dd with noerror and sync to a new block volume and that did the trick; 
thanks Bob, and thanks Edward for the explanation.

I was a bit unsure about using dd on the zvol directly, so I added another LUN (on 
the new pool) to the system's view and used Clonezilla; I booted it to the command 
prompt and used dd from there to duplicate the device.
Any thoughts on directly accessing the zvol via dd? I assume it is the same as any 
other device and should not be a problem.
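I.e. something along the lines of the following (pool and volume names being 
placeholders; on Solaris the raw device nodes live under 
/dev/zvol/rdsk/<pool>/<volume>):

  dd if=/dev/zvol/rdsk/oldpool/vol1 of=/dev/zvol/rdsk/newpool/vol1 bs=1048576 conv=noerror,sync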

Another thing I noticed is the high percentage of I/O wait on the disks of the 
problematic pool. I am not sure if it was ever this high before. My new pool is on 
a different controller and it is a different RAID type, so I cannot compare. 
This time I selected raidz2.


Thanks for the replies, really appreciate it.