Re: [zfs-discuss] Newbie question : snapshots, replication and recovering failure of Site B

2010-10-27 Thread Tuomas Leikola
On Tue, Oct 26, 2010 at 5:21 PM, Matthieu Fecteau matthieufect...@gmail.com wrote: My question: in the event that there's no more common snapshot between Site A and Site B, how can we replicate again? (example: Site B has a power failure and then Site A cleans up its snapshots before Site B
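Once the last common snapshot is gone, an incremental send is no longer possible and a full send is the usual recovery path. A sketch of both cases (pool, dataset, and host names are hypothetical):

```shell
# Incremental replication works only while both sites still share a snapshot:
zfs send -i tank/data@common tank/data@latest | ssh siteB zfs receive tank/data

# If no common snapshot survives, fall back to a full send; -F rolls the
# receiving dataset back so the full stream applies cleanly:
zfs send tank/data@latest | ssh siteB zfs receive -F tank/data
```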

[zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-27 Thread Jan Hellevik
Ok, so I did it again... I moved my disks around without doing export first. I promise - after this I will always export before messing with the disks. :-) Anyway - the problem. I decided to rearrange the disks due to cable lengths and case layout. I disconnected the disks and moved them around.
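For reference, the clean sequence the poster is promising to follow next time looks like this (pool name hypothetical):

```shell
zpool export tank      # quiesce and release the pool before touching cables
# ...physically rearrange the disks...
zpool import           # with no argument, lists pools found on attached devices
zpool import tank      # import by name; add -f if the pool wasn't exported cleanly
```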

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-27 Thread bhanu prakash
Hi Mike, Thanks for the information... Actually the requirement is like this. Please let me know whether it matches the below requirement or not. *Question*: The SAN team will assign the new LUNs on EMC DMX4 (currently IBM Hitachi is there). We need to move the 17 containers which are

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-27 Thread Mike Gerdts
On Wed, Oct 27, 2010 at 9:27 AM, bhanu prakash bhanu.sys...@gmail.com wrote: Hi Mike, Thanks for the information... Actually the requirement is like this. Please let me know whether it matches the below requirement or not. Question: The SAN team will assign the new LUNs on EMC DMX4
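The usual way to move datasets (including zone roots) between LUNs is a recursive snapshot plus send/receive into a pool built on the new LUNs. A sketch with hypothetical pool and device names:

```shell
zpool create newtank c3t0d0          # pool on the new EMC DMX4 LUN
zfs snapshot -r tank/zones@migrate   # recursive snapshot covering all 17 zone roots
zfs send -R tank/zones@migrate | zfs receive -d newtank   # replicate the whole tree
# after verifying, halt the zones, update their zonecfg paths, and boot from newtank
```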

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-27 Thread Roy Sigurd Karlsbakk
- Original Message - Ok, so I did it again... I moved my disks around without doing export first. I promise - after this I will always export before messing with the disks. :-) Anyway - the problem. I decided to rearrange the disks due to cable lengths and case layout. I

[zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
It seems my hardware is going bad, and I can't keep the OS running for more than a few minutes until the machine shuts down. It will run 15 or 20 minutes and then shut down. I haven't found the exact reason for it, or really anything in the logs that seems like a reason. It may be because I don't

Re: [zfs-discuss] Clearing space nearly full zpool

2010-10-27 Thread Brandon High
On Mon, Oct 25, 2010 at 2:46 AM, Cuyler Dingwell cuy...@gmail.com wrote: I have a zpool that once it hit 96% full the performance degraded horribly.   So, in order to get things better I'm trying to clear out some space.  The problem I have is after I've deleted a directory it no longer shows
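A common reason deleted data does not free space is that a snapshot still references it. A quick way to check (pool and dataset names hypothetical):

```shell
zfs list -t snapshot -o name,used -s used   # snapshots, smallest to largest
zfs list -o space tank/fs                   # splits USED into usedbysnapshots, usedbydataset, ...
zfs destroy tank/fs@old                     # destroying the snapshot releases its blocks
```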

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Toby Thain
On 27/10/10 3:14 PM, Harry Putnam wrote: It seems my hardware is going bad, and I can't keep the OS running for more than a few minutes until the machine shuts down. It will run 15 or 20 minutes and then shut down. I haven't found the exact reason for it. One thing to try is a thorough

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-27 Thread David Magda
On Wed, October 27, 2010 15:07, Roy Sigurd Karlsbakk wrote: - Original Message - Ok, so I did it again... I moved my disks around without doing export first. I promise - after this I will always export before messing with the disks. :-) Anyway - the problem. I decided to rearrange

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Toby Thain t...@telegraphics.com.au writes: On 27/10/10 3:14 PM, Harry Putnam wrote: It seems my hardware is going bad, and I can't keep the OS running for more than a few minutes until the machine shuts down. It will run 15 or 20 minutes and then shut down. I haven't found the exact

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Krunal Desai
I believe he meant a memory stress test, i.e. booting with a memtest86+ CD and seeing if it passed. Even if the memory is OK, the stress from that test may expose defects in the power supply or other components. Your CPU temperature is 56C, which is not out-of-line for most modern CPUs (you

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Toby Thain
On 27/10/10 4:21 PM, Krunal Desai wrote: I believe he meant a memory stress test, i.e. booting with a memtest86+ CD and seeing if it passed. Correct. The POST tests are not adequate. --Toby Even if the memory is OK, the stress from that test may expose defects in the power supply or other

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Krunal Desai mov...@gmail.com writes: I believe he meant a memory stress test, i.e. booting with a memtest86+ CD and seeing if it passed. Even if the memory is OK, the stress from that test may expose defects in the power supply or other components. Your CPU temperature is 56C, which is not

[zfs-discuss] Space not freed from large deleted sparse file

2010-10-27 Thread Tom Fanning
I created a 1TB file on my new FreeNAS 0.7.2 Sabanda (revision 5226) box recently using dd, in order to get an idea of write performance, and when I deleted it the space was not freed. Snapshots are not enabled: bunker:~# zfs list -t all NAME USED AVAIL REFER MOUNTPOINT tank0
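Even with no snapshots, space from a large delete is reclaimed asynchronously, so the numbers can lag for a while. Two commands to watch it converge (pool name taken from the post):

```shell
zfs list -o space tank0   # splits USED into snapshots, datasets, children, refreservation
zpool list tank0          # pool-level ALLOC/FREE; should converge once frees complete
```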

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Toby Thain t...@telegraphics.com.au writes: On 27/10/10 4:21 PM, Krunal Desai wrote: I believe he meant a memory stress test, i.e. booting with a memtest86+ CD and seeing if it passed. Correct. The POST tests are not adequate. Got it. Thank you. Short of doing such a test, I have

[zfs-discuss] Jumping ship.. what of the data

2010-10-27 Thread Harry Putnam
If I were to decide my current setup is too problem beset to continue using it, is there a guide or some good advice I might employ to scrap it out and build something newer and better in the old roomy midtower? I don't mean the hardware part, although I no doubt will need advice right through

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Krunal Desai
With an A64, I think a thermal shutdown would instantly halt CPU execution, removing the chance to write any kind of log message. memtest will report any errors in RAM; perhaps when the ARC expands to the upper-stick of memory it hits the bad bytes and crashes. Can you try switching power

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Glenn Lagasse
* Harry Putnam (rea...@newsguy.com) wrote: Toby Thain t...@telegraphics.com.au writes: On 27/10/10 4:21 PM, Krunal Desai wrote: I believe he meant a memory stress test, i.e. booting with a memtest86+ CD and seeing if it passed. Correct. The POST tests are not adequate. Got it.

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Krunal Desai mov...@gmail.com writes: With an A64, I think a thermal shutdown would instantly halt CPU execution, removing the chance to write any kind of log message. memtest will report any errors in RAM; perhaps when the ARC expands to the upper-stick of memory it hits the bad bytes and

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Bob Friesenhahn
On Wed, 27 Oct 2010, Harry Putnam wrote: I have been having some trouble with corrupted data in one pool but I thought I'd gotten it cleared up and posted to that effect in another thread. zpool status on all pools shows thumbs up. What are some key words I should be looking for in

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Peter Jeremy
On 2010-Oct-28 04:45:16 +0800, Harry Putnam rea...@newsguy.com wrote: Short of doing such a test, I have evidence already that machine will predictably shutdown after 15 to 20 minutes of uptime. My initial guess is thermal issues. Check that the fans are running correctly and there's no

Re: [zfs-discuss] Jumping ship.. what of the data

2010-10-27 Thread Peter Jeremy
On 2010-Oct-28 04:54:00 +0800, Harry Putnam rea...@newsguy.com wrote: If I were to decide my current setup is too problem beset to continue using it, is there a guide or some good advice I might employ to scrap it out and build something newer and better in the old roomy midtower? I'd scrap the

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Peter Jeremy peter.jer...@alcatel-lucent.com writes: It seems there ought to be something, some kind of evidence and clues if I only knew how to look for them, in the logs. Serious hardware problems are unlikely to be in the logs because the system will die before it can write the error to

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Mike Gerdts
On Wed, Oct 27, 2010 at 3:41 PM, Harry Putnam rea...@newsguy.com wrote: I'm guessing it was probably more like 60 to 62 C under load. The temperature I posted was after something like 5 minutes of being totally shut down, with the case having been open for a long while (months if not years). What happens

Re: [zfs-discuss] Jumping ship.. what of the data

2010-10-27 Thread Harry Putnam
Peter Jeremy peter.jer...@alcatel-lucent.com writes: See the archives for lots more discussion on suggested systems for ZFS. Any suggested search strings? Maybe at search.gmane.org. It would be too lucky to expect someone has a list of some good (up to date) setups a home NAS fellow could be

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Harry Putnam
Mike Gerdts mger...@gmail.com writes: [...] Thanks for the suggestions; I have closed it all up to see if there was a difference. Perhaps this belongs somewhere other than zfs-discuss - it has nothing to do with zfs. Yes... it does. It started out much nearer to belonging here. Not sure now

Re: [zfs-discuss] Swapping disks in pool to facilitate pool growth

2010-10-27 Thread Brandon High
On Thu, Oct 7, 2010 at 12:22 AM, Kevin Walker indigoskywal...@gmail.com wrote: I would like to know if it is viable to add larger disks to zfs pool to grow the pool size and then remove the smaller disks? I would assume this would degrade the pool and require it to resilver? You can do a zfs
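The replace-one-disk-at-a-time approach the reply describes looks roughly like this (device names hypothetical; on builds with the autoexpand property, the pool grows once every device in the vdev is larger):

```shell
zpool replace tank c0t1d0 c0t5d0   # resilver a small disk onto its larger replacement
zpool status tank                  # wait for the resilver to finish before the next swap
# ...repeat for each remaining small disk...
zpool set autoexpand=on tank       # let the vdev expand to the new disk size
```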

Re: [zfs-discuss] Swapping disks in pool to facilitate pool growth

2010-10-27 Thread David Magda
On Oct 27, 2010, at 21:17, Brandon High wrote: You may be able to replace more than one drive at the same time this way. I've never tried it, and you should test before attempting to do so. If the OP doesn't have a test system available, it may be possible to try this multi-replace

[zfs-discuss] zil behavior

2010-10-27 Thread Kei Saito
Hi! I'm interested in the ZIL's behavior. My understanding is that the ZIL is used only for synchronous write IO requests (O_SYNC, O_DSYNC, direct IO), and that the contents written to the ZIL are replayed after a crash. Is this correct? It seems a large amount of IO for system
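The questioner's understanding is broadly right: the ZIL records synchronous writes so they can be replayed after a crash, and it is not read during normal operation. One way to see its effect is to move it to a dedicated log device (device name hypothetical):

```shell
zpool add tank log c2t0d0   # a dedicated slog absorbs synchronous-write latency
zpool status tank           # the log vdev appears as its own entry in the config
```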

Re: [zfs-discuss] Jumping ship.. what of the data

2010-10-27 Thread Haudy Kazemi
Finding PCIe x1 cards with more than 2 SATA ports is difficult so you might want to make sure that either your chosen motherboard has lots of PCIe slots or has some wider slots. If you plan on using on-board video and re-using the x16 slot for something else, you should verify that the BIOS

Re: [zfs-discuss] Swapping disks in pool to facilitate pool growth

2010-10-27 Thread Brandon High
On Wed, Oct 27, 2010 at 3:56 PM, David Magda dma...@ee.ryerson.ca wrote: If the OP doesn't have a test system available, it may be possible to try this multi-replace experiment using plain files as the backing store (created with mkfile). .. or via a VirtualBox, VMWare, or other virtualization
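The file-backed experiment suggested here is cheap to set up and tear down. A sketch (sizes and paths hypothetical; mkfile is Solaris-specific):

```shell
mkfile 128m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3   # backing files for a throwaway pool
zpool create testpool raidz /tmp/d0 /tmp/d1 /tmp/d2
zpool replace testpool /tmp/d0 /tmp/d3        # rehearse the multi-replace here first
zpool destroy testpool && rm /tmp/d[0-3]      # clean up
```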