Re: [zfs-discuss] partitioned cache devices

2013-03-17 Thread Fajar A. Nugraha
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki 
andrew.werchowie...@xpanse.com.au wrote:

 I understand that p0 refers to the whole disk... in the logs I pasted in
 I'm not attempting to mount p0. I'm trying to work out why I'm getting an
 error attempting to mount p2, after p1 has successfully mounted. Further,
 this has been done before on other systems in the same hardware
 configuration in the exact same fashion, and I've gone over the steps
 trying to make sure I haven't missed something but can't see a fault.


How did you create the partitions? Are they marked as Solaris partitions, or
something else (e.g. fdisk on Linux uses type 83 by default)?

I'm not keen on using Solaris slices because I don't have an understanding
 of what that does to the pool's OS interoperability.



Linux can read Solaris slices and import Solaris-made pools just fine, as
long as you're using a compatible zpool version (e.g. zpool version 28).
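
For reference, a quick way to check (and change) the partition type from
Linux — the device name below is hypothetical:

# fdisk -l /dev/sdb     (the Id column shows the type; bf = Solaris, 83 = Linux)
# fdisk /dev/sdb        (use 't' to change a partition's type, then 'w' to write)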

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS

2012-12-05 Thread Fajar A. Nugraha
On Thu, Dec 6, 2012 at 5:11 AM, Morris Hooten mhoo...@us.ibm.com wrote:
 Is there a documented way or suggestion on how to migrate data from VXFS to
 ZFS?

Not zfs-specific, but this should work for solaris:
http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html#filesystem-15

For illumos-based distros, you'd probably just use rsync/tar/whatever
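
A minimal sketch of that approach, assuming the VxFS filesystem is mounted
at /vxfs and the target dataset at /tank/data (both names hypothetical;
VxFS-specific attributes and ACLs may need extra handling):

# zfs create tank/data
# cd /vxfs && tar cf - . | (cd /tank/data && tar xpf -)

or, if rsync is available:

# rsync -aHx /vxfs/ /tank/data/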

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rpool rename offline

2012-12-02 Thread Fajar A. Nugraha
On Mon, Dec 3, 2012 at 4:14 AM, Heiko L. h.lehm...@hs-lausitz.de wrote:
 Hello,

 Howto rename zpool offline (with zdb)?

You don't.

You simply export the pool, and import it (zpool import). Something like

# zpool import old_pool_name_or_ID new_pool_name



 I use OpenSolaris in a VM.
 Pool rpool is too small.
 So I created rpool2 and copied the FS from rpool:
 domu # zfs send -R $srcpool@$srcsnap | zfs receive -vFd $dstpool

There's more required than simply copying the data. See
http://docs.oracle.com/cd/E23824_01/html/821-1448/recover-4.html

If your VM is full virtualization (e.g. hvm), then the instructions
should be the same. If it's xen PV, then the hardest part might be
finding the correct kernel boot parameter.

 Pool rpool2 should be rename to rpool.

Not really.

The last time I tried with some version of OpenSolaris, you could use
whatever pool name you want IF you create the pool manually (e.g. using a
variant of the above instructions). Check the grub command line; it should
include the name of the pool to boot from.

However, note that using a pool name other than rpool is AFAIK
unsupported, so you might be best off just following the instructions above.
In your case that basically means booting with a rescue CD and WITHOUT
the old pool disks, importing the new pool under the new name
(rpool), and then following the relevant part of the docs.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-27 Thread Fajar A. Nugraha
On Tue, Nov 27, 2012 at 5:13 AM, Eugen Leitl eu...@leitl.org wrote:
 Now there are multiple configurations for this.
 Some using Linux (roof fs on a RAID10, /home on
 RAID 1) or zfs. Now zfs on Linux probably wouldn't
 do hybrid zfs pools (would it?)

Sure it does. You can even use the whole disk as zfs, with no
additional partition required (not even for /boot).
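
For example, a hybrid pool with an SSD split between log and cache could be
created roughly like this (device names hypothetical; on linux the
/dev/disk/by-id names are usually preferred over sdX):

# zpool create tank mirror sda sdb log sdc1 cache sdc2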

 and it wouldn't
 be probably stable enough for production. Right?

Depends on how you define stable, and what kind of in-house
expertise you have.

Some companies are selling (or plan to sell, as their product is in the
open beta stage) storage appliances powered by zfs on linux (search
the ZoL list for details). So it's definitely stable enough for them.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zvol wrapped in a vmdk by Virtual Box and double writes?

2012-11-20 Thread Fajar A. Nugraha
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 Why are you parititoning, then creating zpool,

The common case is that they use the disk for something else as well
(e.g. the OS), not only for zfs.

 and then creating zvol?

Because it lets you do other stuff more easily and faster (e.g.
copying files from the host) compared to using plain disk image files
(vmdk/vdi/vhd/whatever).

 I think you should make the whole disk a zpool unto itself, and then carve 
 out the 128G zvol and 60G zvol.  For that matter, why are you carving out 
 multiple zvol's?  Does your Guest VM really want multiple virtual disks for 
 some reason?

 Side note:  Assuming you *really* just want a single guest to occupy the 
 whole disk and run as fast as possible...  If you want to snapshot your 
 guest, you should make the whole disk one zpool, and then carve out a zvol 
 which is significantly smaller than 50%, say perhaps 40% or 45% might do the 
 trick.

... or use sparse zvols, e.g. zfs create -V 10G -s tank/vol1

Of course, that's assuming you KNOW that you'll never max out storage use
on that zvol. If you don't have control over that, then using a smaller
zvol size is indeed preferable.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Fajar A. Nugraha
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson
brian.wil...@doit.wisc.edu wrote:
 So it depends on your setup. In your case if it's at all painful to grow the 
 LUNs, what I'd probably do is allocate new 4TB LUNs - and replace your 2TB 
 LUNs with them one at a time with zpool replace, and wait for the resliver to 
 finish each time. With autoexpansion on,

Yup, that's the gotcha. AFAIK autoexpand is off by default. You should
be able to use zpool online -e to force the expansion though.

 you should get the additional capacity as soon as the resliver for each one 
 is done, and the old 2TB LUNs should be reclaimable as soon as it's 
 reslivered out.

Minor correction: the additional capacity is only usable after a top-level
vdev is completely replaced. In the case of a stripe of mirrors, that is as
soon as all the devices in one mirror have been replaced. In the case of
raidzX, it is when all the devices in the raidz vdev have been replaced.
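
A sketch of the replace-then-expand sequence for one LUN (device names
hypothetical):

# zpool replace tank c2t0d0 c2t4d0    (swap a 2TB LUN for a 4TB LUN)
# zpool status tank                   (wait for the resilver to finish)
# zpool set autoexpand=on tank
# zpool online -e tank c2t4d0         (force expansion if autoexpand was off)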

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool LUN Sizes

2012-10-28 Thread Fajar A. Nugraha
On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha

 So my
 suggestion is actually just present one huge 25TB LUN to zfs and let
 the SAN handle redundancy.

 create a bunch of 1-disk volumes and let ZFS handle them as if they're JBOD.

The last time I used IBM's enterprise storage (which was, admittedly, a
long time ago) you couldn't even do that. And looking at Morris' mail
address, it should be relevant :)

... or probably it's just me who hasn't found out how to do that. Which is
why I suggested just using whatever the SAN can present :)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool LUN Sizes

2012-10-26 Thread Fajar A. Nugraha
On Sat, Oct 27, 2012 at 4:08 AM, Morris Hooten mhoo...@us.ibm.com wrote:
 I'm creating a zpool that is 25TB in size.

 What are the recommendations in regards to LUN sizes?

 For example:

 Should I have 4 x 6.25 TB LUNS to add to the zpool or 20 x 1.25TB LUNs to
 add to the pool?

 Or does it depend on the size of the san disks themselves?

More like you shouldn't let the SAN mess with the disks and let zfs
see the disks as JBOD.

... but then again, your SAN might not let you do that. So my
suggestion is actually just present one huge 25TB LUN to zfs and let
the SAN handle redundancy.

Or if your SAN can't do that either, just let it give you the biggest LUNs
it can, and simply stripe them on the zfs side.
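
In command form, those two fallbacks look roughly like this (LUN device
names hypothetical):

# zpool create tank c0t1d0                   (one big 25TB LUN)
# zpool create tank c0t1d0 c0t2d0 c0t3d0     (or stripe several smaller LUNs,
                                              with no zfs-level redundancy)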



 Or should I divide the zpool up and make several smaller zpools?

If you're going to use them for a single purpose anyway (e.g. storing
database files from a single db, or whatever), I don't see the benefit
in doing that.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing rpool device paths/drivers

2012-10-03 Thread Fajar A. Nugraha
On Wed, Oct 3, 2012 at 5:43 PM, Jim Klimov jimkli...@cos.ru wrote:
 2012-10-03 14:40, Ray Arachelian wrote:

 On 10/03/2012 05:54 AM, Jim Klimov wrote:

 Hello all,

It was often asked and discussed on the list about how to
 change rpool HDDs from AHCI to IDE mode and back, with the
 modern routine involving reconfiguration of the BIOS, bootup
 from separate live media, simple import and export of the
 rpool, and bootup from the rpool.

IIRC when working with xen I had to boot with a live cd, import the
pool, then power off (without exporting the pool). Then it could boot.
Somewhat in line with what you described.

 The documented way is to
 reinstall the OS upon HW changes. Both are inconvenient to
 say the least.


 Any chance to touch /reconfigure, power off, then change the BIOS
 settings and reboot, like in the old days?   Or maybe with passing -r
 and optionally -s and -v from grub like the old way we used to
 reconfigure Solaris?


 Tried that, does not help. Adding forceloads to /etc/system
 and remaking the boot archive - also no.

On Ubuntu + zfsonlinux + root/boot on zfs, the boot script helper is
smart enough to try all available device nodes, so it wouldn't
matter if the dev path/id/name changed. But ONLY if there's no
zpool.cache in the initramfs.

Not sure how easy it would be to port that functionality to solaris.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iscsi confusion

2012-09-28 Thread Fajar A. Nugraha
On Sat, Sep 29, 2012 at 3:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 I am confused, because I would have expected a 1-to-1 mapping, if you create
 an iscsi target on some system, you would have to specify which LUN it
 connects to.  But that is not the case...

Nope. One target can have anything from zero (which is kinda useless)
to many LUNs.

 I shouldn't be thinking in such linear terms.  When I create an iscsi
 target, don't think of it as connecting to a device - instead, think of it
 as sort of a channel.  Any initiator connecting to it can see any of the
 devices that I have done add-views on.

Yup

  But each iscsi target can only be
 used by one initiator at a time.

Nope. Many people use iscsi to provide shared storage (e.g. for
clustering), where two or more initiators connect to the same target.
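
For illustration, a minimal COMSTAR sketch on a reasonably recent
Solaris/illumos — one zvol-backed LU, visible through one target to any
initiator (zvol name hypothetical; the GUID is printed by create-lu):

# zfs create -V 100G tank/lun0
# svcadm enable stmf
# stmfadm create-lu /dev/zvol/rdsk/tank/lun0
# stmfadm add-view <lu-guid>
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target

Several initiators can then log in to that one target at the same time;
whether they can safely share the LU is up to the filesystem/cluster
software on top.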

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-16 Thread Fajar A. Nugraha
On Sun, Sep 16, 2012 at 7:43 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 There's another lesson to be learned here.

 As mentioned by Matthew, you can tweak your reservation (or refreservation) 
 on the zvol, but you do so at your own risk, possibly putting yourself into a 
 situation where writes to the zvol might get denied.

 But the important implied meaning is the converse - If you have guest VM's in 
 the filesystem (for example, if you're sharing NFS to ESX, or if you're 
 running VirtualBox) then you might want to set the reservation (or 
 refreservation) for those filesystems modeled after the zvol  behavior.  In 
 other words, you might want to guarantee that ESX or VirtualBox can always 
 write.  It's probably a smart thing to do, in a lot of situations.

I'd say just do what you normally do.

In my case, I use sparse files or dynamic disk images anyway, so when
I use zvols I use zfs create -s. That single switch sets reservation
and refreservation to none.
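
For example (dataset name hypothetical):

# zfs create -s -V 20G tank/vm1
# zfs get volsize,reservation,refreservation tank/vm1

With -s both reservation properties show none, so the zvol only consumes
space as data is actually written.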

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Fajar A. Nugraha
On Thu, Aug 30, 2012 at 9:08 PM, Nomen Nescio nob...@dizum.com wrote:
 In this specific use case I would rather have a system that's still bootable
 and runs as best it can

That's what would happen if the corruption only affects part of the disk
(e.g. a bad sector).

 than an unbootable system that has detected an
 integrity problem especially at this point in ZFS's life. If ZFS would not
 panic the kernel and give the option to fail or mark file(s) bad,

You'd be unable to access that particular file. Access to other files
would still be fine.

 it
 would be very annoying if ZFS barfed on a technicality and I had to
 reinstall the whole OS because of a kernel panic and an unbootable system.

It shouldn't do that.

Plus, if you look around a bit, you'll find some tutorials on backing up
the entire OS using zfs send/receive. So even if for some reason the
OS becomes unbootable (e.g. blocks of some critical file are corrupted,
which would cause a panic/crash no matter what filesystem you use), the
reinstall process is basically just a zfs send/receive plus
installing the bootloader, so it can be VERY fast.
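
A rough sketch of such a backup, assuming the root pool is called rpool and
the backup disk holds a pool called backup (both names hypothetical):

# zfs snapshot -r rpool@backup1
# zfs send -R rpool@backup1 | zfs receive -uF backup/rpool

and later, incrementally:

# zfs snapshot -r rpool@backup2
# zfs send -R -i backup1 rpool@backup2 | zfs receive -uF backup/rpool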

This is what I do on linux (ubuntu + zfsonlinux). Two notebooks and
one USB disk (which functions as a rescue/backup disk) basically store
the same copy of the OS dataset, with very small variations (only four
files) for each environment. I can even update one of them and copy
the update result (using incremental send) to the others, making sure
I will always have the same working environment no matter which
notebook I'm working on.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Fajar A. Nugraha
On Thu, Aug 30, 2012 at 11:15 PM, Nomen Nescio nob...@dizum.com wrote:

 Plus, if you look around a bit, you'll find some tutorials to back up
 the entire OS using zfs send-receive. So even if for some reason the
 OS becomes unbootable (e.g. blocks on some critical file is corrupted,
 which would cause panic/crash no matter what filesystem you use), the
 reinstall process is basically just a zfs send-receive plus
 installing the bootloader, so it can be VERY fast.

 Now that is interesting. But how do you do a receive before you reinstall?
 Live cd??

Live CD, live USB, or better yet, a full-blown installation on a USB
disk. This is different from a live USB in that it's faster and you
can customize it (i.e. add/remove packages) just like a normal
installation.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Fajar A. Nugraha
On Tue, Jul 10, 2012 at 4:25 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
 Hi all,

 By default I'm using ZFS for all the zones:

 admjoresp@cyd-caszonesrv-15:~$ zfs list
 NAME USED  AVAIL  REFER  MOUNTPOINT
 opt 4.77G  45.9G   285M  /opt
 opt/zones   4.49G  45.9G29K  /opt/zones
 opt/zones/glad-gm02-ftcl01   367M  45.9G   367M  /opt/zones/glad-gm02-ftcl01
 opt/zones/glad-gp02-ftcl01   502M  45.9G   502M  /opt/zones/glad-gp02-ftcl01
 opt/zones/glad-gp02-ftcl02  1.21G  45.9G  1.21G  /opt/zones/glad-gp02-ftcl02
 opt/zones/mbd-tcasino-02 257M  45.9G   257M  /opt/zones/mbd-tcasino-02
 opt/zones/mbd-tcasino-04 281M  45.9G   281M  /opt/zones/mbd-tcasino-04
 opt/zones/mbfd-gp02-ftcl01   501M  45.9G   501M  /opt/zones/mbfd-gp02-ftcl01
 opt/zones/mbfd-gp02-ftcl02   475M  45.9G   475M  /opt/zones/mbfd-gp02-ftcl02
 opt/zones/mbhd-gp02-ftcl01   475M  45.9G   475M  /opt/zones/mbhd-gp02-ftcl01
 opt/zones/mbhd-gp02-ftcl02   507M  45.9G   507M  /opt/zones/mbhd-gp02-ftcl02

 However, I have the compression disabled in all of them.

 According to this Oracle whitepaper
 http://www.oracle.com/technetwork/server-storage/solaris10/solaris-zfs-in-containers-wp-167903.pdf:

 The next example demonstrates the compression property. If compression is
 enabled, Oracle Solaris ZFS will transparently compress all of the data
 before it is written to disk. The benefits of compression
 are both saved disk space and possible write speed improvements.

 What exactly means POSSIBLE write speed improvements?

compression = possibly less data to write (depending on the data) =
possibly faster writes

Some data is not compressible (e.g. mpeg4 movies), so in that case you
won't see any improvements.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Fajar A. Nugraha
On Tue, Jul 10, 2012 at 4:40 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
 On 2012-07-10 11:34, Fajar A. Nugraha wrote:

 compression = possibly less data  to write (depending on the data) =
 possibly faster writes

 Some data is not compressible (e.g. mpeg4 movies), so in that case you
 won't see any improvements.


 Thanks for your answer Fajar.

 As I said in my initial mail, those zones are mainly only writing some
 Glassfish logs. Since they all are text files, I guess I can save a lot of
 space using compression. Hopefully I can even improve the performance. Can
 I?


Correct.

Even normal OS files are usually compressible enough. For example,
this is my root partition (Ubuntu, but it uses zfs nonetheless):

$ sudo zfs get compression,compressratio C/ROOT/precise
NAMEPROPERTY   VALUE SOURCE
C/ROOT/precise  compressiongzip  inherited from C
C/ROOT/precise  compressratio  2.48x -

So on that dataset I save over 50% of read/write I/O (in bytes).

 However. What's the difference using

 zfs set compression=on opt/zones/whatever_zone

on = standard LZJB compression (very fast, but doesn't compress much)


 or

 zfs set compression=gzip-6 opt/zones/whatever_zone

gzip-6 and gzip both use gzip compression at level 6. Fast enough, good compression.


 or

 zfs set compression=gzip-9 opt/zones/whatever_zone

gzip-9 uses gzip's best, but also slowest, compression.
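
So for mostly-text log data, something like this is a reasonable starting
point (using one of the datasets from the listing above); note that
compression only applies to data written after the property is set:

# zfs set compression=gzip-6 opt/zones/glad-gm02-ftcl01
# zfs get compression,compressratio opt/zones/glad-gm02-ftcl01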

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very sick iSCSI pool

2012-07-02 Thread Fajar A. Nugraha
On Tue, Jul 3, 2012 at 11:08 AM, Ian Collins i...@ianshome.com wrote:
 I'm assuming the pool is hosed?

 Before making that assumption, I'd try something simple first:
 - reading from the imported iscsi disk (e.g. with dd) to make sure
 it's not iscsi-related problem
 - import the disk in another host, and try to read the disk again, to
 make sure it's not client-specific problem
 - possibly restart the iscsi server, just to make sure

 Booting the initiator host from a live DVD image and attempting to
 import the pool gives the same error report.


 The pool's data appears to be recoverable when I import it read only.

That's good


 The storage appliance is so full they can't delete files from it!

Hahaha :D

  Now that
 shouldn't have caused problems with a fixed sized volume, but who knows?

AFAIK you'll always need space, e.g. to replay/rollback transactions
during pool import.

The best way is, of course, to fix the appliance. Sometimes something
simple like deleting snapshots/datasets will do the trick.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very sick iSCSI pool

2012-06-30 Thread Fajar A. Nugraha
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins i...@ianshome.com wrote:
 On 06/30/12 03:01 AM, Richard Elling wrote:

 Hi Ian,
 Chapter 7 of the DTrace book has some examples of how to look at iSCSI
 target
 and initiator behaviour.


 Thanks Richard, I 'll have a look.

 I'm assuming the pool is hosed?

Before making that assumption, I'd try something simple first:
- reading from the imported iscsi disk (e.g. with dd) to make sure
it's not iscsi-related problem
- import the disk in another host, and try to read the disk again, to
make sure it's not client-specific problem
- possibly restart the iscsi server, just to make sure

I suspect the problem is with your oracle storage appliance. But since
you say there are no errors there, the simple tests should show
whether it's a client, disk, or zfs problem.
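
For the dd check, something as simple as this is enough (device name
hypothetical):

# dd if=/dev/rdsk/c3t0d0s0 of=/dev/null bs=1024k count=10000
# iostat -En     (then look at the error counters for the iscsi disk)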

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Fajar A. Nugraha
On Mon, Jun 18, 2012 at 2:19 PM, Koopmann, Jan-Peter
jan-pe...@koopmann.eu wrote:
 Hi Carson,


 I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
 They also make a 4-bay TR4X.

 http://www.sansdigital.com/towerraid/tr4xb.html
 http://www.sansdigital.com/towerraid/tr8xb.html


 looks nice! The only thing coming to mind is that according to the
 specifications the enclosure is 3Gbits only.

You mean http://www.sansdigital.com/towerraid-plus/index.php ?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Fajar A. Nugraha
On Wed, Apr 18, 2012 at 6:43 PM, Jim Klimov jimkli...@cos.ru wrote:
 Hmmm, how come they have encryption and we don't?

Cause the author didn't really try it :)

If he had, he would've known that encryption doesn't work (unless you
encrypt the underlying storage with luks, which doesn't count), and
that the Ubuntu ppa is also usable on debian, so he wouldn't have had
to compile from source or worry about misplaced manpages.

 Can GNU/Linux boot off raidz roots?

boot as in root (/) located on raidz, yes.

boot as in the whole disk including /boot is on raidz, no, due to a
grub2 limitation. /boot still needs to be on a separate ext2/3/4
partition (recommended), or on a non-raidz pool (possibly with some
additional hacks).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-25 Thread Fajar A. Nugraha
On Mon, Mar 26, 2012 at 2:13 AM, Aubrey Li aubrey...@gmail.com wrote:
 The problem is, every zfs vnode access need the **same zfs root**
 lock. When the number of
 httpd processes and the corresponding kernel threads becomes large,
 this root lock contention
 becomes horrible. This situation does not occurs on linux.


 I disagree with your conclusion and I've seen ZFS systems do millions of
 stats()
 per second without issue. What does prstat -Lm show?
  -- richard


 I had never seen any issues until I did a comparison with Linux.

So basically you're comparing linux + ext3/4 performance with solaris
+ zfs, on the same hardware? That's not really fair, is it?
If your load is I/O-intensive enough, ext3/4 will easily win since it
doesn't have to do things like calculate checksums.

Now if you compare it to, say, btrfs or zfsonlinux, it'd be more
in the same ballpark.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-25 Thread Fajar A. Nugraha
On Mon, Mar 26, 2012 at 12:19 PM, Richard Elling
richard.ell...@richardelling.com wrote:
 Apologies to the ZFSers, this thread really belongs elsewhere.

Some of the info in it is informative for other zfs users as well though :)

 Here is the output, I changed to tick-5sec and trunc(@, 5).

 No.2 and No.3 is what I care about.

 The sort is in reverse order. The large number you see below the
 stack trace is the number of times that stack was seen. By far the
 most frequently seen is tmpfs`tmp_readdir

So to summarize, even though there are locks in zfs, they are normal,
and insignificant performance-wise (at least in this case). And if
performance is a problem, the cause is elsewhere.

Is that correct?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Fajar A. Nugraha
On Thu, Mar 8, 2012 at 4:38 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
 Hi,

 I had a single-disk zpool (export) and was given two new disks for expanded
 storage. All three disks are identically sized, no slices/partitions. My
 goal is to create a raidz1 configuration of the three disks, containing the
 data in the original zpool.

 However, I got off on the wrong foot by doing a zpool add of the first
 disk. Apparently this has simply increased my storage without creating a
 raidz config.

IIRC you can't convert a single-disk (or striped) pool to raidz. You
can only convert it to a mirror. So even your intended approach (you
wanted to try zpool attach?) was not appropriate.


 Unfortunately, there appears to be no simple way to just remove that disk
 now and do a proper raidz create of the other two. Nor am I clear on how
 import/export works and whether that's a good way to copy content from one
 zpool to another on a single host.

 Can somebody guide me? What's the easiest way out of this mess, so that I
 can move from what is now a simple two-disk zpool (less than 50% full) to a
 three-disk raidz configuration, starting with one unused disk?

- use the one new disk to create a temporary pool
- copy the data (zfs snapshot -r + zfs send -R | zfs receive)
- destroy old pool
- create a three-disk raidz pool using two disks and a fake device (see
the sketch after this list), something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
- destroy the temporary pool
- replace the fake device with now-free disk
- export the new pool
- import the new pool and rename it in the process: zpool import
temp_pool_name old_pool_name
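
A sketch of the fake-device trick from that link (disk names and sizes
hypothetical; the sparse file only has to claim to be at least as large as
the real disks):

# mkfile -n 2000g /tmp/fakedisk
# zpool create newpool raidz c1t1d0 c1t2d0 /tmp/fakedisk
# zpool offline newpool /tmp/fakedisk
# rm /tmp/fakedisk

(copy the data over, destroy the temporary pool, then:)

# zpool replace newpool /tmp/fakedisk c1t0d0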

 In the end I
 want the three-disk raidz to have the same name (and mount point) as the
 original zpool. There must be an easy way to do this.

Nope.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Fajar A. Nugraha
On Thu, Mar 8, 2012 at 5:48 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
 Wait, I'm not following the last few steps you suggest. Comments inline:


 On 03/07/12 17:03, Fajar A. Nugraha wrote:

 - use the one new disk to create a temporary pool
 - copy the data (zfs snapshot -r + zfs send -R | zfs receive)
 - destroy old pool
 - create a three-disk raidz pool using two disks and a fake device,
 something like http://www.dev-eth0.de/creating-raidz-with-missing-device/


 Don't I need to copy the data back from the temporary pool to the new raidz
 pool at this point?

Yes, I missed it :)
That's what you get for writing mail at 5 am :P

 I'm not understanding the process beyond this point, can you clarify please?


 - destroy the temporary pool


 So this leaves the data intact on the disk?


Destroy it after the data is copied back, of course.


 - replace the fake device with now-free disk


 So this replicates the data on the previously-free disk across the raidz
 pool?

Not really.
The fake disk was never written to, because it was destroyed soon after
being created (see the link), so the pool was degraded. The replace process
tells zfs to use the new disk to make the pool not degraded anymore, by
writing the necessary data (e.g. raidz parity, although that might not
be the most accurate way to describe it) to the new disk.


 What's the point of the following export/import steps? Renaming?

Yes

 Why can't I
 just give the old pool name to the raidz pool when I create it?

Cause you can't have two pools with the same name. You either need to
rename the old pool first, or rename the new pool afterwards.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Fajar A. Nugraha
On Thu, Mar 8, 2012 at 10:28 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
 On 3/7/2012 9:04 PM, Fajar A. Nugraha wrote:

 Why can't I
 just give the old pool name to the raidz pool when I create it?

 Cause you can't have two pools with the same name. You either need to
 rename the old pool first, or rename the new pool afterwards.

 But in your instructions you have me destroying the old pool before creating
 the new raidz pool, so it seems I can create the new pool with the old name.

You're probably right :)

 This means I don't need the export/import at the end, doesn't it?

Yup.


 So I think the steps are:


 - use the one new disk to create a temporary pool
 - copy the data (zfs snapshot -r + zfs send -R | zfs receive)

I'd probably add: verify that your data is copied and accessible in the
temp pool, just to be sure.

 - destroy old pool
 - create a three-disk raidz pool (with the old pool name) using two disks
 and a fake device,
 something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
 - copy data to new pool from temp pool

... and here as well, verify that your data is copied and accessible
in the new pool, just to be sure.



 - destroy the temporary pool
 - replace the fake device with now-free disk


Yup.


 I think that's it. Does this look right? I very much appreciate your
 assistance here. Kinda important to me that I get this right :-)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-05 Thread Fajar A. Nugraha
On Fri, Jan 6, 2012 at 12:32 PM, Jesus Cea j...@jcea.es wrote:
 So, my questions:

 a) Is this workflow reasonable and would work?. Is the procedure
 documented anywhere?. Suggestions?. Pitfalls?

try 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery


 b) *MUST* SWAP and DUMP ZVOLs reside in the root zpool or can they
 live in a nonsystem zpool? (always plugged and available). I would
 like to have a quite small(let say 30GB, I use Live Upgrade and quite
 a fez zones) system zpool, but my swap is huge (32 GB and yes, I use
 it) and I would rather prefer to have SWAP and DUMP in the data zpool,
 if possible  supported.

try it? :D

Last time I played around with S11, you could even go without swap and
dump (with some manual setup).
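
If you want to try it, the rough shape would be something like this (pool
name and sizes hypothetical; whether dumpadm accepts a zvol on your data
pool depends on the release and pool layout):

# zfs create -V 32G data/swap
# swap -a /dev/zvol/dsk/data/swap
# zfs create -V 16G data/dump
# dumpadm -d /dev/zvol/dsk/data/dump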


 c) Currently Solaris decides to activate write caching in the SATA
 disks, nice. What would happen if I still use the complete disks BUT
 with two slices instead of one?. Would it still have write cache
 enabled?. And yes, I have checked that the cache flush works as
 expected, because I can only do around one hundred write+sync per
 second.

You can enable the disk write cache manually using format (in expert mode, format -e).
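
Roughly (the menu layout can differ between releases):

# format -e
(select the disk)
format> cache
cache> write_cache
write_cache> enable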

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any HP Servers recommendation for Openindiana (Capacity Server) ?

2012-01-03 Thread Fajar A. Nugraha
On Wed, Jan 4, 2012 at 1:36 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
 On Tue, Jan  3 at  8:03, Gary Driggs wrote:

 I can't comment on their 4U servers but HP's 12U included SAS
 controllers rarely allow JBOD discovery of drives. So I'd recommend an
 LSI card and an external storage chassis like those available from
 Promise and others.


 That was what got us with the HP boxes was the unsupported RAID cards.

See also http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/48231
, Edmund's post in particular.
The DL370 G6 should be able to handle a max of 14x3TB drives.

 We ended up getting Dell T610 boxes with SAS6i/R cards, which are
 properly supported in Solaris/OI.

 Supposedly the H200/H700 cards are just their name for the 6gbit LSI
 SAS cards, but I haven't tested them personally.

Were the Dell cards able to present the disks as JBOD without any
third-party-flashing involved?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-29 Thread Fajar A. Nugraha
On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote:
 Is there a non-disruptive way to undeduplicate everything and expunge
 the DDT?

AFAIK, no.

  zfs send/recv and then back perhaps (we have the extra
 space)?

That should work, but it's disruptive :D

Others might provide a better answer though.
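
If you do go the send/receive route, a minimal sketch (dataset names
hypothetical) — the copy is written without dedup as long as dedup is off
at receive time, and the DDT only shrinks once the old deduped blocks are
freed:

# zfs set dedup=off tank
# zfs snapshot -r tank/data@undedup
# zfs send -R tank/data@undedup | zfs receive -u tank/data_new
(verify the copy, then:)
# zfs destroy -r tank/data
# zfs rename tank/data_new tank/data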

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Fajar A. Nugraha
On Tue, Dec 20, 2011 at 9:51 AM, Frank Cusack fr...@linetwo.net wrote:
 If you don't detach the smaller drive, the pool size won't increase.  Even
 if the remaining smaller drive fails, that doesn't mean you have to detach
 it.  So yes, the pool size might increase, but it won't be unexpectedly.
 It will be because you detached all smaller drives.  Also, even if a smaller
 drive is failed, it can still be attached.

Isn't autoexpand=off by default, so it won't use the larger size anyway?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-18 Thread Fajar A. Nugraha
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek p...@freebsd.org wrote:
 BTW. Can you, Cindy, or someone else reveal why one cannot boot from
 RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
 would have to be licensed under GPL as the rest of the boot code?

 I'm asking, because I see no technical problems with this functionality.
 Booting off of RAIDZ (even RAIDZ3) and also from multi-top-level-vdev
 pools works just fine on FreeBSD for a long time now.

Really? How do they do that?

In Linux, you can boot from disks with a GPT label using grub2, and have
/ on raidz, but only as long as /boot is on a grub2-compatible fs
(e.g. a single-device or mirrored zfs pool, ext4, etc.).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-18 Thread Fajar A. Nugraha
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
 The affected pool does indeed have a mix of straight disks and
 mirrored disks (due to running out of vdevs on the controller),
 however it has to be added that the performance of the affected pool
 was excellent until around 3 weeks ago, and there have been no
 structural changes nor to the pools neither to anything else on this
 server in the last half year or so.

Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
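
Both are quick to check (pool name hypothetical):

# zpool list tank                    (the CAP column shows how full it is)
# zpool history tank | grep dedup    (shows whether dedup was ever enabled)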

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-18 Thread Fajar A. Nugraha
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
 Hi,

 On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
 Is the pool over 80% full? Do you have dedup enabled (even if it was
 turned off later, see zpool history)?

 The pool stands at 86%, but that has not changed in any way that
 corresponds chronologically with the sudden drop in performance on the
 pool.

From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it seems to be inaccessible
now):


Keep pool space under 80% utilization to maintain pool performance.
Currently, pool performance can degrade when a pool is very full and
file systems are updated frequently, such as on a busy mail server.
Full pools might cause a performance penalty, but no other issues. If
the primary workload is immutable files (write once, never remove),
then you can keep a pool in the 95-96% utilization range. Keep in mind
that even with mostly static content in the 95-96% range, write, read,
and resilvering performance might suffer.


I'm guessing that your nearly-full pool, combined with your usage
pattern, is the cause of the slowdown. Try freeing up some space
(e.g. get it down to about 75% full), just to be sure.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Fajar A. Nugraha
On Sat, Dec 17, 2011 at 6:48 AM, Edmund White ewwh...@mac.com wrote:
 If you can budget 4U of rackspace, the DL370 G6
 is a good option that can accommodate 14LFF or 24 SFF disks (or a
 combination). I've built onto DL180 G6 systems as well. If you do the
 DL180 G6, you'll need a 12-bay LFF model. I'd recommend a Lights-Out 100
 license key to gain remote console. The backplane has a built-in SAS
 expander, so you'll only have a single 4-lane SAS cable to the controller.
 I typically use LSI controllers. In the DL180, I would spec a LSI 9211-4i
 SAS HBA. You have room to mount a ZIL or L2Arc internally and leverage the
 motherboard SATA ports. Otherwise, consider a LSI 9211-8i HBA and use the
 second 4-lane SAS connector for those.

 See: http://www.flickr.com/photos/ewwhite/sets/72157625918734321/ for an
 example of the DL380 G7 build.

I assume you bought the controller separately, not from HP, right? Are
there any other parts you need to buy separately (e.g. cables)?
How about the disks? Are they from HP?


-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Fajar A. Nugraha
On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack fr...@linetwo.net wrote:
 I haven't been able to get this working.  To keep it simpler, next I am
 going to try usbcopy of the live USB image in the VM, and see if I can boot
 real hardware from the resultant live USB stick.

To be clear, I'm talking about two things:
(1) live USB, created from Live CD
(2) solaris installed on USB

The first one works on real hardware, but not in a VM. The cause is
simple: it seems some boot code somewhere searches ONLY removable media
for the live solaris image. Since you need to map the USB disk as a regular
disk (SATA/IDE/SCSI) in a VM to be able to boot from it, you won't be
able to boot a live usb in a VM.

The second one works on both real hardware and a VM, BUT with the
prerequisite that you have to export-import the rpool first on that
particular system. Unless you already have solaris installed, this
usually means you need to boot with a live cd/usb first.

I'm not sure what you mean by usbcopy of the live USB image in the
VM, and see if I can boot real hardware from the resultant live USB
stick. If you're trying to create (1), it'd be simpler to just use a
live cd on real hardware, and if necessary create the live usb there (MUCH
faster than on a VM). If you mean (2), then it won't work unless you
boot with a live cd/usb first.

Oh and for reference, instead of usbcopy, I prefer using this method:
http://blogs.oracle.com/jim/entry/how_to_create_a_usb

-- 
Fajar


 On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha l...@fajar.net wrote:

 On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov jimkli...@cos.ru wrote:
  Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
  seems to have gone away.
 
  I haven't used sol11 yet, so I can't say for certain.
  But it is possible that the default boot (without findroot)
  would use the bootfs property of your root pool.

 Nope.

 S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
 property is no longer used.

 Anyway, after some testing, I found out you CAN use vbox-installed s11
 usb stick on real notebook (enough hardware difference there). The
 trick is you have to import-export the pool on the system you're going
 to boot the stick on. Meaning, you need to have S11 live cd/usb handy
 and boot with that first before booting using your disk.

 --
 Fajar
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Fajar A. Nugraha
On Wed, Nov 30, 2011 at 2:35 PM, Frank Cusack fr...@linetwo.net wrote:
 The second one works on both real hardare and VM, BUT with a
 prequisite that you have to export-import rpool first on that
 particular system. Unless you already have solaris installed, this
 usually means you need to boot with a live cd/usb first.


 yup.  I didn't quite do that, what I did is exit to shell after installing
 (from install CD) onto the USB.  Then in the shell from the install CD I did
 the zpool export.  The resultant USB is still unbootable for me on real
 hardware.

It won't work unless you do the export-import on the real hardware. Blame
oracle for that. Even zfsonlinux works without this hassle.

... then again your kind of use case is probably not a supported
configuration anyway, and there's no incentive for Oracle to fix it
:)


 Anyway, the point of that story is that I tried to install onto it as as USB
 device, instead of as a SATA device, in case something special happens to
 make USB bootable that doesn't happen when the S11 installer thinks it's a
 SATA device.  But I was unable to complete that test.

Not sure about solaris, but in linux a grub1 installation would fail if
the BIOS does not list the disk as bootable. Virtualbox definitely
does not support booting from a passthru USB disk, so that may be the
source of your problem.

Mapping it as SATA disk should work as expected.

 I don't use live cd on real hardware because that doesn't meet my objective
 of being able to create a removable boot drive, created in a VM, that I can
 boot on real hardware if I wanted to.  I mean, I could do it that way, but I
 want to be able to do this in a 100% VM environment.

I use ubuntu for that, which works fine :D
It also supports zfs (via zfsonlinux), albeit limited to pool version
28 (same as openindiana)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-22 Thread Fajar A. Nugraha
On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov jimkli...@cos.ru wrote:
 Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
 seems to have gone away.

 I haven't used sol11 yet, so I can't say for certain.
 But it is possible that the default boot (without findroot)
 would use the bootfs property of your root pool.

Nope.

S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
property is no longer used.

Anyway, after some testing, I found out you CAN use vbox-installed s11
usb stick on real notebook (enough hardware difference there). The
trick is you have to import-export the pool on the system you're going
to boot the stick on. Meaning, you need to have S11 live cd/usb handy
and boot with that first before booting using your disk.
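
From the live environment the import/export itself is just (assuming the
usual pool name rpool):

# zpool import -f rpool
# zpool export rpool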

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Fajar A. Nugraha
On Tue, Nov 22, 2011 at 11:26 AM, Frank Cusack fr...@linetwo.net wrote:
 I have a Sun machine running Solaris 10, and a Vbox instance running Solaris
 11 11/11.  The vbox machine has a virtual disk pointing to /dev/disk1
 (rawdisk), seen in sol11 as c0t2.

 If I create a zpool on the Sun s10 machine, on a USB stick, I can take that
 USB stick and access it through the vbox virtual disk.  Just as expected.

 If I boot vbox from the s11 ISO, and install s11 onto USB stick (via the
 virtual device), I can boot the Sun machine from it, which puts up the grub
 menu but then fails to boot Solaris.  There's some kind of error which might
 not be making it to /SP/console, but after grub it seems to hang for a few
 seconds then reboot.

 The vbox happens to be running on Mac OS 10.6.x.

 This *should* work, yes?  Any thoughts as to why it doesn't?

 Not that this should matter, but on the vbox machine, sol11 sees the USB
 stick as a normal SATA hard drive, e.g. when I run 'format' it is in the
 list of drives.  On the Sun machine, it is seen as a removable drive by s10,
 e.g. I have to run 'format -e' to see the drive.

So basically the question is if you install solaris on one machine,
can you move the disk (in this case the usb stick) to another machine
and boot it there, right?

The answer, as far as I know, is NO, you can't. Of course, I could be
wrong (and in this case I'll be happy if I'm wrong :D ). IIRC
the only supported way to move (or clone) a solaris installation is by
using flash archive (flar), which (now) should also work on zfs.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Fajar A. Nugraha
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net wrote:
 On Mon, Nov 21, 2011 at 9:04 PM, Fajar A. Nugraha w...@fajar.net wrote:

 So basically the question is if you install solaris on one machine,
 can you move the disk (in this case the usb stick) to another machine
 and boot it there, right?

 Yes, but one of the machines is a virtual machine.

It shouldn't matter, really. As far as solaris (or any other OS) is
concerned, it's just a different machine.


 The answer, as far as I know, is NO, you can't. Of course, I could be
 wrong though (and in this case I'll be happy if I'm wrong :D ). IIRC
 the only supported way to move (or clone) solaris installation is by
 using flash archive (flar), which (now) should also work on zfs.


 If we ignore the vbox aspect of it, and assume real hardware with real
 devices, of course you can install on one x86 hardware and move the drive to
 boot on another x86 hardware.  This is harder on SPARC (b/c hostid and zfs
 mount issues) but still possible.

Have you tried? :D

IIRC there was a discussion about it (several years ago, I think), and
the issue back then was that there might be some necessary device
nodes not available when you simply move the disk around.


 The weird thing here is that the install hardware is a virtual machine.  One
 thing I know is odd is that the USB drive is seen to the virtual machine as
 a SATA drive

That's how it works when you use rawdisk passthrough. Virtualbox does
not have the necessary USB-boot support (yet).
Think of it like you have a SATA disk in a usb enclosure, but now you
remove the enclosure and plug it directly into the onboard SATA
controller.

 but when moved to the real hardware it's seen as a USB drive.

... and that's how it should be

 There may be something else going on here that someone more familiar with
 vbox may know more about.

 Since this works seamlessly when the zpool in question is just a data pool,
 I'm wondering why it doesn't work when it's a boot drive.

I have a hunch that it might be something related to grub. Trying something ...

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Fajar A. Nugraha
On Tue, Nov 22, 2011 at 12:53 PM, Frank Cusack fr...@linetwo.net wrote:
 On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha w...@fajar.net wrote:

 On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net wrote:
 
  If we ignore the vbox aspect of it, and assume real hardware with real
  devices, of course you can install on one x86 hardware and move the
  drive to
  boot on another x86 hardware.  This is harder on SPARC (b/c hostid and
  zfs
  mount issues) but still possible.

 Have you tried? :D

 Yes, I do this all the time.  Between identical hardware, though.  It used
 to be tricky when you had to know actual device paths and/or /dev/dsk/*
 names but with zfs that issue has gone away and it doesn't matter if drives
 show up at different locations when moving the boot drive around.


Ah, you're more experienced than I am then. In that case you might want to try:
- boot with live CD on your sun box
- plug your usb drive there
- force-import then export your usb root pool (to eliminate any disk
path or ID problem)
- try boot from usb drive
- if the above still doesn't work, try running installgrub (see the sketch below):
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_and_Boot_Issues
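
For that installgrub step, the usual x86 invocation is (the slice name is
hypothetical — it has to be the slice holding the root pool):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0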

I'm still trying to install sol11 on USB, but it's dreadfully slow on
my system (not sure why)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Fajar A. Nugraha
On Fri, Nov 11, 2011 at 2:52 PM, darkblue darkblue2...@gmail.com wrote:
 I recommend buying either the oracle hardware or the nexenta on whatever
 they recommend for hardware.

 Definitely DO NOT run the free version of solaris without updates and
 expect it to be reliable.

 That's a bit strong.  Yes I do regularly update my supported (Oracle)
 systems, but I've never had problems with my own build Solaris Express
 systems.

 I waste far more time on (now luckily legacy) fully supported Solaris 10
 boxes!

 what does it mean?

It means some people have experienced problems on both supported and
unsupported solaris boxes, but using Oracle hardware would give you a
higher chance of having fewer problems, since Oracle (supposedly) tests
their software on their hardware regularly to make sure it all works
nicely.

 I am going to install solaris 10 u10 on this server. Is there any
 compatibility problem?

As mentioned earlier, if you want a fully-tested configuration, running
solaris on oracle hardware is a no-brainer choice.

Another alternative is using nexenta on hardware they certify, like
http://www.nexenta.com/corp/newsflashes/86-2010/728-nexenta-announces-supermicro-partnership
, since they've run enough tests on the combination.

Also, if you look at posts on this lists, the usual recommendation is
to use SAS disks instead of SATA for best performance and reliability.

 and which version of solaris or solaris derived do you suggest to build
 storage with the above hardware.

Why not the recently-released solaris 11?

And while we're on the subject, if using legal software is among your
concerns, and you don't have solaris support (something like
$2k/socket/year, which is the only legal way to license solaris for
non-oracle hardware), why not use openindiana?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about btrfs and zfs

2011-11-11 Thread Fajar A. Nugraha
On Sat, Nov 12, 2011 at 9:25 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Linder, Doug

 All technical reasons aside, I can tell you one huge reason I love ZFS,
 and it's
 one that is clearly being completely ignored by btrfs: ease of use.  The
 zfs
 command set is wonderful and very English-like (for a unix command set).
 It's simple, clear, and logical.  The grammar makes sense.  I almost never
 have
 to refer to the man page.  The last time I looked, the commands for btrfs
 were the usual incomprehensible gibberish with a thousand squiggles and
 numbers.  It looked like a real freaking headache, to be honest.

 Maybe you're doing different things from me.  btrfs subvol create, delete,
 snapshot, mkfs, ...
 For me, both ZFS and BTRFS have normal user interfaces and/or command
 syntax.

the grammatically-correct syntax would be btrfs create subvolume, but
the current tool/syntax is an improvement over the old ones (btrfsctl,
btrfs-vol, etc).



 1) Change the stupid name.   Btrfs is neither a pronounceable word nor a
 good acromyn.  ButterFS sounds stupid.  Just call it BFS or something,
 please.

 LOL.  Well, for what it's worth, there are three common pronunciations for
 btrfs.  Butterfs, Betterfs, and B-Tree FS (because it's based on b-trees.)

... as long as you don't call it BiTterly bRoken FS :)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle releases Solaris 11 for Sparc and x86 servers

2011-11-09 Thread Fajar A. Nugraha
On Thu, Nov 10, 2011 at 6:54 AM, Fred Liu fred_...@issi.com wrote:


... so when will the zfs-related improvements make it to the solaris derivatives :D ?

-- 
FAN
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-22 Thread Fajar A. Nugraha
On Sat, Oct 22, 2011 at 11:36 AM, Paul Kraus p...@kraus-haus.org wrote:
 Recently someone posted to this list of that _exact_ situation, they loaded
 an OS to a pair of drives while a pair of different drives containing an OS
 were still attached. The zpool on the first pair ended up not being able to
 be imported, and were corrupted. I can post more info when I am back in the
 office on Monday.

That is nasty :P

I wonder if Darik's approach for zfsonlinux is better. In Ubuntu's
(currently unofficial) zfs root support, the startup script
force-imports rpool (or whatever pool the user explicitly specifies on
the kernel command line), and drops to a rescue shell if there is more
than one pool with the same name. This means:

- no problem with pools previously imported on another system
- no corruption due to a duplicate pool name; when that happens the
user needs to manually take action to import the correct pool by id
(see the example below)
- the other pool remains untouched, and (if necessary) the user can
reimport it under a different name
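
Recovering from the duplicate-name case then looks roughly like this (the
numeric ids are hypothetical; zpool import with no arguments prints them):

# zpool import                           (lists importable pools and their ids)
# zpool import 1234567890123456 rpool    (import the one you want, by id)
# zpool import 6789012345678901 oldroot  (or keep the other pool under a new name)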

-- 
Fajar


 On Friday, October 21, 2011, Fred Liu fred_...@issi.com wrote:

 3. Do NOT let a system see drives with more than one OS zpool at the
 same time (I know you _can_ do this safely, but I have seen too many
 horror stories on this list that I just avoid it).


 Can you elaborate #3? In what situation will it happen?


 Thanks.

 Fred

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Fajar A. Nugraha
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih albert.s...@obspm.fr wrote:
  Any advise about the RAM I need on the server (actually one MD1200 so 
  12x2To disk)

 The more the better :)

 Well, my employer is not so rich.

 It's first time I'm going to use ZFS on FreeBSD on production (I use on my
 laptop but that's mean nothing), so what's in your opinion the minimum ram
 I need ? Is something like 48 Go is enough ?

If you don't use dedup (recommended), that should be more than enough.

If you use dedup, search the zfs-discuss archives for the DDT size calculation methods that have been posted.
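
As a rough back-of-the-envelope sketch, using the ~320 bytes per unique
block figure that is usually quoted on this list: 12 x 2 TB is about 24 TB
raw; at a 128 KB average block size that is roughly 200 million blocks, or
around 60 GB of dedup table. Only a fraction of that would fit in 48 GB of
RAM alongside the rest of the ARC, which is why dedup is best avoided here
unless you also add a large L2ARC.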

For comparison purposes, you could also look at Oracle's zfs storage
appliance configuration:
https://shop.oracle.com/pls/ostore/f?p=dstore:product:3479784507256153::NO:RP,6:P6_LPI,P6_PROD_HIER_ID:424445158091311922637762,114303924177622138569448

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-10-19 Thread Fajar A. Nugraha
On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov jimkli...@cos.ru wrote:
 2011-10-13 13:27, Darren J Moffat пишет:

 On 10/13/11 09:27, Fajar A. Nugraha wrote:

 On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
 darr...@opensolaris.org wrote:

 Have you looked at the time-slider functionality that is already in
 Solaris
 ?

 Hi Darren. Is it available for Solaris 10? I just installed Solaris 10
 u10 and couldn't find it.

 No it is not.

 Is there a reference on how to get/install this functionality on Solaris
 10?

 No because it doesn't exist on Solaris 10.


 Well, just for the sake of completeness: most of our systems are using
 zfs-auto-snap service, including Solaris 10 systems datiing from Sol10u6.
 Installation of relevant packages from SXCE (ranging snv_117-snv_130)
 was trivial, but some script-patching was in order. I think, replacement
 of the ksh interpreter to ksh93.

Yes, I remembered reading about that.


 I haven't used the GUI part and I guess my experience relates to the
 script-based zfs-auto-snap (before it was remade into current binary
 form, or so I read). We kind of got stuck with SXCE systems which
 still just work finely ;)

 The point is, even if unsupported (may be a problem in OP's case)
 it is likely that one or another version of zfs-auto-snap or TimeSlider
 can be made to work in Sol10 with little effort.

To be honest, if it's just a matter of getting it to work, I'd just make
my own. Or run SE with a Solaris 10 zone inside it, with SE managing
time-slider/replication and the S10 zone running the application.

But for this particular case support is essential. That's why I
mentioned earlier that if I can't get a supported solution for this setup
(at a reasonable price), storage-based replication would have to do.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-19 Thread Fajar A. Nugraha
On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih albert.s...@obspm.fr wrote:
 Hi

 Sorry to cross-posting. I don't knwon which mailing-list I should post this
 message.

 I'll would like to use FreeBSD with ZFS on some Dell server with some
 MD1200 (classique DAS).

 When we buy a MD1200 we need a RAID PERC H800 card on the server so we have
 two options :

        1/ create a LV on the PERC H800 so the server see one volume and put
        the zpool on this unique volume and let the hardware manage the
        raid.

        2/ create 12 LV on the perc H800 (so without raid) and let FreeBSD
        and ZFS manage the raid.

 which one is the best solution ?

Neither.

The best solution is to find a controller which can pass the disks
through as JBOD (not encapsulated as virtual disks). Failing that, I'd
go with (1) (though others might disagree).


 Any advise about the RAM I need on the server (actually one MD1200 so 12x2To 
 disk)

The more the better :)

Just make sure you do NOT use dedup until you REALLY know what you're
doing (which usually means buying lots of RAM and SSD for L2ARC).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-19 Thread Fajar A. Nugraha
On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser dave@alfordmedia.com wrote:
 On 10/19/11 9:14 AM, Albert Shih albert.s...@obspm.fr wrote:

When we buy a MD1200 we need a RAID PERC H800 card on the server

 No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
 I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
 it presents the individual disks and ZFS can handle redundancy and
 recovery.

Exactly, thanks for suggesting an exact controller model that can
present disks as JBOD.

With hardware RAID, you pretty much rely on the controller to behave
nicely, which is why I suggested simply creating one big volume for
zfs to use (so you only use features like snapshots, clones, etc., but
don't get zfs's self-healing). Again, others might (and do) disagree
and suggest creating a volume per individual disk (even when you're
still relying on the hardware RAID controller). But ultimately there's
no question that the best possible setup is to present the disks as
JBOD and let zfs handle them directly.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Fajar A. Nugraha
On Tue, Oct 18, 2011 at 8:38 PM, Gregory Shaw greg.s...@oracle.com wrote:
 I came to the conclusion that btrfs isn't ready for prime time.  I'll 
 re-evaluate as development continues and the missing portions are provided.

For someone with an @oracle.com email address, you could probably
arrive at that conclusion faster by asking Chris Mason directly :)


 I'm seriously thinking about converting the Linux system in question into a 
 FreeBSD system so that I can use ZFS.

FreeBSD? Not Solaris? Hmmm ... :)

Anyway, the way I see it, Linux now has more choices. You can try out
either btrfs or zfs (even without a separate /boot) with a few tweaks.
Neither of them is labeled production-ready, but that doesn't stop some
people (who, presumably, know what they're doing) from putting them in
production.

I'm still hoping Oracle will release source updates to zfs soon so
other OSes can also use its new features (e.g. encryption).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Fajar A. Nugraha
On Tue, Oct 18, 2011 at 7:18 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 I recently put my first btrfs system into production.  Here are the
 similarities/differences I noticed different between btrfs and zfs:

 Differences:
 * Obviously, one is meant for linux and the other solaris (etc)
 * In btrfs, there is only raid1.  They don't have raid5, 6, etc yet.
 * In btrfs, snapshots are read-write.  Cannot be made read-only without
 quotas, which aren't implemented yet.

Minor correction: btrfs supports read-only snapshots. It's available on
vanilla Linux, but IIRC it requires an (unofficial) updated btrfs-progs
(which basically tracks patches sent but not yet integrated into the
official tree). It works, though.
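
For example (paths are just an illustration), with a new enough kernel
and btrfs-progs:

# btrfs subvolume snapshot -r /mnt/@ /mnt/@2011-10-18-ro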

 * zfs supports quotas.  Also, by default creates snapshots read-only but
 could be made read-write by cloning.

There are proposed patches for btrfs quota support, but the kernel
part has not been accepted upstream.

 * In btrfs, there is no equivalent or alternative to zfs send | zfs
 receive

Planned. No actual working implementation yet.

 * In zfs, you have the hidden .zfs subdir that contains your snapshots.
 * In btrfs, your snapshots need to be mounted somewhere, inside the same
 filesystem.  So in btrfs, you do something like this...  Create a
 filesystem, then create a subvol called @ and use it to store all your
 work.  Later when you create snapshots, you essentially duplicate that
 subvol @2011-10-18-07-40-00 or something.

Yes. Basically btrfs treats a subvolume and a snapshot in the same way.

 * Both do compression.  By default zfs compression is fast but you could use
 zlib if you want.  By default btrfs uses zlib, but you could opt for fast if
 you want.

lzo is planned to be the default in the future.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-10-13 Thread Fajar A. Nugraha
On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
darr...@opensolaris.org wrote:
 Have you looked at the time-slider functionality that is already in Solaris
 ?

Hi Darren. Is it available for Solaris 10? I just installed Solaris 10
u10 and couldn't find it.


 There is a GUI for configuration of the snapshots

the screenshots that I can find all refer to opensolaris

 and time-slider can be
 configured to do a 'zfs send' or 'rsync'.  The GUI doesn't have the ability
 to set the 'zfs recv' command but that is set one-time in the SMF service
 properties.

Is there a reference on how to get/install this functionality on Solaris 10?

Thanks,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] commercial zfs-based storage replication software?

2011-09-30 Thread Fajar A. Nugraha
Hi,

Does anyone know a good commercial zfs-based storage replication
software that runs on Solaris (i.e. not an appliance, not another OS
based on solaris)?
Kinda like Amanda, but for replication (not backup).

Thanks,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] commercial zfs-based storage replication software?

2011-09-30 Thread Fajar A. Nugraha
On Fri, Sep 30, 2011 at 7:22 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha

 Does anyone know a good commercial zfs-based storage replication
 software that runs on Solaris (i.e. not an appliance, not another OS
 based on solaris)?
 Kinda like Amanda, but for replication (not backup).

 Please define replication, not backup?  To me, your question is unclear what
 you want to accomplish.  What don't you like about zfs send | zfs receive?

Basically I need something that does zfs send | zfs receive, plus a
GUI/web interface to configure stuff (e.g. which fs to back up,
schedule, etc.), support, and a price tag.

Believe it or not the last two requirements are actually important
(don't ask :P ), and are the main reasons why I can't use the automated
send/receive scripts already available on the internet.
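
(The core of those scripts is trivial, by the way; roughly something
like the following, with dataset and host names as placeholders:

# zfs snapshot -r tank/data@2011-09-30
# zfs send -R -i tank/data@2011-09-29 tank/data@2011-09-30 | \
      ssh dr-server zfs receive -Fdu tank

It's the scheduling, monitoring, GUI and vendor support around that
pipeline that I'm trying to buy.)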

CMIIW, Amanda can use zfs send, but it only stores the resulting
stream somewhere, while the requirement here is that the send stream
must be received on a different server (e.g. a DR site) and be
accessible there.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-27 Thread Fajar A. Nugraha
On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 When a vdev resilvers, it will read each slab of data, in essentially time
 order, which is approximately random disk order, in order to reconstruct the
 data that must be written on the resilvering device.  This creates two
 problems, (a) Since each disk must fetch a piece of each slab, the random
 access time of the vdev as a whole is approximately the random access time
 of the slowest individual device.  So the more devices in the vdev, the
 worse the IOPS for the vdev...  And (b) the more data slabs in the vdev, the
 more iterations of random IO operations must be completed.

 In other words, during resilvers, you're IOPS limited.  If your pool is made
 of all SSD's, then problem (a) is basically nonexistent, since the random
 access time of all the devices are equal and essentially zero.  Problem (b)
 isn't necessarily a problem...  It's like, if somebody is giving you $1,000
 for free every month and then they suddenly drop down to only $500, you
 complain about what you've lost.   ;-)  (See below.)

If you regularly spend all of the given $1,000, then you're going to
complain hard when it suddenly drops to $500.

 So again:  Not a problem if you're making your pool out of SSD's.

Big problem if your system is already using most of the available IOPS
during normal operation.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Beginner Question: Limited conf: file-based storage pools vs. FSs directly on rpool

2011-09-21 Thread Fajar A. Nugraha
2011/9/22 Ian Collins i...@ianshome.com

 The OS is installed and working, and rpool is mirrored on the two disks.

 The question is: I want to create some ZFS file systems for sharing them via 
 CIFS. But given my limited configuration:

 * Am I forced to create the new filesystems directly on rpool?

 You're not forced, but in your situation it is the only practical option.

It might be easier if you put them under a sub-filesystem, e.g.:
rpool/share
rpool/share/share1
rpool/share/share2

That way, if someday you have more disks, you can just use a recursive
zfs snapshot and send to copy the data to the new pool (see the sketch
below). Easier than copying the datasets one by one.
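
A minimal sketch (pool and share names are just an example):

# zfs create rpool/share
# zfs create rpool/share/share1
# zfs create rpool/share/share2
...
# zfs snapshot -r rpool/share@migrate
# zfs send -R rpool/share@migrate | zfs receive -d newpool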

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS

2011-09-13 Thread Fajar A. Nugraha
On Tue, Sep 13, 2011 at 3:48 PM, cephas maposah mapo...@gmail.com wrote:
 hello team
 i have an issue with my ZFS system, i have 5 file systems and i need to take
 a daily backup of these onto tape. how best do you think i should do these?
 the smallest filesystem is about 50GB

It depends.

You can back up the files (so it'd be the same whatever filesystem the
files were on), or you can back up the send/recv stream.

 here is what i have been doing i take snapshots of the 5 file systems, i zfs
 send these into a directory gzip the the files and then tar them onto tape.
 this takes a considerable amount of time.
 my question is there a faster and better way of doing this?

Yes, that sucks, as you need to write everything to a temporary file
first. Is your tape a real tape or a VTL? If it's a VTL, it might be
easier to just use it as disk, so you can write the compressed zfs
stream there directly without needing a temporary file. Also, lzop is
faster than gzip (at the cost of less compression), so you might want
to try that as well.
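
As a rough sketch (the tape device name is just an example), you can
pipe straight to the tape drive and skip the temporary files entirely:

# zfs snapshot pool/fs1@daily-20110913
# zfs send pool/fs1@daily-20110913 | lzop -c | dd of=/dev/rmt/0cbn obs=1024k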

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to mount zfs file system..pl help

2011-08-12 Thread Fajar A. Nugraha
On Fri, Aug 12, 2011 at 3:05 PM, Vikash Gupta vika...@cadence.com wrote:
 I use df command and its not showing the zfs file system in the list.

 zfs mount -a does not return any error.

First of all, please check whether you're posting to the right place.
zfs-discuss@opensolaris.org, as the name implies, is mostly relevant for
discussion of zfs on Solaris and derivatives. In your first post you
wrote:

# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010
x86_64 x86_64 x86_64 GNU/Linux

# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1

So I'm guessing you use an OLD version of zfsonlinux. Go to
http://zfsonlinux.org/ and see the resources there (including the
correct mailing list).

Second, IIRC that version of zfs does not support zpl (the
filesystem part, which is what is created when you run zfs create
without -V). It only supports zvols. Again, the web page and (the
right) mailing list (including its archive) have more info, so start
from there.

-- 
Fajar


 Rgds
 Vikash

 -Original Message-
 From: Ian Collins [mailto:i...@ianshome.com]
 Sent: Friday, August 12, 2011 1:24 PM
 To: Vikash Gupta
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] unable to mount zfs file system..pl help

  On 08/12/11 04:42 PM, Vikash Gupta wrote:
 Hi Ian,

 It's there in the subject line.

 I am unable to see the zfs file system in df output.

 How did you mount it and did it fail?  As I said, what commands did you
 use and what errors you get?

 What is the output of zfs mount -a?

 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Fajar A. Nugraha
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle lanky_doo...@hotmail.com wrote:
 Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a 
 parameter of the command or the slice of a disk - none of my 'data' disks 
 have been 'configured' yet. I wanted to ID them before adding them to pools.

For starters, try looking at what files are inside /dev/dsk/. There
shouldn't be a plain c9t7d0 file/symlink.

Next, Googling solaris disk notation turns up this entry:
http://multiboot.solaris-x86.org/iv/3.html. In short, for the whole disk
you'd need /dev/dsk/c9t7d0p0.
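
For example (assuming you just want to identify a drive by watching its
activity LED), a harmless read-only dd against the p0 device would be
something like:

# dd if=/dev/dsk/c9t7d0p0 of=/dev/null bs=1024k count=1000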

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Fajar A. Nugraha
On Wed, Aug 3, 2011 at 7:02 AM, Nomen Nescio nob...@dizum.com wrote:
 I installed a Solaris 10 development box on a 500G root mirror and later I
 received some smaller drives. I learned from this list its better to have
 the root mirror on the smaller small drives and then create another mirror
 on the original 500G drives so I copied everything that was on the small
 drives onto the 500G mirror to free up the smaller drives for a new install.

 After my install completes on the smaller mirror, how do I access the 500G
 mirror where I saved my data? If I simply create a tank mirror using those
 drives will it recognize there's data there and make it accessible? Or will
 it destroy my data? Thanks.

CREATING a pool on the drive will definitely destroy the data.

IMHO the easiest way is to:
- boot using live CD
- import the 500G pool, renaming it to something else other than rpool
(e.g. zpool import rpool datapool)
- export the pool
- install on the new disk

Also, there's actually a way to copy everything that was installed on
the old pool to the new pool WITHOUT having to reinstall from scratch
(e.g. so that your customizations stay the same), but depending on
your level of expertise it might be harder. See
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Fajar A. Nugraha
On Wed, Aug 3, 2011 at 1:10 PM, Fajar A. Nugraha l...@fajar.net wrote:
 After my install completes on the smaller mirror, how do I access the 500G
 mirror where I saved my data? If I simply create a tank mirror using those
 drives will it recognize there's data there and make it accessible? Or will
 it destroy my data? Thanks.

 CREATING a pool on the drive will definitely destroy the data.

 IMHO the easiest way is to:
 - boot using live CD
 - import the 500G pool, renaming it to something else other than rpool
 (e.g. zpool import rpool datapool)
 - export the pool
 - install on the new disk

... and just in case it wasn't obvious already, after the installation
is complete, you can simply do a zpool import datapool (or whatever
you renamed the old pool to) to access the old data.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-02 Thread Fajar A. Nugraha
On Wed, Aug 3, 2011 at 8:38 AM, Anonymous Remailer (austria)
mixmas...@remailer.privacy.at wrote:

 Hi Roy, things got alot worse since my first email. I don't know what
 happened but I can't import the old pool at all. It shows no errors but when
 I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
 nv) which looks like is fixed by patch 6801926 which is applied in Solaris
 10u9. But I cannot boot update 9 on this box! I tried Solaris Express, none
 of those will run right either. They all go into maintenance mode. The last
 thing I can boot is update 8 and that is the one with the ZFS bug.

If they go into maintenance mode but can recognize the disks, you
should still be able to do zfs stuff (zpool import, etc). If you're
lucky you'll only be missing the GUI.


 I have 200G of files I deliberately copied to this new mirror and now I
 can't get at them! Any ideas?

Another thing you can try (albeit more complex) is to use another OS
(an install, or even a Linux Live CD), install VirtualBox or similar
on it, pass the disks through as raw vmdks, and run Solaris Express in
the VM. You should be able to at least import the pool and recover the
data.
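
A minimal sketch of the raw-disk part (file and device names are
hypothetical, shown here with Linux naming):

# VBoxManage internalcommands createrawvmdk \
      -filename /vm/olddisk1.vmdk -rawdisk /dev/sdb

then attach olddisk1.vmdk to the Solaris Express VM like any other disk.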

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] booting from ashift=12 pool..

2011-07-29 Thread Fajar A. Nugraha
On Fri, Jul 29, 2011 at 4:57 PM, Hans Rosenfeld hans.rosenf...@amd.com wrote:
 On Fri, Jul 29, 2011 at 01:04:49AM -0400, Daniel Carosone wrote:
 .. evidently doesn't work.  GRUB reboots the machine moments after
 loading stage2, and doesn't recognise the fstype when examining the
 disk loaded from an alernate source.

 This is with SX-151.  Here's hoping a future version (with grub2?)
 resolves this, as well as lets us boot from raidz.

 Just a note for the archives in case it helps someone else get back
 the afternoon I just burnt.

 I've noticed this behaviour this morning and have been debugging it
 since. I found out that, for some unknown reason, grub fails to get the
 disk geometry, assumes 0 sectors/track and then does a divide-by-zero.

 I don't think this is a zfs issue.

If the problem is in the zfs code in grub/grub2, then it should be a zfs issue, right?

Anyway, for comparison purposes, with Ubuntu + grub2 + zfsonlinux
(which can force ashift at pool creation time) + zfs root, grub2
won't even install on pools with ashift=12, while it works just fine
with ashift=9. There were also booting problems after you'd scrubbed
rpool.
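
For reference, forcing the 4KB-sector assumption on zfsonlinux is just a
pool creation option (device names below are placeholders):

# zpool create -o ashift=12 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB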

Does the zfs code for grub/grub2 also depend on Oracle releasing
updates, or is it simply a matter of no one with enough skill having
looked into it yet?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-26 Thread Fajar A. Nugraha
On Tue, Jul 26, 2011 at 3:28 PM,  casper@oracle.com wrote:


Bullshit. I just got a OCZ Vertex 3, and the first fill was 450-500MB/s.
Second and sequent fills are at half that speed. I'm quite confident
that it's due to the flash erase cycle that's needed, and if stuff can
be TRIM:ed (and thus flash erased as well), speed would be regained.
Overwriting an previously used block requires a flash erase, and if that
can be done in the background when the timing is not critical instead of
just before you can actually write the block you want, performance will
increase.

 I think TRIM is needed both for flash (for speed) and for
 thin provisioning; ZFS will dirty all of the volume even though only a
 small part of the volume is used at any particular time.  That makes ZFS
 more or less unusable with thin provisioning; support for TRIM would fix
 that if the underlying volume management supports TRIM.

 Casper

Shouldn't modern SSD controllers be smart enough already to:
- know that if there's a request to overwrite a sector, the old data on
that sector is no longer needed
- allocate a clean sector from the pool of available sectors (part of
the wear-leveling mechanism)
- clear the old sector and add it back to the pool (possibly as a
background operation)

That seems to be the case with SandForce-based SSDs, and it would pretty
much let the SSD work just fine even without TRIM (like when used
under HW RAID).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding mirrors to an existing zfs-pool

2011-07-26 Thread Fajar A. Nugraha
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
consult...@hennig-consulting.com wrote:
 G'Day,

 - zfs pool with 4 disks (from Clariion A)
 - must migrate to Clariion B (so I created 4 disks with the same size,
  avaiable for the zfs)

 The zfs pool has no mirrors, my idea was to add the new 4 disks from
 the Clariion B to the 4 disks which are still in the pool - and later
 remove the original 4 disks.

 I only found in all example how to create a new pool with mirrors
 but no example how to add to a pool without mirrors a mirror disk
 for each disk in the pool.

 - is it possible to add disks to each disk in the pool (they have different
  sizes, so I have exact add the correct disks form Clariion B to the
  original disk from Clariion B)

From man zpool:

    zpool attach [-f] pool device new_device

        Attaches new_device to an existing zpool device. The existing
        device cannot be part of a raidz configuration. If device is not
        currently part of a mirrored configuration, device automatically
        transforms into a two-way mirror of device and new_device. If
        device is part of a two-way mirror, attaching new_device creates
        a three-way mirror, and so on. In either case, new_device begins
        to resilver immediately.

        -f    Forces use of new_device, even if it appears to be in use.
              Not all devices can be overridden in this manner.

 - can I later remove the disks from the Clariion A, pool is intact, user
  can work with the pool

    zpool detach pool device

        Detaches device from a mirror. The operation is refused if there
        are no other valid replicas of the data.



If you're using raidz, you can't use zpool attach. Your best bet in
this case is zpool replace.

    zpool replace [-f] pool old_device [new_device]

        Replaces old_device with new_device. This is equivalent to
        attaching new_device, waiting for it to resilver, and then
        detaching old_device.

        The size of new_device must be greater than or equal to the
        minimum size of all the devices in a mirror or raidz
        configuration.

        new_device is required if the pool is not redundant. If
        new_device is not specified, it defaults to old_device. This form
        of replacement is useful after an existing disk has failed and
        has been physically replaced. In this case, the new disk may have
        the same /dev path as the old device, even though it is actually
        a different disk. ZFS recognizes this.

        -f    Forces use of new_device, even if it appears to be in use.
              Not all devices can be overridden in this manner.
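
So, assuming the pool is a plain (non-raidz) stripe named tank and the
device names below are just placeholders, the migration is one of:

# attach a Clariion B disk to each Clariion A disk, wait for the
# resilver to finish, then detach the old disk:
zpool attach tank c1t0d0 c2t0d0
zpool detach tank c1t0d0

# or do both steps at once, per disk:
zpool replace tank c1t0d0 c2t0d0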


-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover raidz from fried server ??

2011-07-22 Thread Fajar A. Nugraha
On Wed, Jul 20, 2011 at 1:46 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net 
wrote:
 Could you try to just boot up fbsd or linux on the box to see if zfs (native 
 or fuse-based, respecively) can see the drives?

Yup, that seems to be the best idea.

Assuming that all those drives are the original raidz drives, and the
pool was originally version 28 or lower, zfsonlinux should be able to
see it. You can test it using an Ubuntu Live CD and downloading/compiling
the additional zfs module. If you're interested, see
https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem
, stop just before Step 2, and try to do zpool import.

Then again, vbox on top of Windows SHOULD detect the disks as well. Are
you sure you exported the WHOLE disk, and not a partition? If you still
have that setup, try again, but this time test with different slices
and partitions (i.e. run zdb -l against all of /dev/dsk/c7t6d0*, or
whatever your original raidz disk is now recognized as).

-- 
Fajar



 roy

 - Original Message -
 root@san:~# zdb -l /dev/dsk/c7t6d0s0
 cannot open '/dev/rdsk/c7t6d0s0': I/O error
 root@san:~# zdb -l /dev/dsk/c7t6d0p1
 
 LABEL 0
 
 failed to unpack label 0
 
 LABEL 1
 
 failed to unpack label 1
 
 LABEL 2
 
 failed to unpack label 2
 
 LABEL 3
 
 failed to unpack label 3
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover raidz from fried server ??

2011-07-19 Thread Fajar A. Nugraha
On Tue, Jul 19, 2011 at 4:29 PM, Brett repudi...@gmail.com wrote:
 Ok, I went with windows and virtualbox solution. I could see all 5 of my 
 raid-z disks in windows. I encapsulated them as entire disks in vmdk files 
 and subsequently offlined them to windows.

 I then installed a sol11exp vbox instance, attached the 5 virtualized disks 
 and can see them in my sol11exp (they are disks #1-#5).

 root@san:~# format
 Searching for disks...done


 AVAILABLE DISK SELECTIONS:
       0. c7t0d0 ATA    -VBOX HARDDISK  -1.0  cyl 26105 alt 2 hd 255 sec 63
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c7t2d0 ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c7t3d0 ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@3,0
       3. c7t4d0 ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@4,0
       4. c7t5d0 ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@5,0
       5. c7t6d0 ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@6,0
 Specify disk (enter its number):

 Great I thought, all i need to do is import my raid-z.
 root@san:~# zpool import
 root@san:~#

 Damn, that would have been just too easy I guess. Help !!!

 How do i recover my data? I know its still hiding on those disks. Where do i 
 go from here?

What does zdb -l /dev/dsk/c7t6d0s0 or zdb -l /dev/dsk/c7t6d0p1 show?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-18 Thread Fajar A. Nugraha
On Mon, Jul 18, 2011 at 3:28 PM, Tiernan OToole lsmart...@gmail.com wrote:
 Ok, so, taking 2 300Gb disks, and 2 500Gb disks, and creating an 800Gb
 mirrored striped thing is sounding like a bad idea... what about just
 creating a pool of all disks, without using mirrors? I seen something called
 copies, which if i am reading correctly, will make sure a number of copies
 of a file exist... Am i reading that correctly? If this does work the way i
 think it works, then taking all 4 disks, and making one large 1.6Tb pool,
 setting copies to 2, should, in theory, create a poor mans pool with
 striping, right?

Step back a moment.

What are your priorities? What is the most important thing for you? Is
it space? Is it data protection? Is it something else?
Once you determine that, it's easier to come up with a reasonable setup.

Back to your original question, I'd like to note some things.

First of all, using USB disks for permanent storage is a bad idea. Go
for e-sata instead (http://en.wikipedia.org/wiki/Serial_ata#eSATA). It
eliminates the overhead caused by the USB-to-[P/S]ATA bridge. You can
get something like an e-sata bracket (if your controller supports port
multipliers) or an e-sata PCI controller, plus an e-sata enclosure with
2 or 4 drive bays (depending on your needs).

Second, using copies=2 + stripe is, again, a bad idea. While
copies=2 can protect you from something like a bad sector, it will NOT
protect you from drive failure. So when one drive breaks, your pool will
still be inaccessible. Stick with a stripe of mirrors instead. Go with
what Edward suggested: rearrange the disks, and create a stripe of
(mirror of 500G internal + 500G external) + (mirror of 300G internal +
300G external).

Another option, if you go with the external enclosure route, is to just
put all the disks in the external enclosure and go with the configuration
Jim suggested (2 x 200GB mirror, plus 4 x 300GB mirror/raidz1).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Fajar A. Nugraha
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov jimkli...@cos.ru wrote:
 2011-07-12 9:06, Brandon High пишет:

 On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproulespr...@omniti.com  wrote:

 Interesting-- what is the suspected impact of not having TRIM support?

 There shouldn't be much, since zfs isn't changing data in place. Any
 drive with reasonable garbage collection (which is pretty much
 everything these days) should be fine until the volume gets very full.

 I wonder if in this case it would be beneficial to slice i.e. 90%
 of an SSD for use in ZFS pool(s) and leave the rest of the
 disk unassigned to any partition or slice? This would reserve
 some sectors as never-written-to-by-OS. Would this ease the
 life for SSD devices without TRIM between them ans the OS?

Possibly so. That is, assuming your SSD has a controller (e.g.
SandForce-based) that's able to do some kind of wear-leveling. Those
controllers maximize the number of unused sectors by using compression,
dedup, and reserving some space internally, but if you keep some space
unused it should add to the number of free sectors (thus enabling the
controller to rewrite the same sectors less often, prolonging the disk's
lifetime).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-10 Thread Fajar A. Nugraha
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
 The `lofiadm' man page describes how to export a file as a block
 device and then use `mkfs -F pcfs' to create a FAT filesystem on it.

 Can't I do the same thing by first creating a zvol and then creating
 a FAT filesystem on it?

Seems not.

  Nothing I've tried seems to work.  Isn't the
 zvol just another block device?

That's the problem: zvol is just another block device.

Some solaris tools (like fdisk, or mkfs -F pcfs) need disk geometry
to function properly, and zvols don't provide that. If you want to use
zvols with such tools, the easiest way would be to use lofi, or to
export the zvol as an iscsi share and import it again.

For example, if you have a 10MB zvol and use lofi, fdisk would show
these geometry

 Total disk size is 34 cylinders
 Cylinder size is 602 (512 byte) blocks

... which will then be used if you run mkfs -F pcfs -o
nofdisk,size=20480. Without lofi, the same command would fail with

Drive geometry lookup (need tracks/cylinder and/or sectors/track:
Operation not supported
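
So, end to end, the lofi route is roughly (names and sizes are just an
example):

# zfs create -V 10m mypool/fatvol
# lofiadm -a /dev/zvol/dsk/mypool/fatvol
/dev/lofi/1
# mkfs -F pcfs -o nofdisk,size=20480 /dev/rlofi/1
# mount -F pcfs /dev/lofi/1 /mnt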

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changed to AHCI, can not access disk???

2011-07-05 Thread Fajar A. Nugraha
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Orvar Korvar

 Here is my problem:
 I have an 1.5TB disk with OpenSolaris (b134, b151a) using non AHCI.
 I then changed to AHCI in BIOS, which results in severe problems: I can
 not
 boot the system.

 I suspect the problem is because I changed to AHCI.

 This is normal, no matter what OS you have.  It's the hardware.

 If you start using a disk in non-AHCI mode, you must always continue to use
 it in non-AHCI mode.  If you switch, it will make the old data inaccessible.

Really? Old data inaccessible?
These two links seem to say that the data is there, and it's only a
matter of whether the correct drivers are loaded or not:
http://en.wikipedia.org/wiki/Ahci
http://support.microsoft.com/kb/922976

So the question is, does similar workaround exists for (open)solaris?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 700GB gone? zfs list and df differs!

2011-07-04 Thread Fajar A. Nugraha
On Mon, Jul 4, 2011 at 5:19 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
 The problem is more clearly stated here. Look, 700GB is gone (the correct 
 number is 620GB)!

Somehow you remind me of the story of the boy who cried wolf (Look,
look! The wolf ate my disk space!) :P


 First I do zfs list onto TempStorage/Backup which reports 800GB. This is 
 correct.

 Then I do df -h which reports only 180GB, which is not correct. So, it 
 should be 800GB of data, but df reports only 180GB. This means 620GB is 
 gone. Where is it? I know there is 620GB worth of data, which I can not 
 access. Where is my data, and how can I access it?

The behaviour is perfectly normal.





 root@solaris:/mnt/TempStorage/Stuff# zfs list
 NAME                      USED  AVAIL  REFER  MOUNTPOINT
 TempStorage               916G  45,1G  37,3G  /mnt/TempStorage
 TempStorage/Backup        799G  45,1G   177G  /mnt/TempStorage/Backup   
  OBS! 800GB!
 TempStorage/EmmasFolder  78,6G  45,1G  78,6G  /mnt/TempStorage/EmmasFolder
 TempStorage/Stuff        1,08G  45,1G  1,08G  /mnt/TempStorage/Stuff



 root@solaris:/mnt/TempStorage/Stuff# df -h
 Filesystem            Size  Used Avail Use% Mounted on
 TempStorage            83G   38G   46G  46% /mnt/TempStorage
 TempStorage/Backup    223G  178G   46G  80% /mnt/TempStorage/Backup    
 -- only 200GB!!!
 TempStorage/EmmasFolder
                      124G   79G   46G  64% /mnt/TempStorage/EmmasFolder
 TempStorage/Stuff      47G  1,1G   46G   3% /mnt/TempStorage/Stuff

I can't recall the reference off the top of my head, but the short version is:
- in zfs, free space is shared (unless you do some fancy stuff like reservations)
- Avail, as reported by df, will match Avail, as reported by zfs list.
- Used, as reported by df, will match Used, as reported by zfs
list. Well, close anyway. See
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq/#HWhydoesdu1reportdifferentfilesizesforZFSandUFSWhydoesntthespaceconsumptionthatisreportedbythedfcommandandthezfslistcommandmatch
- zfs will show a fake number in df's Size, roughly the sum of
Used + Avail for that particular filesystem. fake, as in it is
not the actual pool size.

So in short, for filesystems/datasets in the same pool, df will show the
same Avail, but different Used and Size.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 700GB gone? zfs list and df differs!

2011-07-04 Thread Fajar A. Nugraha
On Mon, Jul 4, 2011 at 5:45 PM, Fajar A. Nugraha w...@fajar.net wrote:
 - Used, as reported by df, will match Used, as reported by zfs
 list.

Sorry, it should be

Used, as reported by df, will match Refer, as reported by zfs list.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Fajar A. Nugraha
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith smith...@llnl.gov wrote:
 Generally, the log devices are listed after the pool devices.
 Did this pool have log devices at one time? Are they missing?

 Yes the pool does have logs.  I'll include a zpool status -v below
 from when I'm booted in solaris 10 U9.

I think what Cindy means is: does zpool status on Solaris Express
(when you were having the problem) have the pool devices listed as well?

If not, that would explain the faulted status: zfs can't find the pool
devices. So we need to track down why Solaris can't see them (probably
driver issues).

If it can see the pool devices, then the status of each device as seen
by zfs on Solaris Express would provide some info.

 My sense is that if you have remnants of the same pool name on some of
 your devices but as different pools, then you will see device problems
 like these.

I had a similar case (though my problem was on Linux). In my case the
solution was to rename /etc/zfs/zpool.cache, reboot the server, then
re-import the pool.
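
Roughly (from memory, and the pool name is a placeholder):

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# reboot
...
# zpool import            # scans devices instead of trusting the stale cache
# zpool import mypool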

 Please let me know if you need more info...

If you're still interested in using this pool under Solaris Express,
then we'll need the output of format and zpool import when running
Solaris Express.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread Fajar A. Nugraha
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith smith...@llnl.gov wrote:
 When I tried out Solaris 11, I just exported the pool prior to the install of
 Solaris 11.  I was lucky in that I had mirrored the boot drive, so after I had
 installed Solaris 11 I still had the other disk in the mirror with Solaris 10 
 still
 installed.

 I didn't install any additional software in either environments with regards 
 to
 volume management, etc.

 From the format command, I did remember seeing 60 luns coming from the DDN and
 as I recall I disk see multiple paths as well under Solaris 11.  I think you 
 are
 correct however in that for some reason Solaris 11 could not read the devices.


So you mean the root cause of the problem is that Solaris Express failed
to see the disks? Or are the disks available on Solaris Express as well?

When you boot with the Solaris Express Live CD, what does zpool import show?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Fajar A. Nugraha
On Tue, Jun 14, 2011 at 7:15 PM, Jim Klimov jimkli...@cos.ru wrote:
 Hello,

  A college friend of mine is using Debian Linux on his desktop,
 and wondered if he could tap into ZFS goodness without adding
 another server in his small quiet apartment or changing the
 desktop OS. According to his research, there are some kernel
 modules for Debian which implement ZFS, or a FUSE variant.

  Can anyone comment how stable and functional these are?
 Performance is a secondary issue, as long as it does not
 lead to system crashes due to timeouts, etc. ;)

zfs-fuse has been around for a long time and is quite stable. Ubuntu
natty has it in the universe repository (I don't know about Debian's
repository, but you should be able to use Ubuntu's). It has the
benefits and drawbacks of a fuse implementation (namely: it does not
support zvols).

zfsonlinux is somewhat new, and has some problems relating to memory
management (in some cases arc usage can get very high, and then you'll
see high cpu usage by the arc_reclaim thread). It's not recommended for
a 32bit OS. Being in-kernel, it has the potential to be more stable and
faster than zfs-fuse. It has zvol support. The latest rc version is
somewhat stable for normal use.

Performance-wise, from my tests it can be 4 times slower than ext4
(depending on the load).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Fajar A. Nugraha
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov j...@cos.ru wrote:
 However it seems that there may be some extra data beside the zfs
 pool in the actual volume (I'd at least expect an MBR or GPT, and
 maybe some iSCSI service data as an overhead). One way or another,
 the dcpool can not be found in the physical zfs volume:

 ===
 # zdb -l /dev/zvol/rdsk/pool/dcpool

 
 LABEL 0
 
 failed to unpack label 0

The volume is exported as a whole disk. When given a whole disk, zpool
creates a GPT partition table by default. You need to pass the partition
(not the disk) to zdb.

 So the questions are:

 1) Is it possible to skip iSCSI-over-loopback in this configuration?

Yes. Well, maybe.

In Linux you can use kpartx to make the partitions available. I don't
know the equivalent command in Solaris.
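
On Linux it would be something along the lines of (the device name is
hypothetical, assuming the LUN shows up as /dev/sdb):

# kpartx -av /dev/sdb     # creates /dev/mapper entries for each GPT partition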

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backup complete rpool structure and data to tape

2011-05-12 Thread Fajar A. Nugraha
On Thu, May 12, 2011 at 8:31 PM, Arjun YK arju...@gmail.com wrote:
 Thanks everyone. Your inputs helped me a lot.
 The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
 mount it. But I am not certain if that can cause any issue in the future, or
 that's a right thing to do. Any suggestions ?

The general answer is if it ain't broken, don't fix it.

See 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery
for example of bare metal rpool recovery example using nfs + zfs
send/receive. For your purpose, it's probably easier to just use the
example and have Legato back up the images created from zfs send.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Fajar A. Nugraha
On Fri, Apr 8, 2011 at 2:10 PM, Arjun YK arju...@gmail.com wrote:
 Hello,

 I have a situation where a host, which is booted off its 'rpool', need
 to temporarily import the 'rpool' of another host, edit some files in
 it, and export the pool back retaining its original name 'rpool'. Can
 this be done ?

 Here is what I am trying to do:

 # zpool import -R /a rpool temp-rpool

I think it'd be easier for this purpose if you simply use a live cd /
network boot. You can then just import the pool without having to
change the name.

 # zfs export temp-rpool    -- But, I want to give temp-rpool its
 original name 'rpool' before or after this export.

 I cannot see how this can be achieved.

Live CD / network boot

 So, I decided to live with the
 name 'temp-rpool'. But, is renaming 'rpool'  recommended or supported
 parctice ?

It should work, as long as you edit the necessary files. Whether it's
supported or not is an entirely different problem.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Fajar A. Nugraha
On Fri, Apr 8, 2011 at 2:24 PM, Arjun YK arju...@gmail.com wrote:
 Hi,

 Let me add another query.
 I would assume it would be perfectly ok to choose any name for root
 pool, instead of 'rpool', during the OS install. Please suggest
 otherwise.

Have you tried it?

Last time I tried, the pool name was predetermined; you couldn't change
it. I tried cloning an existing opensolaris installation manually,
changing the pool name in the process. IIRC it worked (sorry for the
somewhat vague details, it was several years ago).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Fajar A. Nugraha
On Fri, Apr 8, 2011 at 2:37 PM, Stephan Budach stephan.bud...@jvm.de wrote:
 You can re-name a zpool at import time by simply issueing:

 zpool import oldpool newpool

Yes, I know :)

The last question from Arjun was whether we can choose any name for the
root pool, instead of 'rpool', during the OS install :D

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:
 What can I do that zpool show new value?

 zpool set autoexpand=on TEST
 zpool set autoexpand=off TEST
  -- richard

 I tried your suggestion, but no effect.

Did you modify the partition table?

IIRC if you pass a DISK to zpool create, it creates a
partition/slice on it, with either an SMI label (the default for rpool)
or EFI (the default for other pools). When the disk size changes (like
when you change the LUN size on the storage node side), you PROBABLY
need to resize the partition/slice as well.

When I tested with openindiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an openindiana reboot
was required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format, and see what the
size of its first slice is.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 7:58 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
 IIRC if you pass a DISK to zpool create, it would create
 partition/slice on it, either with SMI (the default for rpool) or EFI
 (the default for other pool). When the disk size changes (like when
 you change LUN size on storage node side), you PROBABLY need to resize
 the partition/slice as well.

 zpool create won't create a partition or slice, it'll just use the whole 
 drive unless you give it a partition or slice.

Do you have a reference backing that up? It creates an EFI label on my
system when I give it a whole disk.

There's even a warning in the ZFS Troubleshooting Guide regarding the
root pool: make sure you specify a bootable slice and not the whole disk
because the latter may try to install an EFI label
(http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 6:48 PM, For@ll for...@stalowka.info wrote:
 When I test with openindiana b148, simply setting zpool set
 autoexpand=on is enough (I tested with Xen, and openinidiana reboot is
 required). Again, you might need to set both autoexpand=on and
 resize partition slice.

 As a first step, try choosing c2t1d0 in format, and see what the
 size of this first slice is.


 I choosed format and change type to the auto-configure and now I see new
 value if I choosed partition - print, but when I exit from format and
 reboot the old value is stay. How I can write new settings?

Be glad it DIDN'T write the settings :D

My advice to run format was to see whether you already have a
partition/slice on the disk. If it does, format should print the
settings it currently has (and give warnings about some slices being
part of a zfs pool). If it doesn't, then perhaps the disk doesn't have
a partition table.

Changing the partition table/slice to cover the new size is somewhat
tricky, and can easily lead to data loss if NOT done properly. Hopefully
someone else will be able to help you. If you don't know anything
about changing partitions, then don't even attempt it.

So, does the disk originally have a partition/slice or not? If not, then
zpool set autoexpand=on should be enough.

If it does have partitions, you might want to learn how to resize
partitions/slices properly, or better yet try booting with an
openindiana/solaris express live CD (after setting autoexpand=on),
import-export the pool, and see if it can recognize the size change.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Custom configuration for a small computer

2011-04-03 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 4:16 AM, Daxter xovat...@gmail.com wrote:
 My goal is to optimally have two 1TB drives inside of a rather small computer 
 of mine, running Solaris, which can sync with and be a backup of my somewhat 
 portable 2TB drive. Up to this point I have been using the 2TB drive without 
 any redundancy, and I want to change that ASAP.

 Now, as the computer only has two SATA ports, I will have to use a small 
 amount of one of the internal 1TB drives to hold the OS. This brings me to 
 the first issue: how can I do that? When installing Solaris I seemingly 
 successfully created a 30GB partition at the beginning of the drive, and yet 
 when booted rpool claims that it's in /dev/dsk/c10d0s0 (that is, slice vs. 
 partition).

Slices are created inside the partition you choose during installation.

 How am I supposed to then use c10d0p2 for my backup pool?

Just create a new pool for your data

zpool create ...
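
For example, using the partition name from your mail (the pool name is
just a placeholder):

# zpool create backuppool c10d0p2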

zfs can recognize and use any available file or block device that the
OS recognizes (which includes slices, partitions, raw disks, etc.).


 Once I solve the above and successfully set up a striped pool of the two 1TB 
 drives, what is the best method to mirror the two 1TB drives with the 2TB 
 drive? Must I split the 2TB drive into two separate partitions?

 As well, does zpool detach allow me to then connect the detached 2TB drive 
 to another system for regular usage?

If you're intending to use the external drive regularly for other
purposes (not just as a backup that you store somewhere safe), then
it's probably easier to just use snapshots and incremental
send/receive.
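
i.e. something like this whenever the 2TB drive is attached (pool and
snapshot names are just placeholders):

# zfs snapshot -r internal@2011-04-04
# zfs send -R -i internal@2011-03-28 internal@2011-04-04 | zfs receive -Fdu external
# zpool export external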

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] best migration path from Solaris 10

2011-03-22 Thread Fajar A. Nugraha
On Wed, Mar 23, 2011 at 7:33 AM, Jeff Bacon ba...@walleyesoftware.com wrote:
 I've also started conversations with Pogo about offering an
 OpenIndiana
 based workstation, which might be another option if you prefer more of

 Sometimes I'm left wondering if anyone uses the non-Oracle versions for
 anything but file storage... ?

Seeing that userland programs for *Solaris and derivatives (GUI,
daemons, tools, etc.) usually lag behind bleeding-edge Linux
distros (e.g. Ubuntu), with no particular dedicated team working on
improvements there, I'm guessing that's highly unlikely.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] best migration path from Solaris 10

2011-03-20 Thread Fajar A. Nugraha
On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote:
 On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:
 Newer versions of FreeBSD have newer ZFS code.

 Yes, we are at v28 at this point (the lastest open-source version).

 That said, ZFS on FreeBSD is kind of a 2nd class citizen still. [...]

 That's actually not true. There are more FreeBSD committers working on
 ZFS than on UFS.

How is the performance of ZFS under FreeBSD? Is it comparable to that
in Solaris, or still slower due to some needed compatibility layer?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS utility like Filefrag on linux to help analyzing the extents mapping

2011-02-16 Thread Fajar A. Nugraha
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu jeff@oracle.com wrote:
 Hello All,

 I'd like to know if there is an utility like `Filefrag' shipped with 
 e2fsprogs on linux, which is used to fetch the extents mapping info of a 
 file(especially a sparse file) located on ZFS?

Something like zdb, maybe?
http://cuddletech.com/blog/?p=407
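
For example, something like this dumps the block pointers (offsets and
sizes) of a single file, given its object number from ls -i (dataset and
path are placeholders):

# ls -i /tank/fs/sparsefile
    1234 /tank/fs/sparsefile
# zdb -ddddd tank/fs 1234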

-- 
Fajar


 I am working on efficient sparse file detection and backup through 
 lseek(SEEK_DATA/SEEK_HOLE)  on ZFS,  and I need to verify the result by 
 comparing the original sparse file
 and the copied file, so if there is such a tool available, it can be used to 
 analyze the start offset and length of each data extent.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Fajar A. Nugraha
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer white...@gmail.com wrote:
 Hi I wanted to get some expert advice on this. I have an ordinary hardware
 SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
 that if possible with my VMware environment where I run several Solaris /
 OpenSolaris virtual machines. My question is regarding the virtual disks.

 1. Should I create individual iSCSI LUNs and present those to the VMware
 ESXi host as iSCSI storage, and then create virtual disks from there on each
 Solaris VM?

  - or -

 2. Should I (assuming this is possible), let the Solaris VM mount the iSCSI
 LUNs directly (that is, NOT show them as VMware storage but let the VM
 connect to the iSCSI across the network.) ?

 Part of the issue is I have no idea if having a hardware RAID 5 or 6 disk
 set will create a problem if I then create a bunch of virtual disks and then
 use ZFS to create RAIDZ for the VM to use. Seems like that might be asking
 for trouble.

The ideal solution would be to present all disks directly as JBOD to
solaris without any raid/virtualization (from either the storage or
vmware).

If you use (1), you'd pretty much given up data integrity check to the
lower layer (SAN + ESXi). In this case you'd probably better off
simply using stripe on zfs side (there's not much advantage of using
raidz if the block device would reside on the same physical disk in
the SAN anyway).

If you use (2), you should have the option of exporting each raw disk
on the SAN as a LUN to solaris, and you can create mirror/raidz from
it. However this setup is more complicated (e.g. need to setup the SAN
in a specific way, which it may or may not be capable of), plus
there's a performance overhead from vmware virtual network.

Personally I'd chose (1), and use zfs simply for it's
snapshot/clone/compression capability, not for its data integrity
check.
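
For (1), a minimal sketch on the Solaris VM side (device and dataset
names here are just examples):

# one virtual disk presented to the VM as c1t1d0, used as a single-device pool
zpool create tank c1t1d0
zfs set compression=on tank
zfs create tank/data
zfs snapshot tank/data@before-change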

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] native ZFS on Linux

2011-02-13 Thread Fajar A. Nugraha
On Sun, Feb 13, 2011 at 7:40 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Sat, Feb 12, 2011 at 08:54:26PM +0100, Roy Sigurd Karlsbakk wrote:
  I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support.
  Any pointers to more info on this?

 There are some work in progress from http://zfsonlinux.org/, but the posix 
 layer was still lacking last I checked


 kqstor made the posix layer.

There was an effort to create a separate posix layer, parallel to the
one done by kq. It's not yet fully functional.

https://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/00692385519bf096#
https://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/5305355200ac0b38#

If a Linux distro is shipping zfs right now, it's either zfs-fuse or kq's.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best choice - file system for system

2011-01-30 Thread Fajar A. Nugraha
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
 On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey 
 opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
2- When you want to restore, it's all or nothing.  If a single bit is
corrupt in the data stream, the whole stream is lost.

Regarding point #2, I contend that zfs send is better than ufsdump.  I would
prefer to discover corruption in the backup, rather than blindly restoring
it undetected.

 OTOH, it renders ZFS send useless for backup or archival purposes.

... unless the backup/archive is also on zfs with enough redundancy
(e.g. raidz).
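
That is, instead of keeping the raw 'zfs send' stream as a file,
receive it into a redundant pool so that bit rot on the backup media is
detected (and, with raidz, repaired) by zfs itself. A sketch, with
made-up pool/device/snapshot names:

zpool create backup raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs send -R tank@weekly | zfs receive -d backup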

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A few questions

2011-01-08 Thread Fajar A. Nugraha
On Thu, Jan 6, 2011 at 11:36 PM, Garrett D'Amore garr...@nexenta.com wrote:
 On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
 See my point?  Next time I buy a server, I do not have confidence to
 simply expect solaris on dell to work reliably.  The same goes for solaris
 derivatives, and all non-sun hardware.  There simply is not an adequate
 qualification and/or support process.


 When you purchase NexentaStor from a top-tier Nexenta Hardware Partner, you

Where is the list? Is this the one on
http://www.nexenta.com/corp/technology-partners-overview/certified-technology-partners
?

 get a product that has been through a rigorous qualification process which
 includes the hardware and software configuration matched together, tested
 with an extensive battery.  You also can get a higher level of support than
 is offered to people who build their own systems.

 Oracle is *not* the only company capable of performing in depth testing of
 Solaris.

Does this roughly mean I can expect similar (or even better) hardware
compatibility and support with NexentaStor on Supermicro as with
Solaris on Oracle/Sun hardware?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Fajar A. Nugraha
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel rich...@ellipseinc.com wrote:
 I've tried ssh blowfish and scp arcfour. both are CPU limited long before the 
 10g link is.

 I'vw also tried mbuffer, but I get broken pipe errors part way through the 
 transfer.

 I'm open to ideas for faster ways to either zfs send directly or through a 
 compressed file of the zfs send output.

 For the moment I;

 zfs send | pigz
 scp (arcfour) the gz file to the remote host
 gunzip | zfs receive

 This takes a very long time for 3 TB of data, and barely makes use the 10g 
 connection between the machines due to the CPU limiting on the scp and gunzip 
 processes.

I usually do

zfs send | lzop -c | ssh -c blowfish ip_of_remote_server "lzop -dc |
zfs receive"

lzop is much faster than gzip. I'd also check how fast your disks are,
make sure it's not the bottleneck.
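
Spelled out with hypothetical dataset/snapshot names (adjust to your
pools):

zfs snapshot tank/data@today
zfs send tank/data@today | lzop -c | \
  ssh -c blowfish ip_of_remote_server "lzop -dc | zfs receive -F backup/data"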

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Fajar A. Nugraha
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote:
 I am sorry you feel that way.  I will look at your issue as soon as I am 
 able, but I should say that it is almost certain that whatever the problem 
 is, it probably is inherited from OpenSolaris and the build of NCP you were 
 testing was indeed not the final release so some issues are not entirely 
 surprising.

So would you say that NCP / NexentaStor Community 3.0.3 is good enough
to use today as a stand-in replacement for the last available build of
OpenSolaris when used primarily as a storage server?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Fajar A. Nugraha
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
 I think the point is to say:  ZFS software raid is both faster and more
 reliable than your hardware raid.  Surprising though it may be for a
 newcomer, I have statistics to back that up,

Can you share it?

 You will do best if you configure the raid controller to JBOD.

Problem: HP's storage controller doesn't support that mode.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-19 Thread Fajar A. Nugraha
On Fri, Mar 19, 2010 at 12:38 PM, Rob slewb...@yahoo.com wrote:
 Can a ZFS send stream become corrupt when piped between two hosts across a 
 WAN link using 'ssh'?

Unless the end computers themselves are bad (memory problems, etc.),
the answer should be no. ssh has its own error detection, and the zfs
send stream itself is checksummed.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] why L2ARC device is used to store files ?

2010-03-06 Thread Fajar A. Nugraha
On Sat, Mar 6, 2010 at 3:15 PM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
 abdul...@hp_hdx_16:~/Downloads# zpool iostat -v hdd
                 capacity     operations    bandwidth
 pool          used  avail   read  write   read  write
 ----------   -----  -----  -----  -----  -----  -----
 hdd          1.96G  17.7G     10     64  1.27M  7.76M
   c7t0d0p3   1.96G  17.7G     10     64  1.27M  7.76M

You only have 17.7GB of free space there, not the 50GB you mentioned earlier.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-23 Thread Fajar A. Nugraha
On Wed, Feb 24, 2010 at 9:11 AM, patrik s...@dentarg.net wrote:
 This is zpool import from my machine with OpenSolaris 2009.06 (all zpool's 
 are fine in FreeBSD). Notice that the zpool named temp can be imported. Why 
 not secure then? Is it because it is raidz1?

 status: One or more devices contains corrupted data.

            c8t3d0s8  UNAVAIL  corrupted data
            c8t4d0s8  UNAVAIL  corrupted data

I'd suggest you try reimporting them in FreeBSD; it's possible those
disks really are corrupted.
Another option is to boot the latest OpenSolaris livecd from
genunix.org and try to import the pool there.
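
From the livecd, roughly (pool name taken from your listing; use -f
only if you're sure no other host still has the pool imported):

zpool import            # lists importable pools and their device status
zpool import -f secure
zpool status -v secure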

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS mirrored boot disks

2010-02-19 Thread Fajar A. Nugraha
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull t...@nrg-inc.com wrote:
 Interestingly, with the machine running, I can pull the first drive in the 
 mirror, replace it with an unformatted one, format it, mirror rpool over to 
 it, install the boot loader, and at that point the machine will boot with no 
 problems. It's just when the first disk is missing that I have a problem 
 with it.

I had a problem cloning a disk for an xVM domU where it hung just
after displaying the hostname, similar to your result. I had to boot
with a livecd, force-import and export the pool, and reboot. That
worked, so you might want to try it.
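
Roughly, from the livecd shell (assuming the pool is named rpool):

zpool import -f rpool   # -f overrides the "pool in use by another system" check
zpool export rpool      # leaves a clean label so the installed OS imports it at boot
reboot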

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD and ZFS

2010-02-16 Thread Fajar A. Nugraha
On Sun, Feb 14, 2010 at 12:51 PM, Tracey Bernath tbern...@ix.netcom.com wrote:
 I went from all four disks of the array at 100%, doing about 170 read
 IOPS/25MB/s
 to all four disks of the array at 0%, once hitting nearly 500 IOPS/65MB/s
 off the cache drive (@ only 50% load).


 And, keep  in mind this was on less than $1000 of hardware.

Really? The complete box and all, or just the disks? Because the 4
disks alone should cost about $400. Did you use ECC RAM?

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-11 Thread Fajar A. Nugraha
On Fri, Feb 12, 2010 at 10:55 AM, Tony MacDoodle tpsdoo...@gmail.com wrote:
 I am getting the following message when I try and remove a snapshot from a
 clone:

 bash-3.00# zfs destroy data/webser...@sys_unconfigd
 cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
 use '-R' to destroy the following datasets:

Is there something else below that line, like the names of the clones?

 The datasets are being used, so why can't I delete the snapshot?

Because it's used as the base for one or more clones.
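
To see which datasets depend on it (any dataset whose origin is that
snapshot is a clone of it):

zfs get -r -o name,value origin data | grep sys_unconfigd
# if you'd rather keep a clone and drop the original, 'zfs promote <clone>'
# transfers the snapshot to the clone, reversing which side depends on which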

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Fajar A. Nugraha
On Sat, Feb 6, 2010 at 1:32 AM, J jahservan...@gmail.com wrote:
 saves me hundreds on HW-based RAID controllers ^_^

... which you might need to fork over to buy additional memory or faster CPU :P

Don't get me wrong, zfs is awesome, but to do what it does it needs
more CPU power and RAM (and possibly an SSD) compared to other
filesystems. If your main concern is cost, then a HW raid controller
might be more cost-effective.

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-30 Thread Fajar A. Nugraha
On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
 Hi Michelle,

 You're almost there, but install the bootblocks in s0:

 # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0

One question: I thought -m installs stage1 into the MBR (and thus not
really into s0, as your description says)? Shouldn't it be fine
without -m?
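
That is, simply:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0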

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

