Brandon High wrote:
On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
The question is, how should I partition the drives, and what tuning
parameters should I use for the pools and file systems? From reading
the best practices guides, it seems that I cannot have
The next stable (as in Fedora or Ubuntu releases) OpenSolaris version
will be 2008.11.
In my case I found 2008.05 is simply unusable (my
main interest is xen/xvm), but upgrading to the latest available build
with OS's pkg (similar to apt-get) fixed the problem.
installed the original OS
On Fri, Oct 3, 2008 at 10:37 PM, Vasile Dumitrescu
[EMAIL PROTECTED] wrote:
VMWare 6.0.4 running on Debian unstable,
Linux bigsrv 2.6.26-1-amd64 #1 SMP Wed Sep 24 13:59:41 UTC 2008 x86_64
Solaris is vanilla snv_90 installed with no GUI.
in summary: physical disks, assigned
Per subject, has anyone successfully used b99 with HP hardware?
I've been using opensolaris for some time on HP Blade. Installing from
os200805 back in June works fine, with the caveat that I had to manually
add cpqary3 driver (v 1.90 was available back then).
After installation, I regularly
Fajar A. Nugraha wrote:
After that, I decided to upgrade my zpool (b98 has zpool v13). Surprise,
surprise, the system now doesn't boot at all. Apparently I got hit by
this bug :
b98 CD works as expected. I made a little
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
I think gzip compression is not supported on zfs root. Try compression=on.
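A sketch of the suggested workaround, reusing the pool name from the post:
# zfs set compression=on rootpool
# zfs get compression rootpool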
In the end I managed to install OpenSolaris snv_101b on hp blade on smart
array drive directly from install cd. Everything is fine. The problems I
experienced with hangs on boot on snv_99+ are related to the Qlogic driver, but
this is a different story.
(CC-ing both lists, since I posted the question to both under different
In my situation qlogic cannot get in sync on the second FC port (I can see it on
If you just want to bypass loading of the qlc driver, add this in grub
before booting the system:
LEES, Cooper wrote:
Since I know (via backing up my sharetab) what shares I need to
have (all nfs share - no cifs on this 4500 - YAY) and have organised
downtime for this server, would it be easier for me to go to Solaris
10u6 (or opensolaris) by just installing from scratch and re-importing
On Sun, Jan 11, 2009 at 11:40 AM, Marvin Wang, Min mail...@yahoo.com wrote:
sorry, my question might be misleading, I mean the starting datablock of a
"starting" is somewhat misleading, since zfs will allocate a new block
whenever a block is updated, so the physical blocks for a file are not
On Tue, Jan 27, 2009 at 7:16 PM, Alex a...@pancentric.com wrote:
I am using the HP driver (cpqary3) for the Smart Array P400 (in a HP Proliant
DL385 G2) with 10k 2.5 146GB SAS drives. The drives appear correctly,
however due to the controller not offering JBOD functionality I had to
On Tue, Jan 27, 2009 at 10:41 PM, Edmund White ewwh...@mac.com wrote:
Given that I have lots of ProLiant equipment, are there any recommended
controllers that would work in this situation? Is this an issue unique to
the Smart Array controllers?
It's an issue with controllers that can't present
On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
If there are two (or more) instances of ZFS in the end-to-end data
path, each instance is responsible for its own redundancy and error
recovery. There is no
On Tue, Feb 3, 2009 at 8:00 PM, Niels Van Hee nvan...@gmail.com wrote:
the snapshots are indeed large files (just over 4gb).
Just wondering, why didn't you compress it first? something like
zfs send | gzip > backup.zfs.gz
It should save lots of network bandwidth. Plus, IF it does
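A minimal sketch of that pipeline end to end, assuming a hypothetical
tank/data@backup snapshot:
zfs send tank/data@backup | gzip > backup.zfs.gz
gunzip -c backup.zfs.gz | zfs receive tank/restored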
On Wed, Feb 4, 2009 at 1:28 AM, Bob Friesenhahn
On Tue, 3 Feb 2009, Fajar A. Nugraha wrote:
Just wondering, why didn't you compress it first? something like
zfs send | gzip > backup.zfs.gz
The 'lzop' compressor is much better for this since it is *much
On Wed, Feb 4, 2009 at 10:41 AM, Fajar A. Nugraha fa...@fajar.net wrote:
Just wondering, why didn't you compress it first? something like
zfs send | gzip > backup.zfs.gz
The 'lzop' compressor is much better for this since it is *much* faster.
Sure, when enabling compression on zfs fs.
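The same pipeline with lzop in place of gzip, as suggested above (a sketch;
lzop trades some compression ratio for much higher throughput):
zfs send tank/data@backup | lzop > backup.zfs.lzo
lzop -dc backup.zfs.lzo | zfs receive tank/restored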
On Wed, Feb 4, 2009 at 9:31 AM, Jean-Paul Rivet jri...@bigpond.net.au wrote:
I've been searching without success if this has been done or even discussed
Would it be possible now or in the future to use an SD Card on a laptop as a
cache for ZFS?
When using 'pfexec zpool add
On Wed, Feb 4, 2009 at 6:19 PM, Michael McKnight
# split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
But when I compare the checksum of the original snapshot to that of the
rejoined snapshot, I get a different result:
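For reference, a sketch of rejoining the pieces and comparing checksums with
the Solaris digest(1) utility, using the filenames from the post:
# cat mypictures.zfssnap.split.* > mypictures.zfssnap.joined
# digest -v -a md5 mypictures.zfssnap mypictures.zfssnap.joined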
2009/2/18 Ragnar Sundblad ra...@csc.kth.se:
For our file- and mail servers we have been using mirrored raid-5
chassises, with disksuite and ufs. This has served us well, and the
For some reason that I haven't gotten
yet, zfs doesn't allow you to put raids upon each other, like
On Tue, Mar 3, 2009 at 8:35 AM, Julius Roberts hooliowobb...@gmail.com wrote:
but previously using zfs-fuse (on Ubuntu 8.10), this was not possible.
to look at a snapshot we had to clone the snapshot ala:
sudo zfs clone zpoolname/zfsn...@snapname_somedate
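With native zfs the clone step is unnecessary, since every snapshot is
browsable read-only under the hidden .zfs directory (a sketch with
hypothetical names):
$ ls /zpoolname/zfsname/.zfs/snapshot/snapname_somedate/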
On Wed, Mar 4, 2009 at 2:07 PM, Stephen Nelson-Smith sanel...@gmail.com wrote:
As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, such as a 7210, or
something built from a JBOD and a head node that does something
similar. The 7210
2009/3/27 Matthew Angelo bang...@gmail.com:
Doing an $( ls -lR | grep -i "IO Error" ) returns roughly 10-15 files which
If ls works then tar, cpio, etc. should work.
On Sat, Mar 28, 2009 at 5:05 AM, Harry Putnam rea...@newsguy.com wrote:
Now I'm wondering if the export/import sub commands might not be a
good bit faster.
I believe the greatest advantage of zfs send/receive over rsync is not
about speed, but rather it's on zfs send -R, which would (from man
I have a backup system using zfs send/receive (I know there are pros
and cons to that, but it's suitable for what I need).
What I have now is a script which runs daily, does zfs send, compresses
and writes the result to a file, then transfers it with ftp to a remote host. It
does a full backup every 1st, and does
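A minimal sketch of such a daily job, with hypothetical pool and path names
(the real script would also handle incrementals and the ftp transfer):
#!/bin/sh
SNAP=tank/data@`date +%Y%m%d`
zfs snapshot $SNAP
zfs send $SNAP | gzip > /backup/`date +%Y%m%d`.zfs.gz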
On Sun, Mar 29, 2009 at 3:40 AM, Brent Jones br...@servuhome.net wrote:
I have since modified some scripts out there, and rolled them into my
own, you can see it here at pastebin.com:
Your script seems to handle failed replication and locking
On Mon, Apr 6, 2009 at 4:41 PM, John Levon john.le...@sun.com wrote:
I see a couple of bugs about lofi performance like 6382683, but I'm not sure
related, it seems to be a newer issue.
Isn't it 6806627?
On Wed, Apr 8, 2009 at 4:06 PM, Tomas Ögren st...@acc.umu.se wrote:
Do you think there is something that can be done to recover lost data?
Does 'zpool import' find anything? devfsadm -v to re-scan devices
... or info from the other thread
boot from disk into
On Sat, Apr 11, 2009 at 4:41 AM, Harry Putnam rea...@newsguy.com wrote:
Thanks for the input... I guess I'll have to wait and see what is
resolved in the other thread about what happens when you rsync data to
a zfs filesystem and how that interplays with zfs snapshots going on too.
On Wed, Apr 15, 2009 at 10:49 PM, Uwe Dippel udip...@gmail.com wrote:
Bob Friesenhahn wrote:
Since it was not reported that user data was impacted, it seems likely
that there was a read failure (or bad checksum) for ZFS metadata which is
(Maybe I am too much of a
On Tue, Apr 28, 2009 at 11:49 AM, Julius Roberts
juli...@rainforest:~$ cat /etc/issue
Ubuntu 9.04 \n \l
juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse
ii zfs-fuse 0.5.1-1ubuntu5
First of all this question might be
On Tue, Apr 28, 2009 at 9:42 AM, Scott Lawson
Mainstream Solaris 10 gets a port of ZFS from OpenSolaris, so its
features are fewer and later. As time ticks away, fewer features
will be back-ported to Solaris 10. Meanwhile, you can get a production
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
IMHO it's probably
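Besides arcstat.pl, the raw numbers are exposed through kstat (a sketch;
size is the current ARC footprint, c its target):
$ kstat -p zfs:0:arcstats:size
$ kstat -p zfs:0:arcstats:c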
On Wed, May 6, 2009 at 10:17 PM, Richard Elling
Fajar A. Nugraha wrote:
IMHO it's probably best to set a limit on ARC size and treat it like
any other memory used by applications.
There are a few cases where this makes sense, but not many. The ARC
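For the cases where a cap does make sense, it goes in /etc/system and takes
effect after a reboot; a sketch with an example value of 4 GB:
set zfs:zfs_arc_max = 0x100000000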
On Wed, May 13, 2009 at 11:32 AM, Erik Trimble erik.trim...@sun.com wrote:
the zfs send and zfs receive commands can be used analogously to
ufsdump and ufsrestore.
You'll have to create the root pool by hand when doing a system restore, but
it's not really any different than having to
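A sketch of that restore path, with hypothetical device, stream-file and BE names:
# zpool create -f rpool c0t0d0s0
# zfs receive -Fdu rpool < /backup/rpool.zfssnap
# zpool set bootfs=rpool/ROOT/be1 rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0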
On Wed, May 20, 2009 at 4:06 AM, Ahmed Kamal
Is anyone even using ZFS under Xen in production in some form. If so, what's
your impression of reliability ?
I'm using zfs-fuse on Linux domU on top of LVM on Linux dom0/Xen 3.1.
Not exactly a recommended
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Hommekjeti...@linpro.no wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but
On Thu, Jun 18, 2009 at 10:56 AM, Dave Ringkorno-re...@opensolaris.org wrote:
But what if I used zfs send to save a recursive snapshot of my root pool on
the old server, booted my new server (with the same architecture) from the
DVD in single user mode and created a ZFS pool on its local
On Thu, Jun 18, 2009 at 4:28 AM, Miles Nordincar...@ivy.net wrote:
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
NAS (NSLU2-ish) things begging for ZFS.
Are they feasible
On Thu, Jun 18, 2009 at 8:01 AM, Cesar Augusto Suarezcemaa...@yahoo.com wrote:
I have Ubuntu jaunty already installed on my pc, on the second HD, I've
Now, I can't share info between these 2 OSes.
I downloaded and installed ZFS-FUSE on jaunty, but the version is 6, instead in
On Sat, Jun 20, 2009 at 9:18 AM, Dave Ringkorno-re...@opensolaris.org wrote:
What would be wrong with this:
1) Create a recursive snapshot of the root pool on homer.
2) zfs send this snapshot to a file on some NFS server.
3) Boot my 220R (same architecture as the E450) into single user mode
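Steps 1 and 2 as commands (a sketch; the snapshot name and NFS path are
hypothetical):
# zfs snapshot -r rpool@migrate
# zfs send -R rpool@migrate > /net/nfsserver/backup/homer-rpool.zfs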
On Sat, Jun 20, 2009 at 2:53 AM, Miles Nordincar...@ivy.net wrote:
fan == Fajar A Nugraha fa...@fajar.net writes:
et == Erik Trimble erik.trim...@sun.com writes:
fan The N610N that I have (BCM3302, 300MHz, 64MB) isn't even
fan powerful enough to saturate either the gigabit wired
On Sat, Jun 20, 2009 at 7:02 PM, Michael
One really interesting bit is how easy it is to make the disk in a pool
bigger by doing a zpool replace on the device. It couldn't have been any
easier with ZFS.
It's interesting how you achieved that,
On Tue, Jun 23, 2009 at 1:13 PM, Rossno-re...@opensolaris.org wrote:
Look at how the resilver finished:
c1t3d0 ONLINE 3 0 0 128K resilvered
c1t4d0 ONLINE 0 0 11 473K resilvered
c1t5d0 ONLINE 0 0 23 986K resilvered
On Wed, Jun 24, 2009 at 6:32 PM, Simon Bredenno-re...@opensolaris.org wrote:
Although, it seems possible to add a drive to form a mirror for the ZFS boot
pool 'rpool', the main problem I see is that in my case, I would be
attempting to form a mirror using a smaller drive
On Thu, Jul 23, 2009 at 12:29 PM, thomasno-re...@opensolaris.org wrote:
Hmm.. I guess that's what I've heard as well.
I do run compression and believe a lot of others would as well. So then, it seems
to me that if I have guests that run a filesystem formatted with 4k blocks for
On Fri, Jul 24, 2009 at 9:24 AM, Jorgen Lundmanlund...@gmo.jp wrote:
However, zpool detach appears to mark the disk as blank, so nothing will
find any pools (import, import -D etc). zdb -l will show labels,
If both disks are bootable (with installboot or installgrub), removing
the mirror and
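Making each half of the mirror bootable is one command per disk (a sketch,
device name hypothetical). On x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
On SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0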
On Mon, Jul 27, 2009 at 2:51 PM, Axelle
I've already sent a few posts around this issue, but haven't quite got the
answer - so I'll try to clarify my question :)
Since I have upgraded from 2008.11 to 2009.06 a new BE has been created. On
On Tue, Aug 11, 2009 at 4:14 PM, Martin
Did anyone reply to this question?
We have the same issue and our Windows admins do see why the iSCSI target
should be disconnected when the underlying storage is extended
Is there any iscsi target that can be
On Fri, Aug 14, 2009 at 2:05 PM, Denis Ahrensno-re...@opensolaris.org wrote:
Some developers here said a long time ago that someone should show
the code for LZO compression support for ZFS before talking about the
Isn't the main problem the license, LZO being GPL while zfs is CDDL?
On Tue, Aug 18, 2009 at 2:37 PM, Matthew
So there must be basically lots of references to data that hide themselves
from the surface and can't really be found using zfs list.
zfs list -t all usually works for me. Look at USED and REFER
On Tue, Aug 18, 2009 at 4:09 PM, Matthew
Hi, thanks for the info.
Can you have a look at the attachment on the original post for me?
Everything you said is what I expected to see in the output there, but a lot
of the values are blank where I hoped
On Thu, Sep 10, 2009 at 3:27 PM, Ginodandr...@gmail.com wrote:
# cd /dr/netapp11bkpVOL34
# rm -r *
# ls -la
Now there are no files in /dr/netapp11bkpVOL34, but
# zfs list|egrep netapp11bkpVOL34
dr/netapp11bkpVOL34 1.34T 158G 1.34T
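One thing to check is whether snapshots still reference the deleted blocks
(a sketch):
# zfs list -t snapshot -r dr/netapp11bkpVOL34
# zfs get usedbysnapshots dr/netapp11bkpVOL34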
On Thu, Sep 10, 2009 at 8:03 PM, Maurilio Longo
I have a question, let's say I have a zvol named vol1 which is a clone of a
snapshot of another zvol (its origin property is tank/my...@mysnap).
If I send this zvol to a different zpool through a zfs send
On Thu, Sep 10, 2009 at 8:29 PM, Maurilio Longo
It'll send all necessary data (without having to
promote anything) so
that the receiving zvol has a working vol1, and it's
not a clone.
thanks for clarifying, this is what I was calling
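A sketch of verifying that behaviour, with hypothetical names; origin shows
"-" on the receiving side because a full stream arrives standalone:
# zfs snapshot tank/vol1@copy
# zfs send tank/vol1@copy | zfs receive otherpool/vol1
# zfs get origin otherpool/vol1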
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, where
On Fri, Sep 18, 2009 at 4:08 AM, Paul Archer p...@paularcher.org wrote:
I did a little research and found that parted on Linux handles EFI
labelling. I used it to change the partition scheme on sda, creating an
sda1. I then offlined sda and replaced it with sda1. I wish I had just tried
On Mon, Sep 28, 2009 at 2:20 PM, Jorgen Lundman lund...@gmo.jp wrote:
We would like to run something like svn_117 (don't really care which version
per-se, that is just the one version we have done the most testing with).
But our Vendor will only support Solaris 10. After weeks of wrangling,
On Wed, Sep 30, 2009 at 10:54 PM, David Dyer-Bennet d...@dd-b.net wrote:
On Wed, September 30, 2009 10:07, Robert Thurlow wrote:
David Dyer-Bennet wrote:
And I haven't been able to make incremental replication send/receive
Supposed to be working on that, but now I'm having trouble
On Thu, Oct 1, 2009 at 8:46 AM, David Dyer-Bennet d...@dd-b.net wrote:
Fajar A. Nugraha wrote:
x86, OpenSolaris. But I'm not terribly attracted to the idea of switching
to another, less familiar, virtualization product in hopes that it will
work. I really rather expected Sun's
On Mon, Oct 5, 2009 at 10:04 AM, Sam s...@smugmug.com wrote:
I've been having some serious problems with my RaidZ2 array since I updated
to 2009.06 on Friday (from 2008.05). It's 10 drives with 1 hot spare, with 8
drives on a SAS card and 3 drives on the motherboard's SATA connectors. I'm
On Tue, Oct 27, 2009 at 4:39 PM, Dennis Clarke dcla...@blastwave.org wrote:
So essentially there is no way to grow that zpool. Is this the case?
There's the option of getting a bigger disk and doing a send/receive.
I'm guessing the restriction is necessary for simplicity's sake to allow
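A sketch of that migration, with hypothetical pool and device names:
# zpool create bigpool c2t0d0
# zfs snapshot -r oldpool@move
# zfs send -R oldpool@move | zfs receive -Fdu bigpool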
On Wed, Dec 9, 2009 at 10:41 AM, Brent Jones br...@servuhome.net wrote:
I submitted a bug a while ago about this:
I'll escalate since I have a support contract. But yes, I see this as
a serious bug, I thought my machine had
On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-test
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/rb-test   18K   170G    18K  /rpool/rb-test
$ sudo zfs snapshot rpool/rb-t...@01
On Sat, Dec 26, 2009 at 4:10 PM, Saso Kiselkov skisel...@gmail.com wrote:
I'm still trying to find the case number I have open with Sunsolve or
whatever, it was for exactly this issue, and I believe the fix was to
add dozens more classes to the scheduler, to allow more fair disk
On Mon, Jan 4, 2010 at 5:52 AM, Mark Bennett mark.benn...@public.co.nz wrote:
Is it possible to import a zpool and stop it mounting the zfs file systems,
or override the mount paths?
Try zpool import -R ...
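A sketch of both variants: -R mounts everything relative to an alternate
root, and -N (on builds that have it) imports without mounting at all:
# zpool import -R /a tank
# zpool import -N tank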
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
I've been following this thread. Would it be faster to do the reverse:
copy the 20% of disk, then format, then
On Thu, Jan 14, 2010 at 6:40 AM, Gregory Durham
The virtual machines coming up as if they were on is the least of my
worries, my biggest worry is keeping the filesystems of the vms alive i.e.
As Tim said, the snapshot disks are in the same
On Fri, Jan 15, 2010 at 12:33 AM, Gregory Durham
I have been recommended by several other users on this mailing list to use
snapshots inside the vm, vmware snapshots, and then use zfs snapshots. I
believe I understand the difference between filesystem snapshots
On Mon, Jan 18, 2010 at 10:22 AM, Richard Elling
On Jan 17, 2010, at 11:59 AM, Tristan Ball wrote:
Is there a way to check the recordsize of a given file, assuming that the
filesystems recordsize was changed at some point?
I don't know of an easy way to do
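The dataset-wide setting is easy to read; per-file block sizes need zdb
(a sketch, dataset name hypothetical):
# zfs get recordsize tank/fs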
On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen
You're almost there, but install the bootblocks in s0:
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0
One question. I thought -m installs in MBR (thus not really
On Sat, Feb 6, 2010 at 1:32 AM, J jahservan...@gmail.com wrote:
saves me hundreds on HW-based RAID controllers ^_^
... which you might need to fork over to buy additional memory or faster CPU :P
Don't get me wrong, zfs is awesome, but to do so it needs more CPU
power and RAM (and possibly SSD)
On Fri, Feb 12, 2010 at 10:55 AM, Tony MacDoodle tpsdoo...@gmail.com wrote:
I am getting the following message when I try and remove a snapshot from a
bash-3.00# zfs destroy data/webser...@sys_unconfigd
cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
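A sketch of the usual ways out, with hypothetical dataset names: list
datasets' origins to find the clones, then either destroy the snapshot
together with its dependent clones (-R) or promote a clone you want to keep:
# zfs list -o name,origin -r data
# zfs destroy -R data/myfs@sys_unconfigd
# zfs promote data/myclone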
On Sun, Feb 14, 2010 at 12:51 PM, Tracey Bernath tbern...@ix.netcom.com wrote:
I went from all four disks of the array at 100%, doing about 170 read
to all four disks of the array at 0%, once hitting nearly 500 IOPS/65MB/s
off the cache drive (@ only 50% load).
And, keep in
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull t...@nrg-inc.com wrote:
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to
it, install the boot loader, and at that point the machine will boot with
On Wed, Feb 24, 2010 at 9:11 AM, patrik s...@dentarg.net wrote:
This is zpool import from my machine with OpenSolaris 2009.06 (all zpool's
are fine in FreeBSD). Notice that the zpool named "temp" can be imported. Why
not "secure" then? Is it because it is raidz1?
status: One or more devices
On Sat, Mar 6, 2010 at 3:15 PM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
abdul...@hp_hdx_16:~/Downloads# zpool iostat -v hdd
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
On Fri, Mar 19, 2010 at 12:38 PM, Rob slewb...@yahoo.com wrote:
Can a ZFS send stream become corrupt when piped between two hosts across a
WAN link using 'ssh'?
unless the end computers are bad (memory problems, etc.), then the
answer should be no. ssh has its own error detection method, and
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Can you share it?
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote:
I am sorry you feel that way. I will look at your issue as soon as I am
able, but I should say that it is almost certain that whatever the problem
is, it probably is inherited from OpenSolaris and the build of NCP
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel rich...@ellipseinc.com wrote:
I've tried ssh blowfish and scp arcfour. both are CPU limited long before the
10g link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
I'm open to ideas for faster ways
On Thu, Jan 6, 2011 at 11:36 PM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
See my point? Next time I buy a server, I do not have confidence to
simply expect solaris on dell to work reliably. The same goes for solaris
derivatives, and all
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the
On Sun, Feb 13, 2011 at 7:40 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Sat, Feb 12, 2011 at 08:54:26PM +0100, Roy Sigurd Karlsbakk wrote:
I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support.
Any pointers to more info on this?
There is some work in progress from
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer white...@gmail.com wrote:
Hi I wanted to get some expert advice on this. I have an ordinary hardware
SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
that if possible with my VMware environment where I run several Solaris
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu firstname.lastname@example.org wrote:
I'd like to know if there is a utility like `Filefrag' shipped with
e2fsprogs on linux, which is used to fetch the extents mapping info of a
file(especially a sparse file) located on ZFS?
Something like zdb
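One unsupported but workable approach is zdb (a sketch; the object number
for a file comes from ls -i, and the names here are hypothetical):
# ls -i /tank/fs/sparsefile
# zdb -ddddd tank/fs <object-number>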
On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote:
On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:
Newer versions of FreeBSD have newer ZFS code.
Yes, we are at v28 at this point (the latest open-source version).
That said, ZFS on FreeBSD is kind
On Wed, Mar 23, 2011 at 7:33 AM, Jeff Bacon ba...@walleyesoftware.com wrote:
I've also started conversations with Pogo about offering an
based workstation, which might be another option if you prefer more of
Sometimes I'm left wondering if anyone uses the non-Oracle versions for
On Mon, Apr 4, 2011 at 4:16 AM, Daxter xovat...@gmail.com wrote:
My goal is to optimally have two 1TB drives inside of a rather small computer
of mine, running Solaris, which can sync with and be a backup of my somewhat
portable 2TB drive. Up to this point I have been using the 2TB drive
On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:
What can I do so that zpool shows the new value?
zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
I tried your suggestion, but no effect.
Did you modify the partition table?
IIRC if you pass a DISK to zpool
On Mon, Apr 4, 2011 at 7:58 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
IIRC if you pass a DISK to zpool create, it would create
a partition/slice on it, either with SMI (the default for rpool) or EFI
(the default for other pool). When the disk size changes (like when
you change LUN size
On Mon, Apr 4, 2011 at 6:48 PM, For@ll for...@stalowka.info wrote:
When I tested with openindiana b148, simply setting zpool set
autoexpand=on is enough (I tested with Xen, and an openindiana reboot is
required). Again, you might need to set both autoexpand=on and
resize the partition slice.
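A sketch of the whole-disk case after growing the LUN (pool name from the
thread, device name hypothetical):
# zpool set autoexpand=on TEST
# zpool online -e TEST c2t0d0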
On Fri, Apr 8, 2011 at 2:10 PM, Arjun YK arju...@gmail.com wrote:
I have a situation where a host, which is booted off its 'rpool', need
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
On Fri, Apr 8, 2011 at 2:24 PM, Arjun YK arju...@gmail.com wrote:
Let me add another query.
I would assume it would be perfectly ok to choose any name for root
pool, instead of 'rpool', during the OS install. Please suggest
Have you tried it?
Last time I tried, the pool name
On Fri, Apr 8, 2011 at 2:37 PM, Stephan Budach stephan.bud...@jvm.de wrote:
You can rename a zpool at import time by simply issuing:
zpool import oldpool newpool
Yes, I know :)
The last question from Arjun was can we choose any name for root
pool, instead of 'rpool', during the OS install
On Thu, May 12, 2011 at 8:31 PM, Arjun YK arju...@gmail.com wrote:
Thanks everyone. Your inputs helped me a lot.
The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
mount it. But I am not certain whether that can cause any issue in the future, or
whether that's the right thing to do.
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov j...@cos.ru wrote:
However it seems that there may be some extra data beside the zfs
pool in the actual volume (I'd at least expect an MBR or GPT, and
maybe some iSCSI service data as an overhead). One way or another,
the dcpool can not be found in
On Wed, Jun 1, 2011 at 7:06 AM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On 05/31/11 09:01, Anonymous wrote:
Hi. I have a development system on Intel commodity hardware with a 500G ZFS
root mirror. I have another 500G drive same as the other two. Is there any
way to use this disk to good
On Tue, Jun 14, 2011 at 7:15 PM, Jim Klimov jimkli...@cos.ru wrote:
A college friend of mine is using Debian Linux on his desktop,
and wondered if he could tap into ZFS goodness without adding
another server in his small quiet apartment or changing the
desktop OS. According to his
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith smith...@llnl.gov wrote:
When I tried out Solaris 11, I just exported the pool prior to the install of
Solaris 11. I was lucky in that I had mirrored the boot drive, so after I had
installed Solaris 11 I still had the other disk in the mirror
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith smith...@llnl.gov wrote:
Generally, the log devices are listed after the pool devices.
Did this pool have log devices at one time? Are they missing?
Yes the pool does have logs. I'll include a zpool status -v below
from when I'm booted in