The question that has occurred to me is:
I *must* choose one of those support options for how long?
I mean if I buy support for a machine for a year and put S11 Express
in production on it, then I don't renew the support, am I now
violating the license?
That's bogus. I could be wrong but I
Does OpenSolaris/Solaris11 Express have a driver for it already?
Anyone used one already?
-Kyle
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi all,
I'd like to give my machine a little more swap.
I ran:
zfs get volsize rpool/swap
and saw it was 2G
So I ran:
zfs set volsize=4G rpool/swap
to double it. zfs get shows it took effect, but swap -l doesn't show
any change.
I ran swap
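On Solaris, swap doesn't pick up a zvol resize automatically; a common fix (a sketch, assuming the default rpool/swap device path) is to remove the device from swap, grow the zvol, and add it back:

```shell
# remove the zvol from swap, grow it, then add it back
# (sketch; the device path assumes the default rpool/swap layout)
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=4G rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
swap -l   # should now show the larger device
```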
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote:
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi
heads, I'm in love. But for one thing: the interconnect between
the head and the storage.
1G Ether is so cheap, but not as fast as
I'm shopping for an SSD for a ZIL.
Looking around on NewEgg, at the claimed (not sure I believe them)
IOPS, these caught my attention:
Corsair Force 80GB CSSD-F80GBP2-BRKT: 50K 4K-aligned random write IOPS
OCZ Vertex 2 120GB
On 8/7/2010 4:11 PM, Terry Hull wrote:
It is just that lots of the PERC controllers do not do JBOD very well. I've
done it several times making a RAID 0 for each drive. Unfortunately, that
means the server has lots of RAID hardware that is not
On 10/25/2010 3:39 AM, Markus Kovero wrote:
Any other feasible alternatives for Dell hardware? Wondering, are these
issues mostly related to Nehalem-architectural problems, eg. c-states.
So is there anything good in switching hw vendor? HP
Hi All,
I'm currently considering purchasing 1 or 2 Dell R515's.
With up to 14 drives, and up to 64GB of RAM, it seems like it's well
suited for a low-end ZFS server.
I know this box is new, but I wonder if anyone out there has any experience
On 10/18/2010 4:28 AM, Habony, Zsolt wrote:
I worry about head thrashing.
Why?
If your SAN group gives you a LUN that is at the opposite end of the
array, I would think that was because they had already assigned the
space in the middle to other
On 10/18/2010 5:40 AM, Habony, Zsolt wrote:
(I do not mirror, as the storage gives redundancy behind LUNs.)
By not enabling redundancy (Mirror or RAIDZ[123]) at the ZFS level,
you are opening yourself to corruption problems that the underlying
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then
each block on disk will be 128K whenever possible. But if you're
using raidzN with a capacity of M disks (M disks useful capacity +
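The arithmetic being described can be sketched with illustrative numbers: a 128K record striped over M data disks gives each disk roughly 128K/M per record (parity excluded):

```shell
# illustrative only: per-disk chunk of one 128K record on a raidz
# vdev with 4 data disks (parity disks not counted)
recordsize=131072
data_disks=4
echo $((recordsize / data_disks))   # prints 32768 bytes per data disk
```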
to work on
the code myself if it were available.
Anyone have any ideas?
-Kyle
On 7/7/2010 3:12 PM, Kyle McDonald wrote:
On 6/24/2010 6:31 PM, James C. McPherson wrote:
hi Kyle,
the serveraid driver was only ever a community effort; the
fact that it was done by a Sun engineer is actually
On 6/28/2010 10:30 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tristram Scott
If you would like to try it out, download the package from:
I've only very infrequently seen the RAMSAN devices mentioned here. Probably
due to price.
However a long time ago I think I remember someone suggesting a build it
yourself RAMSAN.
Where is the down side of one or 2 OS boxes with a whole lot of RAM
On 6/11/2010 12:32 AM, Erik Trimble wrote:
On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwalanu...@kqinfotech.com
wrote:
We at KQInfotech, initially started on an independent port of ZFS to
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote:
Everything is faster with more ram. There is no limit, unless the total
used disk in your system is smaller than the available ram in your system
... which seems very improbable.
Off topic, but...
On 5/27/2010 2:45 PM, Jan Kryl wrote:
Hi Frank,
On 24/05/10 16:52 -0400, Frank Middleton wrote:
Many many moons ago, I submitted a CR into bugs about a
highly reproducible panic that occurs if you try to re-share
a lofi mounted image. That CR has AFAIK long since
disappeared - I
On 5/27/2010 9:30 PM, Reshekel Shedwitz wrote:
Some tips…
(1) Do a zfs mount -a and a zfs share -a. Just in case something didn't get
shared out correctly (though that's supposed to automatically happen, I think)
(2) The Solaris automounter (i.e. in a NIS environment) does not seem to
Hi,
I know the general discussion is about flash SSD's connected through
SATA/SAS or possibly PCI-E these days. So excuse me if I'm asking
something that makes no sense...
I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
300GB for a data pool, and 1 18GB for the root pool.
On 5/25/2010 11:39 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive
SNIP a whole lot of ZIL/SLOG discussion
Hi guys.
yep I know about the ZIL, and SSD Slogs.
While setting Nexenta up it offered to disable the ZIL entirely. For
now I left it on. In the end (hopefully only for specific filesystems,
once that feature is released) I'll end up disabling the ZIL
Hi all,
I recently installed Nexenta Community 3.0.2 on one of my servers:
IBM eSeries X346
2.8Ghz Xeon
12GB DDR2 RAM
1 builtin BGE interface for management
4 port Intel GigE card aggregated for Data
IBM ServeRAID 7k with 256MB BB Cache (isp driver)
6 RAID0 single drive LUNS (so I can use
On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
valrh...@gmail.com valrh...@gmail.com writes:
I have been using DVDs for small backups here and there for a decade
now, and have a huge pile of several hundred. They have a lot of
overlapping content, so I was thinking of feeding the
On 5/3/2010 7:41 AM, Michelle Knight wrote:
The long ls command worked, as in it created the links, but they didn't work
properly under the ZFS SMB share.
I'm guessing you meant the 'long ln' command?
If you look at what those 2 commands create, you'll notice (in the output
of ls -l) that
On 5/3/2010 4:56 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
If you're only sharing them to Linux machines, then NFS would be so
much
easier to use. You'll still want relative links though
On 3/9/2010 1:55 PM, Matt Cowger wrote:
That's a very good point - in this particular case, there is no option to
change the blocksize for the application.
I have no way of guessing the effects it would have, but is there a
reason that the filesystem blocks can't be a multiple of the
On 4/17/2010 9:03 AM, Edward Ned Harvey wrote:
It would be cool to only list files which are different.
Know of any way to do that?
cmp
Oh, no. Because cmp and diff require reading both files, it could take
forever, especially if you have a lot of snapshots to check,
On 4/16/2010 10:30 AM, Bob Friesenhahn wrote:
On Thu, 15 Apr 2010, Eric D. Mudama wrote:
The purpose of TRIM is to tell the drive that some # of sectors are no
longer important so that it doesn't have to work as hard in its
internal garbage collection.
The sector size does not typically
On 4/6/2010 3:41 PM, Erik Trimble wrote:
On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
latest recommendations for a log device?
http://bit.ly/aL1dne
The Vertex LE models should do well as ZIL (though
On 4/4/2010 11:04 PM, Edward Ned Harvey wrote:
Actually, It's my experience that Sun (and other vendors) do exactly
that for you when you buy their parts - at least for rotating drives, I
have no experience with SSD's.
The Sun disk label shipped on all the drives is setup to make the drive
I've seen the Nexenta and EON webpages, but I'm not looking to build my own.
Is there anything out there I can just buy?
-Kyle
On 4/2/2010 8:08 AM, Edward Ned Harvey wrote:
I know it is way after the fact, but I find it best to coerce each
drive down to the whole GB boundary using format (create Solaris
partition just up to the boundary). Then if you ever get a drive a
little smaller it still should fit.
It
On 3/27/2010 3:14 AM, Svein Skogen wrote:
On 26.03.2010 23:55, Ian Collins wrote:
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo frames in my case give me a boost of around 2 MB/s, so it's
not that much.
That is
On 3/30/2010 2:44 PM, Adam Leventhal wrote:
Hey Karsten,
Very interesting data. Your test is inherently single-threaded so I'm not
surprised that the benefits aren't more impressive -- the flash modules on
the F20 card are optimized more for concurrent IOPS than single-threaded
latency.
On 3/10/2010 3:27 PM, Robert Thurlow wrote:
As said earlier, it's the string returned from the reverse DNS lookup
that needs to be matched.
So, to make a long story short, if you log into the server
from the client and do who am i, you will get the host
name you need for the share.
Another
dick hoogendijk wrote:
glidic anthony wrote:
I have a solution using 'zfs set sharenfs=rw,nosuid zpool' but I prefer
to use the sharemgr command.
Then you prefer wrong.
To each their own.
ZFS filesystems are not shared this way.
They can be. I do it all the time. There's nothing
Darren J Moffat wrote:
Jozef Hamar wrote:
Hi all,
I can not find any instructions on how to set the file quota (i.e.
maximum number of files per filesystem/directory) or directory quota
(maximum size that files in particular directory can consume) in ZFS.
That is because it doesn't exist.
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format isn't and vice versa.
Jacob Ritorto wrote:
With the web redesign, how does one get to zfs-discuss via the
opensolaris.org website?
Sorry for the ot question, but I'm becoming desperate after
clicking circular links for the better part of the last hour :(
You can get the web pages to load? All I get are
David Magda wrote:
On Oct 24, 2009, at 08:53, Joerg Schilling wrote:
The article that was mentioned a few hours ago did mention
licensing problems without giving any kind of evidence for
this claim. If there is evidence, I would be interested in
knowing the background, otherwise it looks to me
Mike Bo wrote:
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way carve up the space, so I used a
Bob Friesenhahn wrote:
On Fri, 23 Oct 2009, Anand Mitra wrote:
One of the biggest questions around this effort would be “licensing”.
As far as our understanding goes; CDDL doesn’t restrict us from
modifying ZFS code and releasing it. However GPL and CDDL code cannot
be mixed, which implies
Owen Davies wrote:
Thanks. I took a look and that is exactly what I was looking for. Of course I
have since just reset all the permissions on all my shares but it seems that
the proper way to swap UIDs for users with permissions on CIFS shares is to:
Edit /etc/passwd
Edit /var/smb/smbpasswd
Scott Meilicke wrote:
I am still not buying it :) I need to research this to satisfy myself.
I can understand that the writes come from memory to disk during a txg write
for async, and that is the behavior I see in testing.
But for sync, data must be committed, and a SSD/ZIL makes that faster
Adam Sherman wrote:
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of todays computers DO. Personally i like
compact flash because it
Adam Sherman wrote:
On 6-Aug-09, at 11:50 , Kyle McDonald wrote:
i've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to
allow for usb booting. Most of todays computers DO. Personally i
like compact flash because
Martin wrote:
C,
I appreciate the feedback and like you, do not wish to start a side rant, but
rather understand this, because it is completely counter to my experience.
Allow me to respond based on my anecdotal experience.
What's wrong with make a new pool.. safely copy the data. verify
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving move my filer's rpool
to an ssd mirror to free up bigdisk slots currently used by the os and need to
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).
Your best bet would be to install the new
Will Murnane wrote:
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare. We reconfigured disks
a while ago and now the controller is c4 instead of c2. The hot spare
was originally on c2, and apparently on rebooting it didn't get
Volker A. Brandt wrote:
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well supported? (in AHCI mode?)
Be careful with nVidia if you want to use Samsung SATA disks.
There is a
Hi all,
I think I've read that the AMD 790FX/750SB chipset's SATA controller is
supported, but may have recently had bugs?
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well
dick hoogendijk wrote:
On Fri, 31 Jul 2009 18:38:16 +1000
Tristan Ball tristan.b...@leica-microsystems.com wrote:
Because it means you can create zfs snapshots from a non solaris/non
local client...
Like a linux nfs client, or a windows cifs client.
So if I want a snapshot of i.e.
Ralf Gans wrote:
Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.
The Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you have not even
an sshd to login over the network.
This is why I don't use the mountpoint settings in ZFS. I
Andriy Gapon wrote:
What do you think about the following feature?
Subdirectory is automatically a new filesystem property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems have
Darren J Moffat wrote:
Kyle McDonald wrote:
Andriy Gapon wrote:
What do you think about the following feature?
Subdirectory is automatically a new filesystem property - an
administrator turns
on this magic property of a filesystem, after that every mkdir *in
the root* of
that filesystem
Tristan Ball wrote:
It just so happens I have one of the 128G and two of the 32G versions in
my drawer, waiting to go into our DR disk array when it arrives.
Hi Tristan,
Just so I can be clear, What model/brand are the drives you were testing?
-Kyle
I dropped the 128G into a spare
Michael McCandless wrote:
I've read in numerous threads that it's important to use ECC RAM in a
ZFS file server.
My question is: is there any technical reason, in ZFS's design, that
makes it particularly important for ZFS to require ECC RAM?
I think, basically the idea is, that if you're
Bob Friesenhahn wrote:
Of course, it is my understanding that the zfs slog is written
sequentially so perhaps this applies instead:
Actually, reading up on these drives I've started to wonder about the
slog writing pattern. While these drives do seem to do a great job at
random writes,
Miles Nordin wrote:
km == Kyle McDonald kmcdon...@egenera.com writes:
km hese drives do seem to do a great job at random writes, most
km of the promise shows at sequential writes, so Does the slog
km attempt to write sequentially through the space given to it?
thwack
F. Wessels wrote:
Thanks for posting this solution.
But I would like to point out that bug 6574286 (removing a slog doesn't work)
still isn't resolved. A solution is on its way, according to George Wilson. But in
the mean time, IF something happens you might be in a lot of trouble. Even without
Brian Hechinger wrote:
On Thu, Jul 23, 2009 at 10:28:38AM -0400, Kyle McDonald wrote:
In my case the slog slice wouldn't be the slog for the root pool, it
would be the slog for a second data pool.
I didn't think you could add a slog to the root pool anyway. Or has
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 (removing a slog
doesn't work) still isn't resolved. A solution is on its way,
according to George Wilson
Richard Elling wrote:
On Jul 23, 2009, at 9:37 AM, Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 removing a slog
doesn't work still isn't
Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 9:37 AM, Kyle McDonald wrote:
Richard Elling wrote:
On Jul 23, 2009, at 7:28 AM, Kyle McDonald wrote:
F. Wessels wrote:
Thanks posting this solution.
But I would like to point out that bug 6574286 removing a slog
doesn't
Greg Mason wrote:
I think it is a great idea, assuming the SSD has good write performance.
This one claims up to 230MB/s read and 180MB/s write and it's only $196.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
Compared to this one (250MB/s read and 170MB/s write)
Adam Sherman wrote:
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
You're right, it supposedly has less than half the write speed, and
that probably won't matter for me, but I can't find a 64GB version of it
for sale, and the 80GB
I've started reading up on this, and I know I have alot more reading to
do, but I've already got some questions... :)
I'm not sure yet that it will help for my purposes, but I was
considering buying 2 SSD's for mirrored boot devices anyway.
My main question is: Can a pair of say 60GB SSD's
chris wrote:
Thanks for your reply.
What if I wrap the ram in a sheet of lead?;-)
(hopefully the lead itself won't be radioactive)
I've been looking at the same thing recently.
I found these 4 AM3 motherboard with optional ECC memory support. I don't
know whether this means ECC works,
Erik Ableson wrote:
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo
Hi all,
I'm setting up a new fileserver, and while I'm not planning on enabling
CIFS right away, I know I will in the future.
I know there are several ZFS properties or attributes that affect how
CIFS behaves. I seem to recall that at least one of those needs to be
set early (like when the
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desireable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait
Darren J Moffat wrote:
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desireable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that
it was
faster
Joep Vesseur wrote:
All,
I was wondering why zfs destroy -r is so excruciatingly slow compared to
parallel destroys.
SNIP
while a little handy-work with
# time for i in `zfs list | awk '/blub2\// {print $1}'` ;\
do ( zfs destroy $i ) ; done
yields
real    0m8.191s
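The serial loop can be pushed further by backgrounding the destroys; a sketch (the dataset name 'tank/blub2' is a placeholder, and note that children must still be destroyed before their parents, so only siblings at one depth are safely parallel):

```shell
# destroy the immediate children of tank/blub2 in parallel
# ('tank/blub2' is a placeholder; -d 1 limits the listing to one
# level, and the first line -- the parent itself -- is skipped)
for fs in $(zfs list -H -o name -d 1 -r tank/blub2 | tail -n +2); do
  zfs destroy "$fs" &
done
wait
```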
On 2/20/2009 9:33 AM, Gary Mills wrote:
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote:
Gary Mills wrote:
Should I file an RFE for this addition to ZFS? The concept would be
to run ZFS on a file server, exporting storage to an application
server where ZFS also runs
On 2/13/2009 5:58 AM, Ross wrote:
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to zpool export
them before you unplug.
I think there is a subcommand of cfgadm you should run to notify
Solaris that you intend to unplug the
On 2/10/2009 3:37 PM, D. Eckert wrote:
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over
On 2/10/2009 4:48 PM, Roman V. Shaposhnik wrote:
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
These posts do sound like someone who is blaming their parents after
breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that bad
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed to use
the larger disk, even if its usable space is truncated.
I don't dispute that, but I don't understand it
On 2/11/2009 12:35 PM, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to handle
a hot
On 2/11/2009 12:57 PM, Tomas Ögren wrote:
On 11 February, 2009 - Kyle McDonald sent me these 1,2K bytes:
On 2/11/2009 12:11 PM, Bob Friesenhahn wrote:
My understanding is that 1TB is the maximum bootable disk size since
EFI boot is not supported. It is good that you were allowed
On 2/11/2009 1:03 PM, Kyle McDonald wrote:
Since you can't mix EFI and FDisk partition tables, and you can't have
more than one Solaris fdisk partition (that I'm aware of anyway) it
looks like 1TB is all you can give Solaris at the moment.
I should have qualified that with If you need
On 2/11/2009 1:50 PM, Richard Elling wrote:
Solaris can now (as of b105) use extended partitions.
http://www.opensolaris.org/os/community/on/flag-days/pages/2008120301/
That's interesting, but I'm not sure how it helps.
It's my understanding that Solaris doesn't like it if more than one of
On 2/10/2009 2:50 PM, D. Eckert wrote:
(..)
Dave made a mistake pulling out the drives with out exporting them first.
For sure UFS/XFS/EXT4/.. don't like that kind of operation either, but only
with ZFS do you risk losing ALL your data.
that's the point!
(...)
I did that many times after
On 2/10/2009 2:54 PM, D. Eckert wrote:
I disagree, see posting above.
ZFS just accepts it 2 or 3 times. after that, your data are passed away to
nirvana for no reason.
And it should be legal, to have an external USB drive with a ZFS. with all
respect, why should a user always care for
Hi Dave,
Having read through the whole thread, I think there are several things
that could all be adding to your problems.
At least some of which are not related to ZFS at all.
You mentioned the ZFS docs not warning you about this, and yet I know
the docs explicitly tell you that:
1. While a
D. Eckert wrote:
too many words wasted, but not a single word, how to restore the data.
I have read the man pages carefully. But again: there's nothing said, that on
USB drives zfs umount pool is not allowed.
It is allowed. But it's not enough. You need to read both the 'zpool '
and
I jumpstarted my machine with sNV b106, and installed with ZFS root/boot.
It left me at a shell prompt in the JumpStart environment, with my ZFS
root on /a.
I wanted to try out some things that I planned on scripting for the
JumpStart to run, one of these was creating a new ZFS pool from the
On 1/28/2009 12:16 PM, Nicolas Williams wrote:
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 27 Jan 2009, Frank Cusack wrote:
i was wondering if you have a zfs
Brad Hudson wrote:
Thanks for the response Peter. However, I'm not looking to create a
different boot environment (bootenv). I'm actually looking for a way within
JumpStart to separate out the ZFS filesystems from a new installation to have
better control over quotas and reservations for
Tim Haley wrote:
Ross wrote:
While it's good that this is at least possible, that looks horribly
complicated to me.
Does anybody know if there's any work being done on making it easy to remove
obsolete
boot environments?
If the clones were promoted at the time of their
Ian Collins wrote:
Stephen Le wrote:
Is it possible to create a custom Jumpstart profile to install Nevada
on a RAID-10 rpool?
No, simple mirrors only.
Though a finish script could add additional simple mirrors to create
the config his example would have created.
Pretty sure
kristof wrote:
I don't think this is possible.
I already tried to add extra vdevs after install, but I got an error message
telling me that multiple vdevs for rpool are not allowed.
K
Oh. Ok. Good to know.
I always put all my 'data' diskspace in a separate pool anyway to make
Douglas R. Jones wrote:
4) I change the auto.ws map thusly:
Integration chekov:/mnt/zfs1/GroupWS/
Upgradeschekov:/mnt/zfs1/GroupWS/
cstools chekov:/mnt/zfs1/GroupWS/
com chekov:/mnt/zfs1/GroupWS
This is standard NFS behavior (prior to NFSv4). Child
Darren J Moffat wrote:
John Cecere wrote:
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area and the dump
device.
And indeed when I try to use a zvol as both, I get:
zvol cannot be used as a swap device and a dump device
My question
Richard Elling wrote:
Bob Friesenhahn wrote:
On Tue, 23 Sep 2008, Eric Schrock wrote:
See:
http://www.opensolaris.org/jive/thread.jspa?threadID=73740tstart=0
I must apologize for anoying everyone. When Richard Elling posted the
GreenBytes link without saying
Paul Raines wrote:
I am having a very odd problem on one of our ZFS filesystems
On certain files, when accessed on the Solaris server itself locally
where the zfs fs sits, we get an error like the following:
[EMAIL PROTECTED] # ls -l
./README: Value too large for defined data type
total 36
Daniel Rock wrote:
Kenny schrieb:
2. c6t600A0B800049F93C030A48B3EA2Cd0
SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0
SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]
Disk 2: 931GB
Kenny wrote:
How did you determine from the format output the GB vs MB amount??
Where do you compute 931 GB vs 932 MB from this??
2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0
/scsi_vhci/[EMAIL PROTECTED]
It's in the part
mike wrote:
Sorry :)
Okay, so you can create a zpool from multiple vdevs. But you cannot
add more vdevs to a zpool once the zpool is created. Is that right?
Nope. That's exactly what you *CAN* do.
So say today you only really need 6TB usable, you could go buy 8 of your
1TB disks,
and setup
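The grow-later approach being described looks roughly like this (a sketch; the pool name and device names are made up):

```shell
# start with one 8-disk raidz2 vdev covering today's needs
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                          c1t4d0 c1t5d0 c1t6d0 c1t7d0
# later, stripe a second raidz2 vdev into the same pool;
# ZFS spreads new writes across both vdevs
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                       c2t4d0 c2t5d0 c2t6d0 c2t7d0
```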
mike wrote:
Or do smaller groupings of raidz1's (like 3 disks) so I can remove
them and put 1.5TB disks in when they come out for instance?
I wouldn't reduce it to 3 disks (you should almost use mirrors if you go
that low). Remember, while you can't take a drive out of a vDev, or a vDev out of a