In general, your backup software should handle making
incremental dumps, even from a split mirror. What are
you using to write data to tape? Are you simply
dumping the whole file system, rather than using
standard backup software?
We are using Veritas Netbackup 5 MP4. It is performing a
Sickness, which case are you using? I've been looking for something that
supports many HDDs. Thanks.
Thanks, did it come with the hardware to mount HDDs in 5.25" slots?
-bash-3.2$ zfs share tank
cannot share 'tank': share(1M) failed
-bash-3.2$
How do I figure out what's wrong?
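If it helps, the first things I'd look at (assuming a stock install; these are the standard commands, nothing ZFS-specific):
# svcs -xv                        (any services down, and why)
# svcs -a | grep nfs              (are the NFS services even installed and online)
# grep tank /etc/dfs/sharetab     (did the share actually land in sharetab)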
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 1010M 227G 1010M /tank
# zfs create tank/storage
cannot share 'tank/storage': share(1M) failed
filesystem successfully created, but not shared
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 1010M 227G 1010M
# zfs set sharenfs=on tank/storage
cannot share 'tank/storage': share(1M) failed
#
I'm fairly certain I installed the NFS stuff, and it looks like the RPC stuff
is running:
http://www.student.cs.uwaterloo.ca/~jouellet/netstat.txt
/storage
Invalid protocol specified: nfs
#
Jason, are you getting the same return value when you try to set sharesmb=on?
# zfs set sharesmb=on tank/storage
cannot share 'tank/storage': share(1M) failed
SMB: Unable to enable service
#
I'll worry about SMB when I get NFS working.
You're missing the server bits, check for the following packages:
SUNWnfsskr, SUNWnfssr, and SUNWnfssu
-- richard
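If it helps, a quick way to confirm and fix that (the Solaris_11/Product path is a guess at typical SXCE media; adjust for whatever yours mounts as):
# pkginfo SUNWnfsskr SUNWnfssr SUNWnfssu
# pkgadd -d /cdrom/cdrom0/Solaris_11/Product SUNWnfsskr SUNWnfssr SUNWnfssu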
I added those packages and rebooted, then did
# svcadm enable network/nfs/server
but NFS still doesn't work:
# zfs share tank/storage
cannot share 'tank/storage': share(1M) failed
I got all the nfs/server dependencies online, but nfs/server is disabled because
no NFS filesystems are shared:
# svcs -l nfs/server
fmri         svc:/network/nfs/server:default
name         NFS server
enabled      false (temporary)
state        disabled
next_state   none
state_time   Sun Feb 17
Try sharing something else, maybe:
share -F nfs /mnt
After that, you should see the services started.
Once you get that to work, then try sharing the
zfs file systems. Your problems aren't zfs related...
at least not yet.
-- richard
# share -F nfs /mnt
share: illegal option -- F
Usage:
That doesn't work. It looks like something may be corrupt; maybe something didn't
get installed properly or I have a bad disc, because for some reason my share
command doesn't have an -F option.
I'm going to get a new disc and reinstall everything.
Thanks for the help, everyone.
BTW, my machine doesn't have a DNS name, so I had to enter a phony one to get
nfs/server online.
Can that have any ill effects?
sharing tank/nfs/vmware and the zfs
filesystems mounted into that directory tree? It would make for a very neat
solution if it did.
If not I can get around it with one nfs mount per virtual machine, but that is
extra overhead I was hoping to avoid.
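For what it's worth, the sort of layout I'm picturing (a sketch only; whether the client actually descends into the child filesystems under the parent mount depends on it supporting NFSv4 mirror mounts, so treat this as unverified):
# zfs create tank/nfs/vmware
# zfs set sharenfs=on tank/nfs/vmware        (the child filesystems inherit sharenfs)
# zfs create tank/nfs/vmware/vm1
# zfs create tank/nfs/vmware/vm2
With an NFSv3 client each child just shows up as an empty directory under the parent mount, which is what forces the one-mount-per-VM workaround.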
Thanks in advance
Jason
I've been looking to build my own cheap SAN to explore HA scenarios with VMware
hosts, though not for a production environment. I'm new to opensolaris but I
am familiar with other clustered HA systems. The features of ZFS seem like
they would fit right in with attempting to build an HA
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically they abandoned using that because there was
a huge performance hit using ZFS over NFS. I didn’t get the specifics but his
group is usually pretty sharp. I’ll have to check back with him.
Specifically, I remember Storage VMotion support arriving last on NFS, and likewise
jumbo frames. That's just the impression I get from past features; perhaps they are
doing better with that now.
I know the performance problem had specifically to do with ZFS and the way it
handled something. I know lots of
So aside from the NFS debate, would this 2 tier approach work? I am a bit
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to
the VMware host as a raw device. Is that possible or is my understanding
wrong? Also could it be defined as a clustered resource?
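The way I picture the storage end (a sketch only; device names, sizes and the COMSTAR steps are assumptions, and older builds used the shareiscsi property instead) is a zvol carved out of the raidz2 pool and exported as an iSCSI LUN, so the redundancy lives in the pool and the VMware host just sees a plain block device:
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zfs create -V 500g tank/esx-lun0
# svcadm enable stmf
# svcadm enable -r svc:/network/iscsi/target:default
# sbdadm create-lu /dev/zvol/rdsk/tank/esx-lun0
# stmfadm add-view <GUID printed by sbdadm>
# itadm create-target
Whether that LUN can then be treated as a clustered resource I'll leave to people who have actually run it under a cluster framework.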
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The idea
being that I can just throw up another x86/x64 box to add more storage. Has
anyone tried anything similar?
I guess I should come at it from the other side:
If you have 1 iscsi target box and it goes down, you're dead in the water.
If you have 2 iscsi target boxes that replicate and one dies, you are OK but
you then have to have a 2:1 total storage to usable ratio (excluding expensive
shared disks).
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?
Just recently, it would seem a poorly timed unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0 byte
nsswitch.conf (which when
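In the absence of such an API, the usual userland workaround is write-to-temp-then-rename, since rename() within a filesystem replaces the file atomically (a sketch, not what NWAM actually does; fsync'ing the new file before the mv is the belt-and-braces version):
# cp /etc/nsswitch.conf /etc/nsswitch.conf.new     (work on a copy)
# vi /etc/nsswitch.conf.new
# mv /etc/nsswitch.conf.new /etc/nsswitch.conf     (you get the old file or the new one, never 0 bytes)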
I've done mpxio over multiple ip links in linux using multipathd. Works just
fine. It's not part of the initiator but accomplishes the same thing.
It was a linux IET target. Need to try it here with a COMSTAR target.
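On the Solaris initiator side the rough equivalent (a sketch; the addresses are placeholders, and it assumes MPxIO is enabled for iSCSI, which I believe is the default) would be to let the initiator discover the same target over both links and let scsi_vhci collapse the paths:
# iscsiadm modify discovery --sendtargets enable
# iscsiadm add discovery-address 192.168.10.1:3260
# iscsiadm add discovery-address 192.168.20.1:3260
# mpathadm list lu                  (should show one LU with two operational paths)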
-Original Message-
From: Ross Walker rswwal...@gmail.com
Sender:
On Tue, Dec 21, 2010 at 7:58 AM, Jeff Bacon ba...@walleyesoftware.comwrote:
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (writes are done via
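For whatever it's worth while people answer, the ARC half of the picture is easy to watch with the standard kstats, and ::memstat breaks out ZFS file data on recent builds (the names below are the stock ones, nothing custom):
# kstat -p zfs:0:arcstats:size      (current ARC size)
# kstat -p zfs:0:arcstats:c         (ARC target size)
# echo ::memstat | mdb -k           (kernel / ZFS file data / anon / page cache breakdown)
The mmap()'d pages themselves live in the page cache, which as I understand it is why the same data can end up cached twice.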
Use the Solaris cp (/usr/bin/cp) instead
On Wed, Mar 16, 2011 at 8:59 AM, Fred Liu fred_...@issi.com wrote:
It is from ZFS ACL.
Thanks.
Fred
From: Fred Liu
Sent: Wednesday, March 16, 2011 9:57 PM
To: ZFS Discussions
Subject: GNU 'cp -p' can't work well with ZFS-based-NFS
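For anyone hitting the same thing, the difference is easy to see with the native tools (file names here are arbitrary); the Solaris userland understands NFSv4/ZFS ACLs while the GNU utilities only see the POSIX mode bits:
# /usr/bin/ls -V file.txt               (shows the full ACL)
# /usr/bin/cp -p file.txt /tank/dir/    (Solaris cp -p carries the ACL across)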
versions of everything.
On Tue, Oct 16, 2012 at 2:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
Behalf Of Jason
When you update your install
Hi,
One of my server's zfs faulted and it shows following:
        NAME        STATE     READ WRITE CKSUM
        backup      UNAVAIL      0     0     0  insufficient replicas
          raidz2-0  UNAVAIL      0     0     0  insufficient replicas
            c4t0d0  ONLINE       0     0     0
! ;-)
-Jason =:^/
--
[EMAIL PROTECTED] ANU Supercomputer Facility
APAC Grid ProgramLeonard Huxley Bldg 56, Mills Road
Ph: +61 2 6125 5449 Australian National University
Fax: +61 2 6125 8199 Canberra, ACT, 0200, Australia
I am seeing the same behavior on my SunBlade 2500 while running firefox. I
think my disks are
quieter than yours though, because I don't really notice the difference that
much.
) results where we had a RAIDZ pool containing 9
volumes vs other combinations (we also did 8,7,6,5,4 volumes. Groups
of 3 and 9 seemed to form the boundary cases).
Regards, Jason
More detailed at http://svn.joyent.com/public/experiments/equal-sun-
iscsi-zfs-fun.txt
## Hardware Setup
No argument from me. For better or for worse, most of the customers I
speak with minimize their OS distributions. The more we can accurately
describe dependencies within our current methods, the better.
/jason
Jim Connors wrote:
Included below is a thread which dealt with trying
In Nevada, it showed up in build 42. The current community edition
(at least when I updated last week) was 43 and spares are in http://
flickr.com/photo_zoom.gne?id=200572122&size=m
- Jason
NAME       SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
backups4  3.81T  3.39T   434G   88%  ONLINE  -
Regards, J
Jason A. Hoffman, PhD | Founder, CTO, Joyent Inc.
Applications = http://joyent.com/
Hosting = http://textdrive.com
have the same experience, but we were very
impressed. Particularly, going between different Solaris versions and different
CPU architectures.
Best Regards,
Jason
Anxiously anticipating the ability to boot off zfs, I know there's been some
talk about leveraging some of the snapshotting/cloning features in conjunction
with upgrades and patches.
What I am really hoping for is the ability to clone /, patch the clone, then
boot off the clone (by doing a
            c0t3d0  ONLINE       0     0     0
        spares
          c5t1d0    AVAIL
          c5t2d0    AVAIL
          c5t3d0    AVAIL
errors: No known data errors
Regards, Jason
Jason A. Hoffman, PhD | Founder, CTO, Joyent Inc.
Applications
When messing around with ZFS trying to break it, I created a new pool using
files on an existing ZFS filesystem. It seemed to work fine until I created a
snapshot of the original filesystem and then tried to destroy the pool using
the files. The system appeared to deadlock and had to be
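For anyone who wants to reproduce it, the kind of setup involved is along these lines (paths invented; file-backed vdevs are a test-only feature anyway):
# zfs create tank/files
# mkfile 256m /tank/files/v0 /tank/files/v1
# zpool create filepool /tank/files/v0 /tank/files/v1
# zfs snapshot tank/files@snap1
# zpool destroy filepool                (this is roughly where it wedged)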
After importing some pools after a re-install of the OS, I hit that
"Permission denied" problem. I figured out I could unmount, chmod, and mount to
fix it, but that wouldn't be a good situation on a production box. Is there
any way to fix this problem without unmounting?
I should clarify. Say I have a zfs with the mount point /u00 that I import on
the system. When it creates the /u00 directory on the UFS root, it's created
with 700, and then the zfs is mounted and it appears to have the permissions of
the root of the zfs. 755 in this case.
But, if a
Which part is the bug? The crash or allowing pools of files that are on a zfs?
A bit off the subject, but what would be the advantage, for virtualization, of using a
pool of files versus just creating another zfs filesystem on an existing pool? My purpose
for using the file pools was to experiment and learn about any quirks before I
go production. It let me do things like set up a large
available,
Jan 23 18:51:38 newponit so panic to ensure data integrity.
Regards,
Jason
I tried it and it worked great. Even cloned my boot environment, and BFU'd the
clone and it seemed to work (minus a few unrelated annoyances I haven't tracked
down yet). I'm quite excited about the possibilities :)
I am wondering though, is it possible to skip the creation of the pool and
I've had at least some success (tried it once so far) doing a BFU to a cloned
filesystem from a b62 zfs root system; I could probably document that if
there is interest.
I have not tried taking a new ISO and installing the new packages on top of a
cloned filesystem though.
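The clone step itself is just the usual snapshot/clone dance (a sketch; the dataset names depend on however your zfs root is laid out):
# zfs snapshot tank/rootfs@pre-bfu
# zfs clone tank/rootfs@pre-bfu tank/rootfs-bfu
# zfs set mountpoint=/bfu-root tank/rootfs-bfu
then BFU against /bfu-root and point the boot entry (bootfs / menu.lst) at the clone before rebooting.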
On 5/31/07, Lori Alt
Just playing around a bit w/ zfs + zfs root (no particularly good
reason other than to just mess around a bit), and I hit an issue that
I suspect is simple to fix, but I cannot seem to figure out what that
is.
I wanted to try (essentially) doing a very manual install to an empty
zfs filesystem.
On 9/13/07, Brian Hechinger [EMAIL PROTECTED] wrote:
On Thu, Sep 13, 2007 at 10:54:41AM -0600, Lori Alt wrote:
In-place upgrade of zfs datasets is not supported and probably
never will be (LiveUpgrade will be the way to go with zfs because
the cloning features of zfs make it a natural).
On 9/25/07, Gregory Shaw [EMAIL PROTECTED] wrote:
On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
Dale Ghent wrote:
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
The problem with this is that wrong information is much worse than no
information; there is no way to automatically
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it merely ignores any zfs entries in menu.lst, and
apparently instead boots the first entry it thinks it can boot. I ran
into this myself due to some boneheaded mistakes while doing a very
manual zfs / install at the
elaborate.
Lori
Jason King wrote:
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it merely ignores any zfs entries in menu.lst, and
apparently instead boots the first entry it thinks it can boot. I ran
into this myself due to some boneheaded mistakes while doing
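An easy sanity check before rebooting into that situation again (nothing grub-specific, just comparing versions):
# zpool upgrade          (shows the version each pool is at)
# zpool upgrade -v       (lists the versions the installed bits understand; grub normally matches the build they came from)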
I am using ZFS on FreeBSD 7.0_beta3. This is the first time I have
used ZFS, and I have run into something that I am not sure is
normal, but I am very concerned about.
SYSTEM INFO:
hp 320s (storage array)
12 disks (750GB each)
2GB RAM
1GB flash drive (running the OS)
When I take a disk
Edit the kernel$ line and add '-k' at the end. That should drop you
into the kernel debugger after the panic (typing '$q' will exit the
debugger, and resume whatever it was doing -- in this case likely
rebooting).
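For example, an entry edited that way might end up looking roughly like this (the kernel path varies by build; the only change is the trailing -k, and any -B options already there stay put):
kernel$ /platform/i86pc/kernel/$ISADIR/unix -k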
On Dec 18, 2007 6:26 PM, Michael Hale [EMAIL PROTECTED] wrote:
Begin forwarded
On Thu, May 8, 2008 at 8:59 PM, EchoB [EMAIL PROTECTED] wrote:
I cannot recall if it was this (-discuss) or (-code) but a post a few
months ago caught my attention.
In it someone detailed having worked out the math and algorithms for a
flexible expansion scheme for ZFS. Clearly this is very
On Wed, May 14, 2008 at 6:42 PM, Dave Koelmeyer
[EMAIL PROTECTED] wrote:
Hi All, first time caller here, so please be gentle...
I'm on OpenSolaris 2008.05, and following the really useful guide here to
create a CIFs share in domain mode:
On Tue, Jul 1, 2008 at 8:10 AM, Mike Gerdts [EMAIL PROTECTED] wrote:
On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
Mike Gerdts wrote:
On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat [EMAIL PROTECTED]
wrote:
Instead we should take it completely out of their hands
On Tue, Jul 15, 2008 at 4:17 AM, Ross [EMAIL PROTECTED] wrote:
Well I haven't used a J4500, but when we had an x4500 (Thumper) on loan they
had Solaris pretty well integrated with the hardware. When a disk failed, I
used cfgadm to offline it and as soon as I did that a bright blue Ready to
On Aug 3, 2008, at 8:46 PM, Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt k.n...@zonnet.nl wrote:
On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
dma...@ee.ryerson.ca wrote:
On Jan 6, 2009, at 14:21, Rob wrote:
Obviously ZFS is ideal for large databases served out via
application level or web servers. But what other
On Fri, Feb 20, 2009 at 2:59 PM, Darin Perusich
darin.perus...@cognigencorp.com wrote:
Hello All,
I'm in the process of migrating a file server from Solaris 9, where
we're making extensive use of POSIX-ACLs, to ZFS and I have a question
that I'm hoping someone can clear up for me. I'm using
On Mon, Mar 9, 2009 at 5:31 PM, Jan Hlodan jan.hlo...@sun.com wrote:
Hi Tomas,
thanks for the answer.
Unfortunately, it didn't help much.
However I can mount all the file systems, but the system is broken - the desktop
won't come up:
Could not update ICEauthority file /.ICEauthority
There is a problem
Is there a solaris build or any information on how you're compiling it
on solaris something?
Regards, Jason
On May 14, 2009, at 5:17 AM, Shehjar Tikoo wrote:
Hi Folks!
GlusterFS is a clustered file system that runs on commodity
off-the-shelf hardware, delivering multiple times
Nevermind, found it at
http://www.gluster.org/docs/index.php/Install_guide#Solaris
-J
On May 14, 2009, at 1:15 PM, Jason A. Hoffman wrote:
Is there a solaris build or any information on how you're compiling
it on solaris something?
Regards, Jason
On May 14, 2009, at 5:17 AM, Shehjar
On Tue, Jun 30, 2009 at 1:36 PM, Erik Trimbleerik.trim...@sun.com wrote:
Bob Friesenhahn wrote:
On Tue, 30 Jun 2009, Neal Pollack wrote:
Actually, they do quite a bit more than that. They create jobs, generate
revenue for battery manufacturers, and tech's that change batteries and do
PM
Mark J Musante wrote:
On Tue, 30 Jun 2009, John Hoogerdijk wrote:
I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
device. To see how well it works, I ran bonnie++, but never saw any
I/Os on the log device (using iostat -nxce). Pool status is good -
no issues or
John Hoogerdijk wrote:
so i guess there is some porting to do - no O_DIRECT in solaris...
anyone have bonnie++ 1.03e ported already?
For your purposes, couldn't you replace O_DIRECT with O_SYNC as a hack?
If you're trying to benchmark the log device, the important thing is to
generate
and thousands and thousands of zpools. I started
collecting such zpools back in 2005. None have been lost.
Best regards, Jason
Jason A. Hoffman, PhD | Founder, CTO, Joyent Inc.
ja...@joyent.com
http://joyent.com/
mobile: +1-415-279-6196
This is an odd question, to be certain, but I need to find out what size a 1.5
TB drive is to help me create a sparse/fake array.
Basically, if I could have someone do a dd if=1.5 TB disk of=somefile and
then post the ls -l size of that file, it would greatly assist me.
Here's what I'm doing:
Thanks for the reply!
The reason I'm not waiting until I have the disks is mostly because it will
take me several months to get the funds together and in the meantime, I need
the extra space 1 or 2 drives gets me. Since the sparse files will only take
up the space in use, if I've migrated 2
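A sketch of the sparse stand-in idea (the byte count is just a commonly quoted 1.5 TB capacity, 2,930,277,168 sectors x 512 bytes, not confirmed for any particular model, which is exactly why an ls -l from a real drive helps: the fake file must not be larger than the disk that later replaces it):
# mkfile -n 1500301910016 /tank/fake-1.5tb
# zpool create newpool raidz c5t0d0 c5t1d0 /tank/fake-1.5tb     (zpool may want -f for the disk/file mix)
and when the real drive shows up:
# zpool replace newpool /tank/fake-1.5tb c5t2d0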
am I doing wrong? Why won't the whole thing copy? I've tried an
incremental from origin to @now, but it still doesn't work right...
Thanks for all your help.
-Jason
It does seem to come up regularly... perhaps someone with access could
throw up a page under the ZFS community with the conclusions (and
periodic updates as appropriate)..
On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble erik.trim...@sun.com wrote:
Nathan wrote:
While I am about to embark on
So, my Areca controller has been complaining via email of read errors for a
couple days on SATA channel 8. The disk finally gave up last night at 17:40.
I got to say I really appreciate the Areca controller taking such good care of
me.
For some reason, I wasn't able to log into the server
there, but nothing I've found is comprehensive.
Jason
On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock eric.schr...@sun.com wrote:
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi Jason,
I think you are asking how do you tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0?
I
that the history records
internal scrubs, but not resilvers. It also doesn't record failed
commands, or disk failures in a zpool. It would be sweet to have a
line that said something like "marking vdev /dev/dsk/c8t7d0s0 as
UNAVAIL due to X read errors in Y minutes". Then we could really see
what happened.
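For what it's worth, the -i and -l flags squeeze a bit more out of it (internal events plus user/host/time detail), though still nothing about resilvers or failing disks as far as I can see:
# zpool history -il tank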
Jason
On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins i...@ianshome.com wrote:
Dale Ghent wrote:
So looking at the README for patch 14144[45]-09, there are a ton of ZFS
fixes and feature adds.
The big features are already described in the update 8 release docs, but
would anyone in-the-know care to
On Thu, Oct 15, 2009 at 9:25 AM, Enda O'Connor enda.ocon...@sun.com wrote:
Jason King wrote:
On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins i...@ianshome.com wrote:
Dale Ghent wrote:
So looking at the README for patch 14144[45]-09, there are a ton of ZFS
fixes and feature adds.
The big
its beefs with Sun does). But I can
live with detaching them if I have to.
Another thing that would be nice would be to receive notification of
disk failures from the OS via email or SMS (like the vendor I
previously alluded to), but I know I'm talking crazy now.
Jason
On Thu, Oct 22, 2009
On Sun, Nov 8, 2009 at 7:55 AM, Robert Milkowski mi...@task.gda.pl wrote:
fyi
Robert Milkowski wrote:
XXX wrote:
| Have you actually tried to roll-back to previous uberblocks when you
| hit the issue? I'm asking as I haven't yet heard about any case
| of the issue witch was not solved
with the internal sata drives and there are no
issues transferring data on those ports. What should I try to isolate and
hopefully resolve the issue?
Thanks,
Jason
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Thu, 3 Dec 2009, Erik Ableson wrote:
Much depends on the contents of the files. Fixed size binary blobs that
align nicely with 16/32/64k boundaries, or variable sized text files.
Note that the default zfs
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem
On Wed, Jan 27, 2010 at 08:25:48PM -0800, borov wrote:
SAS disks are more expensive. Besides, there are no 2 TB SAS 7200 rpm drives on the market
yet.
Seagate released a 2 TB SAS drive last year.
http://www.seagate.com/ww/v/index.jsp?locale=en-USvgnextoid=c7712f655373f110VgnVCM10f5ee0a0aRCRD
--
Jason
On Wed, Feb 10, 2010 at 6:45 PM, Paul B. Henson hen...@acm.org wrote:
We have an open bug which results in new directories created over NFSv4
from a linux client having the wrong group ownership. While waiting for a
patch to resolve the issue, we have a script running hourly on the server
On Sat, Feb 13, 2010 at 9:58 AM, Jim Mauro james.ma...@sun.com wrote:
Using ZFS for Oracle can be configured to deliver very good performance.
Depending on what your priorities are in terms of critical metrics, keep in
mind
that the most performant solution is to use Oracle ASM on raw disk
My problem is when you have 100+ luns divided between OS and DB,
keeping track of what's for what can become problematic. It becomes
even worse when you start adding luns -- the chance of accidentally
grabbing a DB lun instead of one of the new ones is non-trivial (then
there's also the chance
If you're doing anything with ACLs, the GNU utilities have no
knowledge of ACLs, so GNU chmod will not modify them (nor will GNU ls
show ACLs), you need to use /bin/chmod and /bin/ls to manipulate them.
It does sound though that GNU chmod is explicitly testing and skipping
any entry that's a link
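i.e. something like (the ACE and user are just examples):
# /bin/ls -v file                                   (list the full ACL)
# /bin/chmod A+user:webservd:read_data:allow file   (prepend an ACE)
# /bin/chmod A- file                                (strip back to a trivial ACL)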
Could also try /usr/gnu/bin/ls -U.
I'm working on improving the memory profile of /bin/ls (as it gets
somewhat excessive when dealing with large directories), which as a
side effect should also help with this.
Currently /bin/ls allocates a structure for every file, and doesn't
output anything
On Wed, Mar 31, 2010 at 7:53 PM, Erik Trimble erik.trim...@oracle.com wrote:
Brett wrote:
Hi Folks,
I'm in a shop that's very resistant to change. The management here are
looking for major justification of a move away from ufs to zfs for root file
systems. Does anyone know if there are any
On Thu, Apr 1, 2010 at 9:06 AM, David Magda dma...@ee.ryerson.ca wrote:
On Wed, March 31, 2010 21:25, Bart Smaalders wrote:
ZFS root will be the supported root filesystem for Solaris Next; we've
been using it for OpenSolaris for a couple of years.
This is already supported:
Starting in the
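Once a root pool exists, the migration from UFS is basically a Live Upgrade one-liner (a sketch; the BE and slice names are placeholders, and root pools have to live on a slice):
# zpool create rpool c1t0d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6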
I have been searching this forum and just about every ZFS document I can find,
trying to find the answer to my questions. But I believe the answer I am
looking for is not going to be documented, and is probably best learned from
experience.
This is my first time playing around with OpenSolaris
Thank you for the replies, guys!
I was actually already planning to get another 4 GB of RAM for the box right
away anyway, but thank you for mentioning it! As there appear to be a couple
of ways to skin the cat here, I think I am going to try both a 14-spindle RAIDZ2
and 2 x 7 RAIDZ2
Ahh,
Thank you for the reply, Bob, that is the info I was after. It looks like I will
be going with the 2 x 7 RAIDZ2 option.
And just to clarify: as far as expanding this pool in the future, my only option
is to add another 7-spindle RAIDZ2 array, correct?
Thanks for all the help, guys!
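In command form the plan looks something like this (device names invented):
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
and later, to grow it, adding another whole raidz2 vdev is indeed the option:
# zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0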
I am booting from a single 74 GB WD Raptor attached to the motherboard's onboard
SATA port.
Freddie,
now you have brought up another question :) I had always assumed that I would
just use OpenSolaris for this file server build, as I had not actually done
any research into other operating systems that support ZFS. Does anyone
have any advice as to whether I should be
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested I look into
www.nexenta.org, and I must say it is quite interesting. Someone correct me if I
am wrong, but it looks like it is OpenSolaris-based and has basically
Well, I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community Edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 x 7 RAIDZ2 vdevs in one
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the snapdir property ('directory' ?) that provides this behavior (with
suitable warnings
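For reference, today the property only toggles visibility of the .zfs directory at the root of each filesystem:
# zfs set snapdir=visible tank/home
# ls /tank/home/.zfs/snapshot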
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).
On Wed, Apr 21, 2010 at 6:01 PM, Brandon High bh...@freaks.com wrote:
On Wed, Apr 21, 2010 at 10:38 AM, Edward Ned
If you're just wanting to do something like the netapp .snapshot
(where it's in every directory), I'd be curious if the CIFS shadow
copy support might already have done a lot of the heavy lifting for
this. That might be a good place to look
On Mon, May 3, 2010 at 7:25 PM, Peter Jeremy
Well the GUI I think is just Windows, it's all just APIs that are
presented to windows.
On Mon, May 3, 2010 at 10:16 PM, Edward Ned Harvey
solar...@nedharvey.com wrote:
From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
Behalf Of Jason King
If you're just wanting to do
In the meantime, you can use autofs to do something close to this if
you like (sort of like the pam_mkhomedir module) -- you can have it
execute a script that returns the appropriate auto_user entry (given a
username as input). I wrote one a long time ago that would do a zfs
create if the dataset
Jason
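A minimal version of that sort of executable map (a from-memory sketch, not the original script; the pool/dataset layout and the auto_master entry are assumptions) could look like:
#!/bin/ksh
# /etc/auto_home_zfs - executable autofs map; automountd passes the key (username) as $1
user=$1
ds=tank/home/$user
if ! /usr/sbin/zfs list -H -o name "$ds" > /dev/null 2>&1; then
        /usr/sbin/zfs create "$ds" || exit 1
fi
echo "localhost:/tank/home/$user"
Make the map executable and point auto_master at it (e.g. "/home  /etc/auto_home_zfs"), and the dataset gets created the first time the home directory is referenced.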
Hi,
something like this
Disk #   Slice 1   Slice 2
1        raid5     raid0
2        raid5     raid0
3        raid5     raid0
I want to have some fast scratch space (raid0) and some protected (raidz)
Greetings
J
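Something along these lines should give you both pools (slice numbers are assumptions; carve s0 and s1 on each disk with format first):
# zpool create safe raidz c1t1d0s0 c1t2d0s0 c1t3d0s0
# zpool create scratch c1t1d0s1 c1t2d0s1 c1t3d0s1      (plain dynamic stripe, no redundancy)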
OK,
I got it working; however, I set up two partitions on each disk using fdisk
inside of format.
What's the difference compared to slices? (I checked with gparted.)
Bye