S,
Are you sure you have MPXIO turned on? I haven't dealt with Solaris
for a while (will again soon as I get some virtual servers setup) but in
the past you had to manually turn it on. I believe the path was
/kernel/drv/scsi_vhci.conf (I may be missing some of the path) and you
changed the
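From memory the relevant knobs look roughly like this (untested just now, so
check scsi_vhci.conf(4) and stmsboot(1M) before trusting it):

# in /kernel/drv/scsi_vhci.conf, then reboot
mpxio-disable="no";

# or, on newer Solaris 10, let stmsboot edit the config and reboot for you
stmsboot -e
# afterwards, verify that the multipathed LUNs show up
mpathadm list lu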
of
undetected errors compared to standard error checking performed by
TCP/IP. The CRC bits may be added to either Data Digest, Header Digest,
or both.
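If memory serves, the Solaris initiator exposes those digests through
iscsiadm, something along these lines (I'm quoting the option letters from
memory, so double-check iscsiadm(1M)):

# enable CRC32 header and data digests for this initiator node
iscsiadm modify initiator-node -h CRC32 -d CRC32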
DataCore has been really good at implementing all the features of the
'high end' arrays for the 'low end' price point.
Dave
Richard Elling wrote:
You might want to look into the products from a company called
DataCore Software, http://datacore.com/products/prod_home.asp. I've
used them and they are great stuff. They make very high performing
iSCSI and FC storage controllers out of leveraging commodity hardware,
like the one
or recommendations?
I have a setup similar to this. The most important thing I can recommend
is to create a mirrored zpool from the iscsi disks.
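Something along these lines (device names made up; use whatever your iSCSI
LUNs show up as in format):

# mirror one LUN from each of two different iSCSI targets
zpool create tank mirror c2t1d0 c3t1d0
zpool status tank

That way a dropped session or a dead target only degrades the mirror instead
of taking the whole pool with it.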
-Dave
I have RTFM'd through this list and a number of Sun docs at docs.sun.com,
and can't find any information on how I might be able to write out 'hard
zeros' to the unused blocks on a ZFS filesystem. The reason I'd like to do this is
because if the storage (LUN/s) I'm providing to the ZFS is
thin-provisioned
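The only brute-force idea I've come up with so far is to fill the free space
with a file of zeros and then delete it, roughly:

# names made up; only useful if compression is off on this filesystem,
# since compressed zeros never reach the LUN
dd if=/dev/zero of=/tank/fs/zerofill bs=1024k
rm /tank/fs/zerofill

but that means writing out all of the free space, so I'm hoping there is
something smarter.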
Hi,
I have a customer with the following question...
She's trying to combine two 460 GB ZFS disks into one 900 GB ZFS disk. If
this is possible, how is it done? Is there any documentation on this
that I can provide to her?
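From what I can tell the way to get one pool spanning both disks is something
like this (device names made up):

# one pool, dynamically striped across both 460 GB disks (roughly 900 GB usable)
zpool create bigpool c1t0d0 c1t1d0

# or, if one disk already holds a pool, grow it by adding the second disk
zpool add existingpool c1t1d0

but I'd appreciate confirmation, and ideally an official doc I can point her at.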
--
Regards,
Dave
--
My normal working hours are Sunday through
Try something like this:
zfs set sharenfs=options mypool/mydata
where options is:
sharenfs=[EMAIL PROTECTED]/24:@10.9.9.5/32,[EMAIL PROTECTED]/24:@10.9.9.5/32
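For example, with 192.168.1.0/24 standing in for your own network:

zfs set sharenfs='rw=@192.168.1.0/24:@10.9.9.5/32,root=@192.168.1.0/24:@10.9.9.5/32' mypool/mydata

The rw= and root= access lists take the same @network/prefix entries as
share_nfs(1M), separated by colons.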
--
Dave
Michael Stalnaker wrote:
All;
I’m sure I’m missing something basic here. I need to do the following
things, and can’t
out there
than you might think :)
--
Dave
issues for thumper.
Strongly suggest applying this patch to thumpers going forward.
u6 will have the fixes by default.
I'm assuming the fixes listed in these patches are already committed in
OpenSolaris (b94 or greater)?
--
Dave
failure or disconnect. :(
I don't think there's a bug filed for it. That would probably be the
first step to getting this resolved (might also post to storage-discuss).
--
Dave
Ross wrote:
Has anybody here got any thoughts on how to resolve this problem:
http://www.opensolaris.org/jive/thread.jspa
works well enough for the user's purposes without
swap since the boot from the CD won't have used any swap.
Dave
than that.
Dave
Keith Bierman wrote:
On Jun 24, 2008, at 11:01 AM, Dave Miner wrote:
I doubt we'd have interest in providing more configurability in the
interactive installer. As Richard sort of points out subsequently,
most
people wouldn't know what to do here, anyway, and the ones who do
usually use
WayCool stuff man, nice post! :)
for that realm, but I
think we should do something to format or another tool to address the
non-ZFS case.
Dave
Jeff
On Wed, Jun 04, 2008 at 10:55:18AM -0500, Bob Friesenhahn wrote:
On Tue, 3 Jun 2008, Dave Miner wrote:
Putting it into the zpool command would feel odd to me, but I agree we
could promote test_td.c into a useful sys-admin command.
http://cvs.opensolaris.org/source/xref/caiman/snap_upgrade/usr/src/lib/libtd/test_td.c
Putting into the zpool command would feel odd to me, but I agree that
there may be a useful utility here.
Dave
Hi All,
After a LOT of tinkering I eventually determined exactly where I am going
wrong. It's to do with the ZFS ACLs I am applying. Why this is resulting in the
behaviour I am seeing, I do not know - perhaps someone can spell it out to me.
To cut a long story short, this is how I am
Bump. Yeah, can anyone help him? As a passive observer without much of a clue
myself, I'm dying to know from the experts what this poor chap's problem might
be.
Cheers,
Dave
I've got a screen grab of this here:
http://web.mac.com/davekoelmeyer/Dave_Koelmeyer/Dave_Koelmeyer_-_OpenSolaris_2008.05_CIFS_File_Server_OS_Win_XP_Copy_Prob_1.html
I am also seeing the behaviour in that Nexenta forum link, where file
copy seemingly gets right to the very end, then craps
This is what my colleague is seeing:
http://web.mac.com/davekoelmeyer/Dave_Koelmeyer/Dave_Koelmeyer_-_OpenSolaris_2008.05_CIFS_File_Server_OS_10.5.2_Prob_1.html
Can any experts help with any leads on this one?
configuration is here:
http://web.mac.com/davekoelmeyer/Dave_Koelmeyer/Dave_Koelmeyer_-_OpenSolaris_2008.05_CIFS_File_Server.html
which is mostly a gathering of other folks' blogs as I learn how to do this.
Cheers,
Dave
Hi All,
Another oddity I have noticed is this, which sounds close to what is described
here after Googling:
http://www.nexenta.com/corp/index.php?option=com_fireboard&func=view&id=202&catid=11
I have a share on a Windows fileserver (server1) in the domain my OpenSolaris
ZFS+CIFS box (server2) is
Just a quick thank-you, that was the prob alright :-)
Hi All, first time caller here, so please be gentle...
I'm on OpenSolaris 2008.05, and following the really useful guide here to
create a CIFS share in domain mode:
http://blogs.sun.com/timthomas/entry/configuring_the_opensolaris_cifs_server
Works like a charm. Now, I want to be able to view
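For reference, the steps from that guide boil down to roughly this (domain,
pool and share names here are just examples):

svcadm enable -r smb/server
smbadm join -u Administrator mydomain.local
zfs create -o casesensitivity=mixed -o nbmand=on rpool/export/cifs
zfs set sharesmb=name=myshare rpool/export/cifs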
failure?
--
Dave
On 05/08/2008 11:29 AM, Luke Scharf wrote:
Dave wrote:
On 05/08/2008 08:11 AM, Ross wrote:
It may be an obvious point, but are you aware that snapshots need to
be stopped any time a disk fails? It's something to consider if
you're planning frequent snapshots.
I've never heard
on my Tyan MB with the MCP55
chipset. I bought Supermicro AOL-SAT2-MV8's and moved all my disks to
them. Haven't had a problem since.
http://de.opensolaris.org/jive/thread.jspa?messageID=204736
--
Dave
On 05/03/2008 01:44 PM, Simon Breden wrote:
@Max: I've not tried this with other file systems
Nice putrid spew of FUD regarding 3Ware cards.
Regarding the SuperMicro 8-port SATA PCI-X card, yes, that is a good
recommendation.
-=dave
- Original Message -
From: Rob Windsor
To: zfs-discuss@opensolaris.org
Sent: Tuesday, February 12, 2008 12:39 PM
Subject: Re: [zfs
Try it, it doesn't work.
Format sees both but you can't import a clone of pool u001 if pool
u001 is already imported, even by giving it a new name.
Darren J Moffat wrote:
Dave Lowenstein wrote:
Nope, doesn't work.
Try presenting one of those lun snapshots to your host, run cfgadm -al
Nope, doesn't work.
Try presenting one of those lun snapshots to your host, run cfgadm -al,
then run zpool import.
#zpool import
no pools available to import
It would make my life so much simpler if you could do something like
this: zpool import --import-as yourpool.backup yourpool
Couldn't we move fixing "panic the system if it can't find a LUN" up to
the front of the line? That one really sucks.
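(For the record, zpool import does already accept a rename on import, i.e.

zpool import u001 u001_backup

but as above that still fails here, presumably because the cloned LUN carries
the same pool GUID as the pool that is already imported.)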
John wrote:
I asked the question last week... the reply I got from Matt was:
It's still a high priority on our road map, just pushed back a bit. Our
current goal is to
Okay, my order for an x4500 went through so sometime soon I'll be using
it as a big honkin area for DSUs and DSSUs for netbackup.
Does anybody have any experience with using zfs compression for this
purpose? The thought of doubling 48tb to 96 tb is enticing. Are there
any other zfs tweaks that
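So far the only tweak I'm planning is the obvious one, something like:

# lzjb is cheap on CPU; compressratio shows what it actually buys you
zfs create -o compression=lzjb tank/dsu
zfs get compressratio tank/dsu

but I have no feel yet for how well backup images compress in practice.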
So we have a zpool that was grown over time by using the 'zpool add'
command to add some space. This has ended up with a zpool made of three
devices on our old HP SAN. This is on Solaris 10 SPARC.
I want to move the zpool to FC NetApp storage using zpool replace,
completely getting it off the
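The plan is basically one zpool replace per old device, something like this
(device names made up, and each new LUN has to be at least as big as the
device it replaces):

zpool replace mypool c4t0d0 c6t0d0
zpool status mypool   # wait for the resilver to finish before the next one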
/araid/ulc
orbits/myear 656G 155G 656G /orbits/myear
Regards,
Dave
--
Sun Microsystems
Mailstop ubur04-206
1 Network Drive
Burlington, MA 01803
*Dave Bevans - Technical Support Engineer*
*Phone: 1-800-USA-4SUN (800-872-4786)
(opt-2), (case #) (press 0
No compression enabled. Zpool status and more info on the config is listed
in this other thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=44033&tstart=0
Wasn't getting a response here, so I looped in the code forum.
Again I say, (eventually) some zfs send/ndmp type of mechanism seems the right
way to go here *shrug*
-=dave
Date: Mon, 5 Nov 2007 05:54:15 -0800
From: [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] HAMMER
Peter Tribble wrote: I'm not worried about
While doing some testing of ZFS on systems which house the storage backend for
a custom imap data store I have witnessed 90-100% sys utilization during
moderately high file creation periods. I'm not sure if this is something
inherent in the design of ZFS or if this can be tuned out. But the sys
---8--- run last in client_end_script ---8---
#!/bin/sh
# bail out if the "data" pool is not present on this client
zpool list | grep -w data > /dev/null || exit 0
echo /sbin/zpool export data
/sbin/zpool export data
echo /sbin/mount -F lofs /devices /a/devices
/sbin/mount -F lofs /devices /a/devices
echo chroot /a /sbin/zpool import data
chroot /a /sbin/zpool import data
I've been wrestling with implementing some ZFS mounts for /var and
/usr into a jumpstart setup. I know that jumpstart doesn't know anything
about zfs, as in you can't define ZFS volumes or pools in the profile.
I've gone ahead and let the JS do a base install into a single ufs slice
and then
a more than natural candidate. In this scenario,
compression would be a boon since the blocks would already be in a
compressed state. I'd imagine this fitting into the 'zfs send' codebase
somewhere.
Thoughts (on either c9n and/or 'zfs send ndmp')?
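(In the meantime the poor man's version is just piping through an external
compressor, names made up:

zfs send tank/fs@backup | gzip > /dumps/fs.zfs.gz
gzcat /dumps/fs.zfs.gz | zfs receive tank/fs_restored

which is exactly why having it native to the stream or an NDMP service would
be nicer.)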
-=dave
- Original Message -
From
the system just plain get pounded?
-=dave
- Original Message -
From: roland [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Tuesday, October 16, 2007 12:44 PM
Subject: Re: [zfs-discuss] HAMMER
and what about compression?
:D
From: Anton B. Rang [EMAIL PROTECTED]
For many databases, most of the I/O is writes (reads wind up
cached in memory).
2 words: table scan
-=dave
Yes, if you have any MFM/RLL drives in your possession, please disregard my
recommendation ;)
-=dave
- Original Message -
From: Paul Kraus [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Friday, September 07, 2007 5:31 AM
Subject: Re: [zfs-discuss] New zfs pr0n server
the up/down/up/down/... scenario should give the best results in minimizing
cumulative rotational vibration.
-=dave
- Original Message -
From: [EMAIL PROTECTED]
To: Dave Johnson [EMAIL PROTECTED]
Cc: Christopher Gibbs [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Sent: Friday
. It certainly
couldn't hurt.
-=dave
- Original Message -
From: Christopher Gibbs [EMAIL PROTECTED]
To: Diego Righi [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, September 06, 2007 8:06 AM
Subject: Re: [zfs-discuss] New zfs pr0n server :)))
Wow, what a creative idea
.
--
Regards,
Dave
--
Sun Microsystems
Mailstop ubur04-206
1 Network Drive
Burlington, MA 01803
*Dave Bevans - Technical Support Engineer*
*Phone: 800-872-4786
(opt-1), (opt-2), (case #)
*Email: david.bevans@Sun.com*
TSC Systems
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in compressing/decompressing a
whole filesystem. you can have 499gb of data on a 500gb partition - and if
you need some more space you would think turning on compression on that fs
would solve your problem. but
Richard Elling [EMAIL PROTECTED] wrote:
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
there is also no filesystem based approach in
compressing/decompressing a whole filesystem.
one could kludge this by setting the compression parameters desired
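Roughly, the kludge amounts to this (paths made up; the property only affects
blocks written after it is set):

zfs set compression=on tank/data
# existing files stay uncompressed until their blocks are rewritten,
# so force a rewrite, e.g. copy each file aside and move it back
cp -p /tank/data/bigfile /tank/data/bigfile.tmp
mv /tank/data/bigfile.tmp /tank/data/bigfile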
In the electronic discovery and records and information management space,
data deduplication and policy-based aging are the foremost topics of the day,
but this is at the file level, while block-level deduplication would lend no
benefit to that regardless.
-=dave
equivalently low to the
checksum collision probability.
-=dave
such a movement but I
would think getting at least a placeholder Goals and Objectives page into
the OZFS community pages would be a good start even if movement on this
doesn't come for a year or more.
Thoughts ?
-=dave
- Original Message -
From: Gary Mills [EMAIL PROTECTED]
To: Erik Trimble
using more than a single one of these drive sleds. If your data
is important to you, I seriously urge you to consider staggering the
orientation of them, however ugly it may appear.
You've been warned ;)
-=dave
- Original Message -
From: Rob Logan [EMAIL PROTECTED]
To: ZFS
Random I/O you need somewhere between 284,281 and 426,421 disks each
delivering between 100 and 150 IOPS.
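(Working backwards from those two figures, the workload being sized is
roughly 42.6 million IOPS: 426,421 disks x 100 IOPS ≈ 284,281 disks x 150
IOPS ≈ 4.26 x 10^7 IOPS.)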
Dave
Richard Elling wrote:
Anton B. Rang wrote:
Thumper seems to be designed as a file server (but curiously, not for
high availability).
hmmm... Often people think that because a system
Hi all,
So I am new here (both using Solaris and also posting on this forum) and I
need some advice.
I have a plan on making a machine set up as a Network Storage Server and I
just want some of your recommendations and opinions on how to go about this.
I do a lot of video
then I don't believe I will
even attempt to install/create this server.
Thanks for your answers and advice, Richard. It has given me lots to think
about.
Dave.
system
read-only to multiple other servers?
2. Can I expand a ZFS volume within a file system on the fly, while the
file system it's attached to is mounted on a server read-write?
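My guess for #2 is that a zvol can be grown on the fly with something like

zfs set volsize=200G mypool/myvol

(with whatever sits on top of the volume then told to use the new space), and
that #1 comes down to exporting the filesystem read-only, e.g.
zfs set sharenfs=ro mypool/myfs, but I'd like confirmation.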
Thanks for your help,
Dave
workload.
Cheers,
Dave (the ORtera man)
the numbers. However, the forum would
be the better place to post the reports.
Regards,
Dave
Eric Schrock wrote:
On Wed, Aug 09, 2006 at 03:29:05PM -0700, Dave Fisk wrote:
For example the COW may or may not have to read old data for a small
I/O update operation, and a large portion