On Mar 14, 2013, at 5:55 PM, Jim Klimov jimkli...@cos.ru wrote:
However, recently the VM virtual hardware clocks became way slow.
Does NTP help correct the guest's clock?
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.
Whose description still reads, "everything ZFS running on illumos-based
distributions."
-Gary
real memory to the swap
device is certainly beneficial. Swapping out complete processes is a
desperation move, but paging out most of an idle process is a good
thing.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
On Dec 4, 2012, Eugen Leitl wrote:
Either way I'll know the hardware support situation soon
enough.
Have you tried contacting Sonnet?
-Gary
of their customers encountered this
performance problem because almost all of them used their Netapp only
for NFS or CIFS. Our Netapp was extremely reliable but did not have
the iSCSI LUN performance that we needed.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
will find no errors. If ZFS does
find an error, there's no nice way to recover. Most commonly, this
happens when the SAN is powered down or rebooted while the ZFS host
is still running.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
this by
specifying the `cachefile' property on the command line. The `zpool'
man page describes how to do this.
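For example (only a sketch; the pool name and cache file path are made up), you can record the configuration in an alternate cache file, or in none at all, at import time:
  zpool import -o cachefile=/etc/zfs/alternate.cache mypool
  zpool import -o cachefile=none mypool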
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
of the paperback.
-Gary
, then yeah, that's horrible.
This all sounds like a good use for LD_PRELOAD and a tiny library
that intercepts and modernizes system calls.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
an option for testing either
of those?
-Gary
I've used in my still-lengthy
benchmarks was 16 GB. If you use the sizes you've proposed, it could
take several days or weeks to complete. Try a web search for iozone
examples if you want more details on the command switches.
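As a rough sketch (the target file and sizes are placeholders; double-check the switches against the iozone documentation), a run covering sequential and random I/O with a 16 GB file looks something like:
  iozone -i 0 -i 1 -i 2 -r 128k -s 16g -f /tank/iozone.tmp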
-Gary
a lot of 16GB files?) then you'll want
to test for that. Caching anywhere in the pipeline is important for
benchmarks because you aren't going to turn off a cache or remove RAM
in production, are you?
-Gary
I've seen a couple of sources that suggest prices should be dropping by
the end of April -- apparently not as low as pre-flood prices, due in
part to a rise in manufacturing costs, but about 10% lower than they're
priced today.
-Gary
It looks like the first iteration has finally launched...
http://tenscomplement.com/our-products/zevo-silver-edition
http://www.macrumors.com/2012/01/31/zfs-comes-to-os-x-courtesy-of-apples-former-chief-zfs-architect
it.
This is a separate problem, introduced with an upgrade to the iSCSI
service. The new one has a dependency on the name service (typically
DNS), which means that it isn't available when the zpool import is
done during the boot. Check with Oracle support to see if they have
found a solution.
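You can see the dependency yourself with svcs; the FMRI below is my guess at the initiator service name on your release, so adjust it as needed:
  svcs -l svc:/network/iscsi/initiator:default   # full status, including dependencies
  svcs -d svc:/network/iscsi/initiator:default   # just the services it depends on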
--
-Gary Mills
, `svcs' will
show you the services listed in order of their completion times. The
ZFS mount is done by this service:
svc:/system/filesystem/local:default
The zpool import (without the mount) is done earlier. Check to see
if any of the FC services run too late during the boot.
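For example (the grep pattern is only a guess at how your FC services are named):
  svcs -o stime,state,fmri | grep -i fc          # when the FC services came up
  svcs -l svc:/system/filesystem/local:default   # when the ZFS mounts ran and what they waited for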
As Gary
the reboot.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
when the storage
became 50% full. It would increase markedly when the oldest snapshot
was deleted.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
of ZFS writes to the disk, then data
belonging to ZFS will be modified. I've heard of RAID controllers or
SAN devices doing this when they modify the disk geometry or reserved
areas on the disk.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
I can't comment on their 4U servers, but HP's 12U included SAS
controllers rarely allow JBOD discovery of drives. So I'd recommend an
LSI card and an external storage chassis like those available from
Promise and others.
-Gary
of years...
The best you can do is try, but if you don't see each drive individually,
you'll know it's by design and not a lack of skill on your part.
-Gary
there are no contiguous blocks available. Deleting
a snapshot provides some of these, but only temporarily.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
version of the X2-2. Has that changed with the Solaris x86
versions of the appliance? Also, does OCZ or someone make an
equivalent to the F20 now?
-Gary
What kind of drives are we talking about? Even SATA drives are
available according to application type (desktop, enterprise server,
home PVR, surveillance PVR, etc.). Then there are drives with SAS or
Fibre Channel interfaces. Then you've got Winchester platters vs SSD
vs hybrids. But even before
and they work wonderfully with ZFS.
-Gary
Is zdb still the only way to dive in to the file system? I've seen the
extensive work by Max Bruning on this but wonder if there are any tools that
make this easier...?
-Gary
On Nov 23, 2011, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
did you see this link
Thank you for this. Some of the other refs it lists will come in handy as well.
kind regards,
Gary
wondering if anyone has had to touch this or
other settings with ZFS appliances they've built...?
-Gary
system.' error, or
will the import succeed? Does the cache change the import behavior?
Does it recognize that the server is the same system? I don't want
to include the `-f' flag in the commands above when it's not needed.
--
-Gary Mills--Unix Group--Computer and Network Services
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
The `lofiadm' man page describes how to export a file as a block
device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do
device?
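For reference, the man page's approach amounts to roughly this (file name, size, and lofi device number are all placeholders; lofiadm prints the device it actually assigns):
  mkfile 100m /export/fat.img
  lofiadm -a /export/fat.img                        # prints e.g. /dev/lofi/1
  mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1  # 204800 512-byte sectors = 100 MB
  mount -F pcfs /dev/lofi/1 /mnt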
--
-Gary Mills--Unix Group--Computer and Network Services-
.
--
-Gary Mills--Unix Group--Computer and Network Services-
bandwidth
* Up to 72 Gb/sec of total bandwidth
* Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth)
* Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth)
* Scales up to 48 drives
--
-Gary Mills--Unix Group--Computer and Network Services
to have `setuid=off' for improved security, for
example.
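e.g. (dataset name made up):
  zfs set setuid=off tank/home
  zfs get setuid tank/home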
--
-Gary Mills--Unix Group--Computer and Network Services-
scheduling interfere with I/O scheduling
already done by the storage device?
Is there any reason not to use one LUN per RAID group?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
Is there any reason not to use one LUN per RAID group?
[...]
In other words, if you build a zpool with one vdev of 10GB and
another with two vdevs each
.
--
-Gary Mills--Unix Group--Computer and Network Services-
redundancy reside in the storage device on the SAN. ZFS
certainly can't do any disk management in this situation. Error
detection and correction is still a debatable issue, one that quickly
becomes exceedingly complex. The decision rests on probabilities
rather than certainties.
--
-Gary Mills
has to be done on the SAN storage device.
--
-Gary Mills--Unix Group--Computer and Network Services-
is how to remove the files from the original rpool/export/home (non
mount point) rpool? I'm a bit nervous to do a:
zfs destroy rpool/export/home
Is this the correct and safe methodology?
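Before I run it, I figure a check along these lines (assuming I have the property names right) will at least confirm whether the old dataset is still mounted anywhere:
  zfs list -r -o name,mountpoint,mounted rpool/export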
Thanks,
Gary
Looking at migrating zones built on an M8000 and M5000 to a new M9000. On the
M9000 we started building new deployments using ZFS. The environments on the
M8/M5 are UFS. These are whole-root zones, and they will use global zone resources.
Can this be done? Or would a ZFS migration be needed?
to /export/home
So, what are the appropriate commands for these steps?
Thanks,
Gary
Norm,
Thank you. I just wanted to double-check to make sure I didn't mess up things.
There were steps that had me head-scratching after reading the man page. I'll
spend a bit more time re-reading it using the steps outlined so I understand
these fully.
Gary
provided by reliable SAN devices.
--
-Gary Mills--Unix Group--Computer and Network Services-
. It should also specify a `single_instance/' and
`transient' service. The method script can do whatever the mount
requires, such as creating the ramdisk.
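The ramdisk part of such a method script could be as simple as this sketch (ramdisk name, size, and pool name are made up):
  ramdiskadm -a zfsram 1g                    # create the ramdisk
  zpool create rampool /dev/ramdisk/zfsram   # build the pool on it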
--
-Gary Mills--Unix Group--Computer and Network Services-
recipients of any such
Covered Software in Executable form as to how they can obtain such
Covered Software in Source Code form in a reasonable manner on or
through a medium customarily used for software exchange.
--
-Gary Mills--Unix Group--Computer and Network Services
.
--
-Gary Mills--Unix Group--Computer and Network Services-
'/space/log': Device busy
cannot unmount '/space/mysql': Device busy
2 filesystems upgraded
Do I have to shut down all the applications before upgrading the
filesystems? This is on a Solaris 10 5/09 system.
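In the meantime, fuser should at least show what is holding those mounts:
  fuser -c /space/log
  fuser -c /space/mysql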
--
-Gary Mills--Unix Group--Computer and Network Services
. Mapping them to
services can be difficult. The server is essentially down during the
upgrade.
For a root filesystem, you might have to boot off the failsafe archive
or a DVD and import the filesystem in order to upgrade it.
--
-Gary Mills--Unix Group--Computer and Network Services
how to do this on a normal pool, but are there any restrictions for
doing this on the root pool? Are there any grub issues?
Thanks,
Gary
Thanks for the quick response. I appreciate it very much.
before, I assume it assembles everything the way it was before,
including the filesystem and such.
Or am I incorrect about this?
Gary
I have seen this too.
I'm guessing you have SATA disks which are on an iSCSI target.
I'm also guessing you have used something like
iscsitadm create target --type raw -b /dev/dsk/c4t0d0 c4t0d0
i.e. you are not using the zfs shareiscsi property on a zfs volume but creating
the target from the
to the zpool will double the bandwidth.
/var/log/syslog is quite large, reaching about 600 megabytes before
it's rotated. This takes place each night, with compression bringing
it down to about 70 megabytes. The server handles about 500,000
messages a day.
--
-Gary Mills--Unix Group
that they have special hardware in the SATA
version that simulates SAS dual interface drives. That's what lets
you use SATA drives in a two-node configuration. There's also some
additional software setup for that configuration.
That would be the SATA interposer that does that.
--
-Gary Mills
On Thu, May 06, 2010 at 07:46:49PM -0700, Rob wrote:
Hi Gary,
I would not remove this line in /etc/system.
We have been combatting this bug for a while now on our ZFS file
system running JES Commsuite 7.
I would be interested in finding out how you were able to pinpoint
the problem.
Our
paths.
I plan to use ZFS everywhere, for the root filesystem and the shared
storage. The only exception will be UFS for /globaldevices .
--
-Gary Mills--Unix Group--Computer and Network Services-
be easy to
find a pair of 1U servers, but what's the smallest SAS array that's
available? Does it need an array controller? What's needed on the
servers to connect to it?
--
-Gary Mills--Unix Group--Computer and Network Services-
I'm not sure I like this at all. Some of my pools take hours to scrub. I have
a cron job that runs scrubs in sequence: start one pool's scrub and then poll
until it's finished, start the next and wait, and so on, so I don't create too
much load and bring all I/O to a crawl.
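In outline the script is just this (pool names are placeholders, and the grep pattern assumes the usual 'scrub in progress' wording in zpool status):
  #!/bin/sh
  for pool in tank1 tank2 tank3; do
      zpool scrub $pool
      # poll until this scrub finishes before starting the next one
      while zpool status $pool | grep 'scrub in progress' > /dev/null; do
          sleep 600
      done
  done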
The job is launched
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm destroys the oldest snapshots and creates new ones, both
gm recursively.
I'd be curious if you try taking the same snapshots non-recursively
instead, does the pause go
much physical memory does this system have?
Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus
IMAP processes, thousands of them, use memory mapping extensively.
I don't know if this design affects the snapshot recycle behavior.
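(The ARC cap is the usual /etc/system tunable; the value below is 32 GB expressed in bytes:)
  set zfs:zfs_arc_max = 0x800000000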
--
-Gary Mills--Unix Group--Computer
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones
.
Is it destroying old snapshots or creating new ones that causes this
dead time? What does each of these procedures do that could affect
the system? What can I do to make this less visible to users?
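For reference, the cron job amounts to a pair of commands like these (the dataset name and snapshot labels here are placeholders, not the real ones):
  zfs destroy -r space@2010-02-18            # drop the oldest daily snapshot, recursively
  zfs snapshot -r space@`date +%Y-%m-%d`     # take today's, recursively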
--
-Gary Mills--Unix Group--Computer and Network Services
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote:
On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins ...@ianshome.com
wrote:
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built
My guess is that the grub bootloader wasn't upgraded on the actual boot disk.
Search for directions on how to mirror ZFS boot drives and you'll see how to
copy the correct grub loader onto the boot disk.
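That step usually comes down to running installgrub against the new disk (the device name below is a placeholder for your actual boot slice):
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0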
If you want to do this simpler, swap the disks. I did this when I was moving
from SXCE
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote:
On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
Yes, I understand that, but do filesystems have separate queues of any
sort within the ZIL?
I'm not sure. If you can experiment and measure a benefit,
understanding
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote:
This line was a workaround for bug 6642475 that had to do with
searching for large contiguous pages. The result was high system
time and slow response. I can't find any public information on this
bug, although I assume it's
Thanks for all the suggestions. Now for a strange tale...
I tried upgrading to dev 130 and, as expected, things did not go well. All
sorts of permission errors flew by during the upgrade stage and it would not
start X-windows. I've heard that things installed from the contrib and extras
[truncated ARC statistics output; the last columns show roughly 30G in use against a 32G target]
--
-Gary Mills--Unix Group--Computer and Network Services-
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
On Tue, 12 Jan 2010, Gary Mills wrote:
Is moving the databases (IMAP metadata) to a separate ZFS filesystem
likely to improve performance? I've heard that this is important, but
I'm not clear why
of disks raidz bug I
reported.
Looks like I've got to bite the bullet and upgrade to the dev tree and hope for
the best.
Gary
fixed by now. It may have only
affected Oracle database.
I'd like to remove this line from /etc/system now, but I don't know
if it will have any adverse effect on ZFS or the Cyrus IMAP server
that runs on this machine. Does anyone know if ZFS uses large memory
pages?
--
-Gary Mills--Unix
to be the issue I'd like to track down the source. Any
docs on how to do this?
Thanks,
Gary
Mattias Pantzare wrote:
On Sun, Jan 10, 2010 at 16:40, Gary Gendel g...@genashor.com wrote:
I've been using a 5-disk raidZ for years on SXCE machine which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b
start method (/lib/svc/method/fs-local) ]
[ Dec 19 08:09:12 Method start exited with status 0 ]
Is a dependency missing?
--
-Gary Mills--Unix Group--Computer and Network Services-
memory. There were no disk errors reported. I suppose we can blame
the memory.
--
-Gary Mills--Unix Group--Computer and Network Services-
for OpenSolaris advocacy in this arena
while the topic is hot.
Gary
.
You might be able to identify these object numbers with zdb, but
I'm not sure how to do that.
You can try to use zdb this way to check if these objects still exist:
zdb -d space/dcc 0x11e887 0xba25aa
--
-Gary Mills--Unix Group--Computer and Network Services
Hope that helps.
Gary
--
Gary Pennington
Solaris Core OS
Sun Microsystems
gary.penning...@sun.com
Apple is known to strong-arm in licensing negotiations. I'd really like to
hear the straight talk about what transpired.
That's OK, it just means that I won't be using a Mac as a server.
that should be done when
dealing specifically with ZFS. Any advice would be greatly appreciated.
Thanks,
--
--
Gary Gogick
senior systems administrator | workhabit,inc.
// email: g...@workhabit.com
and see what
happens.
Thanks for the replies, appreciate the help!
On Tue, Oct 20, 2009 at 1:43 PM, Trevor Pretty trevor_pre...@eagle.co.nz wrote:
Gary
Were you measuring the Linux NFS write performance? It's well known that
Linux can use NFS in a very unsafe mode and report the write complete
or I can share in a summary
anything else that might be of interest
You are welcome to share this information.
--
-Gary Mills--Unix Group--Computer and Network Services-
for us back
in May.
--
-Gary Mills--Unix Group--Computer and Network Services-
You shouldn't hit the Raid-Z issue because it only happens with an odd number
of disks.
Alan,
Thanks for the detailed explanation. The rollback successfully fixed my 5-disk
RAID-Z errors. I'll hold off another upgrade attempt until 124 rolls out.
Fortunately, I didn't do a zfs upgrade right away after installing 121. For
those that did, this could be very painful.
Gary
Alan,
Super find. Thanks, I thought I was just going crazy until I rolled back to
110 and the errors disappeared. When you do work out a fix, please ping me to
let me know when I can try an upgrade again.
Gary
default
archive        secondarycache   all                     default
And each of the sub-pools looks like this:
g...@phoenix[~]101 zfs get all archive/gary
archive/gary   type             filesystem              -
archive/gary   creation         Mon Jun 18 20:56 2007
of problem way back
around build 40-50-ish, but I haven't seen it after that until now.
Is anyone else experiencing this problem, or does anyone know how to isolate
it definitively?
Thanks,
Gary
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote:
Andre van Eyssen wrote:
On Mon, 6 Jul 2009, Gary Mills wrote:
As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs. If we
have another one like that, our
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote:
Gary Mills wrote:
On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote:
ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC
instead of the Solaris page cache. But mmap() uses the latter. So if
anyone
to
optimize the two caches in this environment? Will mmap(2) one day
play nicely with ZFS?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
increased. Our problem was indirectly a result of
fragmentation, but it was solved by a ZFS patch. I understand that
this patch, which fixes a whole bunch of ZFS bugs, should be released
soon. I wonder if this was your problem.
--
-Gary Mills--Unix Support--U of M Academic Computing
On Mon, Apr 27, 2009 at 04:47:27PM -0500, Gary Mills wrote:
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens, the number of processes increases, the
load average increases
unlikely)
Since the LUN is just a large file on the Netapp, I assume that all
it can do is to put the blocks back into sequential order. That might
have some benefit overall.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills mi...@cc.umanitoba.ca
wrote:
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
filer. There's a great deal of churn in e-mail folders, with
messages
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
Gary Mills wrote:
Does anyone know about this device?
SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket
for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Servers, RoHS-6
Compliant
for ZFS? Is there any way I could use this in
a T2000 server? The brackets appear to be different.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-