real memory to the swap
device is certainly beneficial. Swapping out complete processes is a
desperation move, but paging out most of an idle process is a good
thing.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
of their customers encountered this
performance problem because almost all of them used their Netapp only
for NFS or CIFS. Our Netapp was extremely reliable but did not have
the iSCSI LUN performance that we needed.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
will find no errors. If ZFS does
find an error, there's no nice way to recover. Most commonly, this
happens when the SAN is powered down or rebooted while the ZFS host
is still running.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
this by
specifying the `cachefile' property on the command line. The `zpool'
man page describes how to do this.
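Roughly like this, untested, with a hypothetical pool name and cache path:
zpool import -o cachefile=none tank
zpool set cachefile=/etc/zfs/alt.cache tank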
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
, then yeah, that's horrible.
This all sounds like a good use for LD_PRELOAD and a tiny library
that intercepts and modernizes system calls.
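Untested, but the mechanism is roughly this, with hypothetical library and program names:
cc -G -Kpic -o fixup.so fixup.c
LD_PRELOAD=./fixup.so legacyapp
(fixup.c would define wrappers for the old calls; the run-time linker loads it ahead of libc.)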
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
it.
This is a separate problem, introduced with an upgrade to the iSCSI
service. The new one has a dependency on the name service (typically
DNS), which means that it isn't available when the zpool import is
done during the boot. Check with Oracle support to see if they have
found a solution.
--
-Gary Mills
and Bob mentioned, I saw this issue with iSCSI devices.
Instead of an export/import, would a `zpool clear' also work?
mpathadm list LU
mpathadm show LU /dev/rdsk/c5t1d1s2
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
the reboot.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
when the storage
became 50% full. It would increase markedly when the oldest snapshot
was deleted.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
of ZFS writes to the disk, then data
belonging to ZFS will be modified. I've heard of RAID controllers or
SAN devices doing this when they modify the disk geometry or reserved
areas on the disk.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada
there are no contiguous blocks available. Deleting
a snapshot provides some of these, but only temporarily.
--
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-
system.' error, or
will the import succeed? Does the cache change the import behavior?
Does it recognize that the server is the same system? I don't want
to include the `-f' flag in the commands above when it's not needed.
--
-Gary Mills--Unix Group--Computer and Network Services
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote:
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
The `lofiadm' man page describes how to export a file as a block
device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do
device?
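For reference, that recipe is roughly as follows; the file name and size are made up:
mkfile 100m /var/tmp/fat.img
lofiadm -a /var/tmp/fat.img
mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1
(lofiadm prints the device name, /dev/lofi/1 here; mkfs wants the raw device, and size is in 512-byte sectors.)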
--
-Gary Mills--Unix Group--Computer and Network Services-
.
--
-Gary Mills--Unix Group--Computer and Network Services-
bandwidth
* Up to 72 Gb/sec of total bandwidth
* Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth)
* Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth)
* Scales up to 48 drives
--
-Gary Mills--Unix Group--Computer and Network Services
to have `setuid=off' for improved security, for
example.
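For instance, on a hypothetical dataset:
zfs set setuid=off tank/home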
--
-Gary Mills--Unix Group--Computer and Network Services-
scheduling interfere with I/O scheduling
already done by the storage device?
Is there any reason not to use one LUN per RAID group?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
Is there any reason not to use one LUN per RAID group?
[...]
In other words, if you build a zpool with one vdev of 10GB and
another with two vdev's each
.
--
-Gary Mills--Unix Group--Computer and Network Services-
redundancy reside in the storage device on the SAN. ZFS
certainly can't do any disk management in this situation. Error
detection and correction is still a debatable issue, one that quickly
becomes exceedingly complex. The decision rests on probabilities
rather than certainties.
--
-Gary Mills
has to be done on the SAN storage device.
--
-Gary Mills--Unix Group--Computer and Network Services-
provided by reliable SAN devices.
--
-Gary Mills--Unix Group--Computer and Network Services-
. It should also specify a `single_instance/' and
`transient' service. The method script can do whatever the mount
requires, such as creating the ramdisk.
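The method script itself can be as short as, say (names and size made up):
ramdiskadm -a scratch 512m
yes | newfs /dev/rramdisk/scratch
mount /dev/ramdisk/scratch /scratch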
--
-Gary Mills--Unix Group--Computer and Network Services-
recipients of any such
Covered Software in Executable form as to how they can obtain such
Covered Software in Source Code form in a reasonable manner on or
through a medium customarily used for software exchange.
--
-Gary Mills--Unix Group--Computer and Network Services
.
--
-Gary Mills--Unix Group--Computer and Network Services-
cannot unmount '/space/log': Device busy
cannot unmount '/space/mysql': Device busy
2 filesystems upgraded
Do I have to shut down all the applications before upgrading the
filesystems? This is on a Solaris 10 5/09 system.
--
-Gary Mills--Unix Group--Computer and Network Services
. Mapping them to
services can be difficult. The server is essentially down during the
upgrade.
For a root filesystem, you might have to boot off the failsafe archive
or a DVD and import the filesystem in order to upgrade it.
--
-Gary Mills--Unix Group--Computer and Network Services
to the zpool will double the bandwidth.
/var/log/syslog is quite large, reaching about 600 megabytes before
it's rotated. This takes place each night, with compression bringing
it down to about 70 megabytes. The server handles about 500,000
messages a day.
--
-Gary Mills--Unix Group
that they have special hardware in the SATA
version that simulates SAS dual interface drives. That's what lets
you use SATA drives in a two-node configuration. There's also some
additional software setup for that configuration.
That would be the SATA interposer that does that.
--
-Gary Mills
of
issues, much the same as what you've had in the past, ps and prstat
hanging.
Are you able to tell me the IDR number that you applied?
The IDR was only needed last year. Upgrading to Solaris 10 10/09
and applying the latest patches resolved the problem.
--
-Gary Mills--Unix Group
paths.
I plan to use ZFS everywhere, for the root filesystem and the shared
storage. The only exception will be UFS for /globaldevices .
--
-Gary Mills--Unix Group--Computer and Network Services-
be easy to
find a pair of 1U servers, but what's the smallest SAS array that's
available? Does it need an array controller? What's needed on the
servers to connect to it?
--
-Gary Mills--Unix Group--Computer and Network Services-
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm destroys the oldest snapshots and creates new ones, both
gm recursively.
I'd be curious if you try taking the same snapshots non-recursively
instead, does the pause go
How much physical memory does this system have?
Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus
IMAP processes, thousands of them, use memory mapping extensively.
I don't know if this design affects the snapshot recycle behavior.
--
-Gary Mills--Unix Group--Computer
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones
.
Is it destroying old snapshots or creating new ones that causes this
dead time? What does each of these procedures do that could affect
the system? What can I do to make this less visible to users?
--
-Gary Mills--Unix Group--Computer and Network Services
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote:
On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins ...@ianshome.com
wrote:
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote:
On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
Yes, I understand that, but do filesystems have separate queues of any
sort within the ZIL?
I'm not sure. If you can experiment and measure a benefit,
understanding
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote:
This line was a workaround for bug 6642475 that had to do with
searching for large contiguous pages. The result was high system
time and slow response. I can't find any public information on this
bug, although I assume it's
[garbled arcstat output; the ARC size column reads about 30G against a 32G target]
--
-Gary Mills--Unix Group--Computer and Network Services-
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
On Tue, 12 Jan 2010, Gary Mills wrote:
Is moving the databases (IMAP metadata) to a separate ZFS filesystem
likely to improve performance? I've heard that this is important, but
I'm not clear why
fixed by now. It may have only
affected the Oracle database.
I'd like to remove this line from /etc/system now, but I don't know
if it will have any adverse effect on ZFS or the Cyrus IMAP server
that runs on this machine. Does anyone know if ZFS uses large memory
pages?
--
-Gary Mills--Unix
start method (/lib/svc/method/fs-local) ]
[ Dec 19 08:09:12 Method start exited with status 0 ]
Is a dependency missing?
--
-Gary Mills--Unix Group--Computer and Network Services-
memory. There were no disk errors reported. I suppose we can blame
the memory.
--
-Gary Mills--Unix Group--Computer and Network Services-
.
You might be able to identify these object numbers with zdb, but
I'm not sure how to do that.
You can use zdb this way to check whether these objects still exist:
zdb -d space/dcc 0x11e887 0xba25aa
--
-Gary Mills--Unix Group--Computer and Network Services
or I can share in a summary
anything else that might be of interest
You are welcome to share this information.
--
-Gary Mills--Unix Group--Computer and Network Services-
for us back
in May.
--
-Gary Mills--Unix Group--Computer and Network Services-
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote:
Andre van Eyssen wrote:
On Mon, 6 Jul 2009, Gary Mills wrote:
As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs. If we
have another one like that, our
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote:
Gary Mills wrote:
On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote:
ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC
instead of the Solaris page cache. But mmap() uses the latter. So if
anyone
to
optimize the two caches in this environment? Will mmap(2) one day
play nicely with ZFS?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
increased. Our problem was indirectly a result of
fragmentation, but it was solved by a ZFS patch. I understand that
this patch, which fixes a whole bunch of ZFS bugs, should be released
soon. I wonder if this was your problem.
--
-Gary Mills--Unix Support--U of M Academic Computing
On Mon, Apr 27, 2009 at 04:47:27PM -0500, Gary Mills wrote:
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens, the number of processes increases, the
load average increases
unlikely)
Since the LUN is just a large file on the Netapp, I assume that all
it can do is to put the blocks back into sequential order. That might
have some benefit overall.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills mi...@cc.umanitoba.ca
wrote:
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
filer. There's a great deal of churn in e-mail folders, with
messages
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
Gary Mills wrote:
Does anyone know about this device?
SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket
for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Servers, RoHS-6
Compliant
for ZFS? Is there any way I could use this in
a T2000 server? The brackets appear to be different.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
Thread: mail systems using ZFS filesystems?
Thanks. Those problems do sound similar. I also see positive
experiences with T2000 servers, ZFS, and Cyrus IMAP from UC Davis.
None of the people involved seem to be active on either the ZFS
mailing list or the Cyrus list.
--
-Gary Mills--Unix
code for handling two different sizes of memory pages. You can find
more information here:
http://forums.sun.com/thread.jspa?threadID=5257060
Also, open a support case with Sun if you haven't already.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens, the number of processes increases, the
load average increases
On Sat, Apr 18, 2009 at 09:41:39PM -0500, Tim wrote:
On Sat, Apr 18, 2009 at 9:01 PM, Gary Mills mi...@cc.umanitoba.ca
wrote:
On Sat, Apr 18, 2009 at 06:53:30PM -0400, Ellis, Mike wrote:
In case the writes are a problem: When zfs sends a sync-command
On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote:
[perf-discuss cc'd]
On Sat, Apr 18, 2009 at 4:27 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
Many other layers are involved in this server. We use scsi_vhci for
redundant I/O paths and Sun's iSCSI initiator to connect
for the slow performance?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sat, Apr 18, 2009 at 05:25:17PM -0500, Bob Friesenhahn wrote:
On Sat, 18 Apr 2009, Gary Mills wrote:
How do we determine which layer is responsible for the slow
performance?
If the ARC size is diminishing under heavy load then there must be
excessive pressure for memory from
?
(You're not by chance using any type of ssh-transfers etc as part of
the backups are you)
No, Networker uses RPC to connect to the backup server, but there's no
encryption or compression on the client side.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
On Sat, Apr 18, 2009 at 06:06:49PM -0700, Richard Elling wrote:
[CC'ed to perf-discuss]
Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens, the number
0 0 0
c4t60A98000433469764E4A2D456A696579d0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F6B385Ad0 ONLINE 0 0 0
c4t60A98000433469764E4A476D2F664E4Fd0 ONLINE 0 0 0
errors: No known data errors
--
-Gary
that are
memory-mapped by all processes. I can move these from ZFS to UFS if
this is likely to help.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sun, Apr 12, 2009 at 10:49:49AM -0700, Richard Elling wrote:
Gary Mills wrote:
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
about 1 TB of mailboxes on ZFS filesystems. Recently, when under
load, we've had incidents where IMAP operations became very slow. The
general
5.87K 0 7 67.5K 108 2.34M zfs
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
him. Is there a way to determine this from the iSCSI initiator
side? I do have a test mail server that I can play with.
That could make a big difference...
(Perhaps disabling the write-flush in zfs will make a big difference
here, especially on a write-heavy system)
--
-Gary Mills--Unix
On Thu, Apr 09, 2009 at 04:25:58PM +0200, Henk Langeveld wrote:
Gary Mills wrote:
I've been watching the ZFS ARC cache on our IMAP server while the
backups are running, and also when user activity is high. The two
seem to conflict. Fast response for users seems to depend on their
data being
.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
that, but in this case ZFS is starved for memory
and the whole thing slows to a crawl. Is there a way to set a
minimum ARC size so that this doesn't happen?
We are going to upgrade the memory, but a lower limit on ARC size
might still be a good idea.
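If the zfs_arc_min tunable is available on this release, it would go in /etc/system alongside the ceiling; something like this, with a made-up value:
set zfs:zfs_arc_min = 0x200000000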
--
-Gary Mills--Unix Support--U of M Academic Computing
.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
respond.
I appreciate that.
I thought
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
It's simply a consequence of ZFS's end-to-end
On Thu, Feb 19, 2009 at 12:36:22PM -0800, Brandon High wrote:
On Thu, Feb 19, 2009 at 6:18 AM, Gary Mills mi...@cc.umanitoba.ca wrote:
Should I file an RFE for this addition to ZFS? The concept would be
to run ZFS on a file server, exporting storage to an application
server where ZFS also
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote:
Gary Mills wrote:
Should I file an RFE for this addition to ZFS? The concept would be
to run ZFS on a file server, exporting storage to an application
server where ZFS also runs on top of that storage. All storage
management
around these problems.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
instead of just the IT professionals.
That implies that ZFS will have to detect removable devices and treat
them differently than fixed devices. It might have to be an option
that can be enabled for higher performance with reduced data security.
--
-Gary Mills--Unix Support--U of M
On Mon, Feb 02, 2009 at 09:53:15PM +0700, Fajar A. Nugraha wrote:
On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
If there are two (or more) instances of ZFS in the end-to-end data
path, each instance
systems, redundancy only on the
file server, and end-to-end error detection and correction, does
not exist. What additions to ZFS are required to make this work?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
can identify the source
of the data in the event of an error?
Does this additional exchange of information fit into the iSCSI
protocol, or does it have to flow out of band somehow?
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
checksums
reasonably detect? Certainly if some of the other error checking
failed to detect an error, ZFS would still detect one. How likely
are these other error checks to fail?
Is there anything else I've missed in this analysis?
--
-Gary Mills--Unix Support--U of M Academic Computing
'.
And how/what do I do to reverse to the non-patched system in case
something goes terribly wrong? ;-)
Just revert to the old BE.
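With Live Upgrade that's just (BE name made up):
luactivate s10-pre-patch
init 6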
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
On Sat, Dec 20, 2008 at 03:52:46AM -0800, Uwe Dippel wrote:
This might sound sooo simple, but it isn't. I read the ZFS Administration
Guide and it did not give an answer; at least no simple answer, simple enough
for me to understand.
The intention is to follow the thread Easiest way to
If you give `zpool' a complete disk, by omitting the slice part, it
will write its own label to the drive. If you specify it with a
slice, it expects that you have already defined that slice. For a
root pool, it has to be a slice.
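The two cases look like this; the device names are examples:
zpool create tank c1t0d0
zpool create rpool c1t0d0s0
The first gets a label written by ZFS; for the second you must have labeled the slice beforehand.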
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking
On Thu, Dec 11, 2008 at 10:41:26PM -0600, Bob Friesenhahn wrote:
On Thu, 11 Dec 2008, Gary Mills wrote:
The split responsibility model is quite appealing. I'd like to see
ZFS address this model. Is there not a way that ZFS could delegate
responsibility for both error detection and correction
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
is responsible for
integrity of the filesystem. How can it be made to behave in a
reliable manner? Can ZFS be better than UFS in this configuration?
Is a different form of communication between the two components
necessary in this case?
--
-Gary Mills--Unix Support--U of M Academic Computing
On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote:
On 11/27/08 17:18, Gary Mills wrote:
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
I'm
. They believe that a separate /var is still good practice.
If your mount options are different for /var and /, you will need
a separate filesystem. In our case, we use `setuid=off' and
`devices=off' on /var for security reasons. We do the same thing
for home directories and /tmp .
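Set at creation time, that's something like the following, with a made-up dataset name:
zfs create -o setuid=off -o devices=off tank/var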
--
-Gary Mills
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
I'm currently working with an organisation who
want use ZFS for their full zones. Storage is SAN
again. Disabling it with `-t' after the system's up
seems to do no harm.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
and the same disk controller
would be most suitable.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
One of our storage guys would like to put a thumper into service, but
he's looking for a smaller model to use for testing. Is there something
that has the same CPU, disks, and disk controller as a thumper, but
fewer disks? The ones I've seen all have 48 disks.
--
-Gary Mills--Unix Support
On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
There isn't a de-populated version.
Would X4540 with 250 or 500 GB drives meet your needs?
That might be our only choice.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking