On Thu, Sep 23, 2010 at 08:48, Haudy Kazemi kaze0...@umn.edu wrote:
Mattias Pantzare wrote:
ZFS needs free memory for writes. If you fill your memory with dirty
data, ZFS has to flush that data to disk. If that disk is a virtual
disk on ZFS on the same computer, those writes need more memory
On Wed, Sep 22, 2010 at 20:15, Markus Kovero markus.kov...@nebula.fi wrote:
Such a configuration was known to cause deadlocks. Even if it works now (which
I don't expect to be the case), it will cause your data to be cached twice.
The CPU utilization will also be much higher, etc.
All in all
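For reference, the kind of layered setup being warned against looks roughly
like this (hypothetical pool and volume names):
# zfs create -V 10g tank/backingvol
# zpool create inner /dev/zvol/dsk/tank/backingvol
Every write to 'inner' becomes buffered writes to 'tank' on the same host,
which is where the memory pressure and deadlock risk comes from.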
On Wed, Sep 8, 2010 at 06:59, Edward Ned Harvey sh...@nedharvey.com wrote:
On Tue, Sep 7, 2010 at 4:59 PM, Edward Ned Harvey sh...@nedharvey.com
wrote:
I think the value you can take from this is:
Why does the BPG say that? What is the reasoning behind it?
Anything that is a rule of thumb
On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey sh...@nedharvey.com wrote:
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
Mattias Pantzare
It
is about 1 vdev with 12 disks or 2 vdevs with 6 disks each. If you have 2
vdevs you have to read half the data compared to 1 vdev
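A minimal sketch of the two layouts under discussion, with hypothetical disk
names (either command, not both):
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 raidz c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
The first is one 12-disk vdev; the second is two 6-disk vdevs striped together.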
On Sat, Aug 28, 2010 at 02:54, Darin Perusich
darin.perus...@cognigencorp.com wrote:
Hello All,
I'm sure this has been discussed previously but I haven't been able to find an
answer to this. I've added another raidz1 vdev to an existing storage pool and
the increased available storage isn't
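Assuming the vdev was added with something like the following (hypothetical
device names), comparing the two views usually shows where the space 'went':
# zpool add tank raidz1 c1t0d0 c1t1d0 c1t2d0
# zpool list tank     (raw pool capacity, parity included)
# zfs list tank       (usable space as filesystems see it)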
On Sun, May 30, 2010 at 23:37, Sandon Van Ness san...@van-ness.com wrote:
I just wanted to make sure this is normal and is expected. I fully
expected that as the file-system filled up I would see more disk space
being used than with other file-systems due to its features but what I
didn't
On Sat, May 1, 2010 at 16:23, casper@sun.com wrote:
I understand you cannot lookup names by inode number in general, because
that would present a security violation. Joe User should not be able to
find the name of an item that's in a directory where he does not have
permission.
But,
On Sat, May 1, 2010 at 16:49, casper@sun.com wrote:
No, a NFS client will not ask the NFS server for a name by sending the
inode or NFS-handle. There is no need for a NFS client to do that.
The NFS clients certainly version 2 and 3 only use the file handle;
the file handle can be decoded
If the kernel (or root) can open an arbitrary directory by inode number,
then the kernel (or root) can find the inode number of its parent by looking
at the '..' entry, which the kernel (or root) can then open, and identify
both: the name of the child subdir whose inode number is already
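As an illustration, a name can be looked up by inode number with find, given
search permission on the directories walked (GNU find syntax, hypothetical
path):
% ino=`ls -di /var/tmp | awk '{print $1}'`
% find /var -xdev -inum $ino
/var/tmp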
OpenSolaris needs support for the TRIM command for SSDs. This command is
issued to an SSD to indicate that a block is no longer in use and the SSD
may erase it in preparation for future writes.
There does not seem to be very much `need' since there are other ways that an
SSD can know that a
On Mon, Apr 12, 2010 at 19:19, David Magda dma...@ee.ryerson.ca wrote:
On Mon, April 12, 2010 12:28, Tomas Ögren wrote:
On 12 April, 2010 - David Magda sent me these 0,7K bytes:
On Mon, April 12, 2010 10:48, Tomas Ögren wrote:
For flash to overwrite a block, it needs to clear it first.. so
On Fri, Apr 2, 2010 at 16:24, Edward Ned Harvey solar...@nedharvey.com wrote:
The purpose of the ZIL is to act like a fast log for synchronous
writes. It allows the system to quickly confirm a synchronous write
request with the minimum amount of work.
Bob and Casper and some others clearly
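The usual way to exploit that is a dedicated log device, e.g. (hypothetical
device name):
# zpool add tank log c3t0d0
# zpool status tank    (the device appears under a separate 'logs' section)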
These days I am a fan of forward-check access lists, because anyone who
owns a DNS server can say that IPAddressX maps to aserver.google.com.
They cannot set the forward lookup outside of their domain, but they can
set up a reverse lookup. The other advantage is forward-looking access
On Sat, Feb 20, 2010 at 11:14, Lutz Schumann
presa...@storageconcepts.de wrote:
Hello list,
Being a Linux guy I'm actually quite new to OpenSolaris. One thing I miss is
udev.
I found that when using SATA disks with ZFS, it always required manual
intervention (cfgadm) to do SATA hot plug.
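The manual steps look roughly like this (hypothetical attachment point):
# cfgadm -al                       (list attachment points)
# cfgadm -c configure sata0/1      (bring a newly inserted disk online)
# cfgadm -c unconfigure sata0/1    (before pulling a disk)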
Ext2/3 uses 5% by default for root's usage; 8% under FreeBSD for FFS.
Solaris (10) uses a bit more nuance for its UFS:
That reservation is to prevent users from exhausting disk space in such a way
that even root cannot log in and solve the problem.
No, the reservation in UFS/FFS is to keep the
On Sun, Jan 10, 2010 at 16:40, Gary Gendel g...@genashor.com wrote:
I've been using a 5-disk raidz for years on an SXCE machine which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I'm finding that scrub
On Wed, Dec 30, 2009 at 19:23, roland devz...@web.de wrote:
making transactional, logging filesystems thin-provisioning aware should be
hard to do, as every new and every changed block is written to a new location.
so what applies to zfs, should also apply to btrfs or nilfs or similar
On Tue, Dec 29, 2009 at 18:16, Brad bene...@yahoo.com wrote:
@eric
As a general rule of thumb, each vdev has the random performance
roughly the same as a single member of that vdev. Having six RAIDZ
vdevs in a pool should give roughly the same performance as a stripe of six
bare drives, for
On Thu, Dec 24, 2009 at 04:36, Ian Collins i...@ianshome.com wrote:
Mattias Pantzare wrote:
I'm not sure how to go about it. Basically, how should i format my
drives in FreeBSD, create a ZPOOL which can be imported into
OpenSolaris.
I'm not sure about BSD, but Solaris ZFS works with whole
An EFI label isn't OS-specific formatting!
It is. Not all OSes will read an EFI label.
You misunderstood the concept of OS-specific, I feel. EFI is indeed OS
independent; however, that doesn't necessarily imply that all OSes can read
EFI disks. My Commodore 128D could boot CP/M but couldn't
I'm not sure how to go about it. Basically, how should i format my
drives in FreeBSD, create a ZPOOL which can be imported into OpenSolaris.
I'm not sure about BSD, but Solaris ZFS works with whole devices. So there
isn't any OS specific formatting involved. I assume BSD does the same.
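Assuming compatible pool versions, the usual sequence for moving a pool
between systems is:
freebsd# zpool export tank
solaris# zpool import          (scans devices and lists importable pools)
solaris# zpool import tank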
I have already run into one little snag that I don't see any way of
overcoming with my chosen method. I've upgraded to snv_129 with high hopes
for getting the most out of deduplication. But using iSCSI volumes I'm not
sure how I can gain any benefit from it. The volumes are a set size,
Is there better solution to this problem, what if the machine crashes?
Crashes are abnormal conditions. If it crashes you should fix the problem to
avoid future crashes, and you will probably need to clear the pool directory
hierarchy prior to importing the pool.
Are you serious? I really hope that
On Sat, Dec 12, 2009 at 18:08, Richard Elling richard.ell...@gmail.com wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new motherboard
and it no longer recognised the
Thanks for the info. Glad to hear it's in the works, too.
It is not in the works. If you look at the bug IDs in the bug database
you will find no indication of work done on them.
Paul
1:21pm, Mark J Musante wrote:
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in
On Mon, Sep 21, 2009 at 13:34, David Magda dma...@ee.ryerson.ca wrote:
On Sep 21, 2009, at 06:52, Chris Ridd wrote:
Does 'zpool destroy' prompt 'are you sure?' in any way? Some admin tools do
('beadm destroy' for example) but there's not a lot of consistency.
No it doesn't, which I always found
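Worth noting for anyone bitten by this: destroy only marks the pool's labels,
so a just-destroyed pool can often be recovered:
# zpool destroy tank      (no confirmation prompt)
# zpool import -D         (lists destroyed pools still present on disk)
# zpool import -D tank    (brings it back)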
On Wed, Sep 16, 2009 at 09:34, Erik Trimble erik.trim...@sun.com wrote:
Carson Gaspar wrote:
Erik Trimble wrote:
I haven't seen this specific problem, but it occurs to me thus:
For the reverse of the original problem, where (say) I back up a 'zfs
send' stream to tape, then later on, after
On Tue, Aug 18, 2009 at 22:22, Paul Kraus pk1...@gmail.com wrote:
Posted from the wrong address the first time, sorry.
Is the speed of a 'zfs send' dependent on file size / number of files?
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional
It would be nice if ZFS had something similar to the VxFS File Change Log.
This feature is very useful for incremental backups and other
directory walkers, provided they support FCL.
I think this tangent deserves its own thread. :)
To save a trip to google...
Adding another pool and copying all/some data over to it would only be
a short-term solution.
I'll have to disagree.
What is the point of a filesystem that can grow to such a huge size and
not have functionality built in to optimize data layout? Real-world
implementations of filesystems
On Sat, Aug 8, 2009 at 20:20, Ed Spencer ed_spen...@umanitoba.ca wrote:
On Sat, 2009-08-08 at 08:14, Mattias Pantzare wrote:
Your scalability problem may be in your backup solution.
We've eliminated the backup system as being involved with the
performance issues.
The servers are Solaris 10
If they accept virtualisation, why can't they use individual filesystems (or
zvol) rather than pools? What advantage do individual pools have over
filesystems? I'd have thought the main disadvantage of pools is that storage
flexibility requires pool shrink, something ZFS provides at the
On Thu, Aug 6, 2009 at 12:45, Ian Collins i...@ianshome.com wrote:
Mattias Pantzare wrote:
If they accept virtualisation, why can't they use individual filesystems
(or
zvol) rather than pools? What advantage do individual pools have over
filesystems? I'd have thought the main disadvantage
On Thu, Aug 6, 2009 at 16:59, Ross no-re...@opensolaris.org wrote:
But why do you have to attach to a pool? Surely you're just attaching to the
root
filesystem anyway? And as Richard says, since filesystems can be shrunk
easily
and it's just as easy to detach a filesystem from one machine
On Fri, Jul 24, 2009 at 09:33, Markus Kovero markus.kov...@nebula.fi wrote:
During our tests we noticed very disturbing behavior; what would be causing
this?
The system is running the latest stable OpenSolaris.
Any other means to remove ghost files rather than destroying the pool and
restoring from
You can
find those with:
fuser -c /testpool
But if you can't find the space after a reboot something is not right...
-Original Message-
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias
Pantzare
Sent: 24 July 2009 10:56
To: Markus Kovero
Cc: zfs
On Sun, Jul 19, 2009 at 08:25, lf yang no-re...@opensolaris.org wrote:
Hi Guys
I have a SunFire X4200M2 and the Xyratex RS1600 JBOD on which I try to
run ZFS. But I found a problem:
I set mpxio-disable=yes in /kernel/drv/fp.conf to enable MPxIO,
I assume you mean mpxio-disable=no
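For reference, the property is a double negative, so 'no' enables
multipathing; stmsboot can make the change for you:
# grep mpxio /kernel/drv/fp.conf
mpxio-disable="no";
# stmsboot -e    (enable MPxIO and update device paths, then reboot)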
I feel like I understand what tar is doing, but I'm curious about what it is
that ZFS is looking at that makes it a successful incremental send? That
is, not send the entire file again. Does it have to do with how the
application (tar in this example) does a file open, fopen(), and what mode
On Thu, Apr 16, 2009 at 11:38, Uwe Dippel udip...@gmail.com wrote:
On Thu, Apr 16, 2009 at 1:05 AM, Fajar A. Nugraha fa...@fajar.net wrote:
[...]
Thanks, Fajar, et al.
What this thread actually shows, alas, is that ZFS is rocket science.
In 2009, one would expect a file system to 'just
A useful way to obtain the mount point for a directory is with the
'df' command. Just do 'df .' while in a directory to see where its
filesystem mount point is:
% df .
Filesystem 1K-blocks Used Available Use% Mounted on
Sun_2540/home/bfriesen
119677846
MP It would be nice to be able to move disks around when a system is
MP powered off and not have to worry about a cache when I boot.
You don't have to unless you are talking about shared disks and
importing a pool on another system while the original is powered off
and the pool was not
I suggest ZFS at boot should (multi-threaded) scan every disk for ZFS
disks, and import the ones with the correct host name and with an import flag
set, without using the cache file. Maybe just use the cache file for non-EFI
disks/partitions, but without storing the pool name, but you should
On Mon, Mar 23, 2009 at 22:15, Richard Elling richard.ell...@gmail.com wrote:
Mattias Pantzare wrote:
I suggest ZFS at boot should (multi-threaded) scan every disk for ZFS
disks, and import the ones with the correct host name and with an import
flag
set, without using the cache file. Maybe
On Tue, Mar 24, 2009 at 00:21, Tim t...@tcsac.net wrote:
On Mon, Mar 23, 2009 at 4:45 PM, Mattias Pantzare pantz...@gmail.com
wrote:
If I put my disks on a different controller ZFS won't find them when I
boot. That is bad. It is also an extra level of complexity.
Correct me if I'm wrong
On Tue, Mar 10, 2009 at 23:57, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 10 Mar 2009, Moore, Joe wrote:
As far as workload, any time you use RAIDZ[2], ZFS must read the entire
stripe (across all of the disks) in order to verify the checksum for that
data block. This means
On Tue, Feb 24, 2009 at 19:18, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Mon, Feb 23, 2009 at 10:05:31AM -0800, Christopher Mera wrote:
I recently read up on Scott Dickson's blog with his solution for
jumpstart/flashless cloning of ZFS root filesystem boxes. I have to say
that it
Right, well I can't imagine it's impossible to write a small app that can
test whether or not drives are honoring cache flushes correctly by issuing a
commit and immediately reading back to see if it was indeed committed or not.
Like a 'zfs test cXtX'. Of course, then you can't just blame the hardware
What filesystem likes it when disks are pulled out from a LIVE
filesystem? Try that on UFS and you're f** up too.
Pulling a disk from a live filesystem is the same as pulling the power
from the computer. All modern filesystems can handle that just fine.
UFS with logging on does not even need
On Sun, Feb 8, 2009 at 22:12, Vincent Fox vincent_b_...@yahoo.com wrote:
Thanks I think I get it now.
Do you think having log on a 15K RPM drive with the main pool composed of 10K
RPM drives will show worthwhile improvements? Or am I chasing a few
percentage points?
I don't have money
On Sat, Feb 7, 2009 at 19:33, Sriram Narayanan sri...@belenix.org wrote:
How do I set the number of copies on a snapshot ? Based on the error
message, I believe that I cannot do so.
I already have a number of clones based on this snapshot, and would
like the snapshot to have more copies
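copies is a filesystem property and only applies to blocks written after it
is set, so it cannot be changed retroactively on a snapshot:
# zfs set copies=2 tank/fs    (affects future writes to tank/fs only)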
On Tue, Feb 3, 2009 at 20:55, SQA sqa...@gmail.com wrote:
I set up a ZFS system on a Linux x86 box.
[b] zpool history
History for 'raidpool':
2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0
c4t4d0 c4t5d0
2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o
On Wed, Jan 14, 2009 at 20:03, Tim t...@tcsac.net wrote:
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.edu
wrote:
Does creating ZFS pools on multiple partitions on the same physical drive
still run into the performance and other issues that putting pools in slices
does?
ZFS will always flush the disk cache at appropriate times. If ZFS
thinks it has the disk to itself, it will turn the disk's write cache on.
I'm not sure if you're trying to argue or agree. If you're trying to argue,
you're going to have to do a better job than zfs will always flush disk
cache at
Now I want to mount that external zfs hdd on a different notebook running
solaris and
supporting zfs as well.
I am unable to do so. If I'd run zpool create, it would wipe out my external
hdd, which I of
course want to avoid.
So how can I mount a zfs filesystem on a different machine
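The answer the thread arrives at is to import rather than create (add -f only
if the pool wasn't cleanly exported from the first machine):
# zpool import           (lists pools found on attached disks)
# zpool import mypool    (imports and mounts its filesystems)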
On Tue, Dec 30, 2008 at 11:30, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
Hi Marc,
Marc Bevand wrote:
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to
Interestingly, the size fields under top add up to 950GB without getting
to the bottom of the list, yet it
shows NO swap being used, and 150MB free out of 768MB of RAM! So how can the
size of the existing processes
exceed the size of the virtual memory in use by a factor of 2, and the size
If the critical working set of VM pages is larger than available
memory, then the system will become exceedingly slow. This is
indicated by a substantial amount of major page fault activity.
Since disk is 10,000 times slower than RAM, major page faults can
really slow things down
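On Solaris you can watch for this with, e.g.:
% vmstat -s | grep 'major faults'    (cumulative major fault count)
% vmstat 5                           (sustained paging shows up in the pi/po and sr columns)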
On Sat, Nov 29, 2008 at 22:19, Ray Clark [EMAIL PROTECTED] wrote:
Pantzer5: Thanks for the top size explanation.
Re: eeprom kernelbase=0x8000
So this makes the kernel load at the 2G mark? What is the default, something
like C00... for 3G?
Yes on both questions (I have not checked the
On Sun, Nov 30, 2008 at 00:04, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Sat, 29 Nov 2008, Mattias Pantzare wrote:
The big difference in memory usage between UFS and ZFS is that ZFS
will have all data it caches mapped in the kernel address space. UFS
leaves data unmapped.
Another big
On Sun, Nov 30, 2008 at 01:10, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Sun, 30 Nov 2008, Mattias Pantzare wrote:
Another big difference I have heard about is that Solaris 10 on x86 only
uses something like 64MB of filesystem caching by default for UFS. This
is
different than SPARC where
I think you're confusing our clustering feature with the remote
replication feature. With active-active clustering, you have two closely
linked head nodes serving files from different zpools using JBODs
connected to both head nodes. When one fails, the other imports the
failed node's pool and
On Sat, Nov 15, 2008 at 00:46, Richard Elling [EMAIL PROTECTED] wrote:
Adam Leventhal wrote:
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
That is _not_ active-active, that is active-passive.
If you have an active-active system I can access the same data via both
Planning to stick in a 160-gig Samsung drive and use it for a lightweight
household server. Probably some Samba usage, and a tiny bit of Apache
RADIUS. I don't need it to be super-fast, but slow as watching paint dry
won't
You know that you need a minimum of 2 disks to form a (mirrored)
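A minimal example (hypothetical device names):
# zpool create tank mirror c0d0 c1d0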
On Sun, Oct 26, 2008 at 5:31 AM, Peter Baumgartner [EMAIL PROTECTED] wrote:
I have a 7x150GB drive (+1 spare) raidz pool that I need to expand.
There are 6 open drive bays, so I bought 6 300GB drives and went to
add them as a raidz vdev to the existing zpool, but I didn't realize
the raidz
On Sun, Oct 26, 2008 at 3:00 PM, Peter Baumgartner [EMAIL PROTECTED] wrote:
On Sun, Oct 26, 2008 at 4:02 AM, Mattias Pantzare [EMAIL PROTECTED] wrote:
On Sun, Oct 26, 2008 at 5:31 AM, Peter Baumgartner [EMAIL PROTECTED] wrote:
I have a 7x150GB drive (+1 spare) raidz pool that I need to expand
On Wed, Oct 8, 2008 at 10:29 AM, Ross [EMAIL PROTECTED] wrote:
bounce
Can anybody confirm how bug 6729696 is going to affect a busy system running
synchronous NFS shares? Is the sync activity from NFS
going to be enough to prevent resilvering from ever working, or have I
misunderstood
On Sun, Sep 14, 2008 at 12:37 AM, Anon K Adderlan
[EMAIL PROTECTED] wrote:
How do I add my own Attributes to a ZAP object, and then search on it?
For example, I want to be able to attach the gamma value to each image, and
be able to search and
sort them based on it. From reading the on disk
2008/8/27 Richard Elling [EMAIL PROTECTED]:
Either the drives should be loaded with special firmware that
returns errors earlier, or the software LVM should read redundant data
and collect statistics on whether the drive is well outside its usual
response latency.
ZFS will handle this case as
2008/8/26 Richard Elling [EMAIL PROTECTED]:
Doing a good job with this error is mostly about not freezing
the whole filesystem for the 30sec it takes the drive to report the
error.
That is not a ZFS problem. Please file bugs in the appropriate category.
Whose problem is it? It can't be the
2008/8/13 Jonathan Wheeler [EMAIL PROTECTED]:
So far we've established that in this case:
*Version mismatches aren't causing the problem.
*Receiving across the network isn't the issue (because I have the exact same
issue restoring the stream directly on
my file server).
*All that's left was
2008/8/10 Jonathan Wheeler [EMAIL PROTECTED]:
Hi Folks,
I'm in the very unsettling position of fearing that I've lost all of my data
via a zfs send/receive operation, despite ZFS's legendary integrity.
The error that I'm getting on restore is:
receiving full stream of faith/[EMAIL
Therefore, I wonder if something like block unification (which seems to be
an old idea, though I know of it primarily through Venti[1]) would be useful
to ZFS. Since ZFS checksums all of the data passing through it, it seems
natural to hook those checksums and have a hash table from checksum
4. While reading an offline disk causes errors, writing does not!
*** CAUSES DATA LOSS ***
This is a big one: ZFS can continue writing to an unavailable pool. It
doesn't always generate errors (I've seen it copy over 100MB
before erroring), and if not spotted, this *will* cause data
2008/7/20 James Mauro [EMAIL PROTECTED]:
Is there an optimal method of making a complete copy of a ZFS, aside from the
conventional methods (tar, cpio)?
We have an existing ZFS that was not created with the optimal recordsize.
We wish to create a new ZFS with the optimal recordsize (8k), and
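Since zfs send/recv reproduces the source's blocks as-is, the copy has to go
through the POSIX layer for the new recordsize to take effect; a sketch with
hypothetical names:
# zfs create -o recordsize=8k tank/new
# cd /tank/old && tar cf - . | (cd /tank/new && tar xf -)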
2008/6/21 Andrius [EMAIL PROTECTED]:
Hi,
there is a small confusion with send/receive.
zfs andrius/sounds was snapshotted @421 and should be copied to a new zpool
beta that is on an external USB disk.
After
/usr/sbin/zfs send andrius/[EMAIL PROTECTED] | ssh host1 /usr/sbin/zfs recv
beta
or
The problem with that argument is that 10,000 users on one VxFS or UFS
filesystem is no problem at all, be it /var/mail or home directories.
You don't even need a fast server for that. 10,000 ZFS file systems are
a problem.
So, if it makes you happier, substitute mail with home directories.
2008/6/6 Richard Elling [EMAIL PROTECTED]:
Richard L. Hamilton wrote:
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
should
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
[EMAIL PROTECTED] /var/mail echo *|wc
1 20632 185597
[EMAIL PROTECTED]
2008/5/24 Hernan Freschi [EMAIL PROTECTED]:
I let it run while watching TOP, and this is what I got just before it hung.
Look at free mem. Is this memory allocated to the kernel? Can I allow the
kernel to swap?
No, the kernel will not use swap for this.
But most of the memory used by the
2. In a raidz do all the disks have to be the same size?
I think this one has been answered, but I'll add/ask this: I'm not sure what
would happen if you had 3x 320GB and 3x 1TB in a 6-disk raidz array. I know
you'd have a 6 * 320GB array, but I don't know if the unused space on the
2008/3/7, Paul Kraus [EMAIL PROTECTED]:
On Thu, Mar 6, 2008 at 8:56 PM, MC [EMAIL PROTECTED] wrote:
1. In zfs can you currently add more disks to an existing raidz? This is
important to me
as I slowly add disks to my system one at a time.
No, but Solaris and Linux RAID5 can do
2008/3/6, Brian Hechinger [EMAIL PROTECTED]:
On Thu, Mar 06, 2008 at 11:39:25AM +0100, [EMAIL PROTECTED] wrote:
I think it's specfically problematic on 32 bit systems with large amounts
of RAM. Then you run out of virtual address space in the kernel quickly;
a small amount of RAM (I
I've just put my first ZFS into production, and users are complaining about
some regressions.
One problem for them is that now they can't see all the users' directories in
the automount point: the homedirs used to be part of a single UFS, and were
browsable with the correct autofs option.
2008/2/17, Bob Friesenhahn [EMAIL PROTECTED]:
I am attempting to create per-user ZFS filesystems under an exported
/home ZFS filesystem. This would work fine except that the
ownership/permissions settings applied to the mount point of those
per-user filesystems on the server are not seen by
2008/2/17, Bob Friesenhahn [EMAIL PROTECTED]:
On Sun, 17 Feb 2008, Mattias Pantzare wrote:
Have the clients mounted your per-user filesystems? It is not enough
to mount /home.
It is enough to mount /home if the client is Solaris 10. I did not
want to mess with creating per-user mounting
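The server-side setup under discussion is roughly (hypothetical names):
# zfs set sharenfs=on tank/home
# zfs create tank/home/alice    (inherits sharenfs, exported as its own share)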
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with 128K recordsize, then it'll
keep that forever...
Files have nothing to do with it. The recordsize is a file system
parameter. It gets a little more complicated because the
2008/2/13, Sam [EMAIL PROTECTED]:
I saw some other people have a similar problem but reports claimed this was
'fixed in release 42' which is many months old, I'm running the latest
version. I made a RAIDz2 of 8x500GB which should give me a 3TB pool:
Disk manufacturers use SI (decimal) units, where
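The arithmetic, assuming the usual decimal-vs-binary confusion:
500 GB (decimal) = 500 * 10^9 bytes ~= 465.7 GiB
8 x 500GB in raidz2 -> 6 data disks: 6 * 465.7 GiB ~= 2.73 TiB usable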
2007/11/10, asa [EMAIL PROTECTED]:
Hello all. I am working on an NFS failover scenario between two
servers. I am getting the stale file handle errors on my (linux)
client which point to there being a mismatch in the fsid's of my two
filesystems when the failover occurs.
I understand that the
2007/11/9, Anton B. Rang [EMAIL PROTECTED]:
The comment in the header file where this error is defined says:
/* volume is too large for 32-bit system */
So it does look like it's a 32-bit CPU issue. Odd, since file systems don't
normally have any sort of dependence on the CPU type
2007/10/12, Krzys [EMAIL PROTECTED]:
Hello all, sorry if somebody already asked this or not. I was playing today
with
iSCSI and I was able to create a zpool and then via iSCSI I can see it on two
other hosts. I was curious if I could use ZFS to have it shared on those two
hosts but apparently
2007/9/23, James L Baker [EMAIL PROTECTED]:
I'm a small-time sysadmin with big storage aspirations (I'll be honest
- for a planned MythTV back-end, and *ahem*, other storage), and I've
recently discovered ZFS. I'm thinking about putting together a
homebrew SAN with a NAS head, and am wondering
The problems I'm experiencing are as follows:
ZFS creates the storage pool just fine, sees no errors on the drives, and
seems to work great...right up until I attempt to put data on the drives.
After only a few moments of transfer, things start to go wrong. The system
doesn't power off,
2007/6/25, [EMAIL PROTECTED] [EMAIL PROTECTED]:
I wouldn't de-duplicate without actually verifying that two blocks were
actually bitwise identical.
Absolutely not, indeed.
But the nice property of hashes is that if the hashes don't match then
the inputs do not either.
I.e., the likelihood of
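When ZFS later gained deduplication (well after this thread), that exact
tradeoff became a setting: trust the hash alone, or byte-compare blocks whose
hashes match:
# zfs set dedup=sha256 tank
# zfs set dedup=sha256,verify tank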
2007/6/10, arb [EMAIL PROTECTED]:
Hello, I'm new to OpenSolaris and ZFS so my apologies if my questions are naive!
I've got solaris express (b52) and a zfs mirror, but this command locks up my
box within 5 seconds:
% cmp first_4GB_file second_4GB_file
It's not just these two 4GB files, any