Thanks to all who have responded. I spent 2 weekends working through
the best practices that Jerome recommended -- it's quite a mouthful.
On 8/17/06, Roch [EMAIL PROTECTED] wrote:
My general principles are:
If you can, to improve your 'Availability' metrics,
let ZFS handle one
Daniel,
This is cool. I've convinced my DBA to attempt the same stunt. We
are just starting with the testing so I'll post results as I get them.
Would appreciate it if you could share your zpool layout.
--
Just me,
Wire ...
On 8/26/06, Daniel Rock [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED]
I imagine that extending zpool attach to attach new devices to
RAID-Z sets would be exactly what you want. Of course, I am wholly
unaware of the implementation details so the technical difficulties
might be tremendous.
In any case, there is no device removal yet, so either of the scenarios listed
by
Hi,
On 9/4/06, UNIX admin [EMAIL PROTECTED] wrote:
[Solaris 10 6/06 i86pc]
...
Then I added two more disks to the pool with the `zpool add -fn space c2t10d0
c2t11d0`, whereby I determined that those would be added as a RAID0, which is
not what I wanted. `zpool add -f raidz c2t10d0 c2t11d0`
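For reference, a dry run previews the resulting layout before committing anything (pool and device names taken from the quote above); note that this adds a new top-level raidz vdev rather than widening the existing one:
zpool add -n space raidz c2t10d0 c2t11d0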
On 9/13/06, Thomas Burns [EMAIL PROTECTED] wrote:
BTW -- did I guess right wrt where I need to set arc.c_max (etc/system)?
I think you need to use mdb. As Mark and Johansen mentioned, only do
this as a last resort.
# mdb -kw
> arc::print -a c_max
d3b0f874 c_max = 0x1d0fe800
> d3b0f874 /W
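The write itself would then be along these lines (the address is the one printed above; the value is purely illustrative, and /W writes 32 bits -- use /Z for a 64-bit field):
> d3b0f874/W 0x10000000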
On 9/13/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Sure, if you want *everything* in your pool to be mirrored, there is no
real need for this feature (you could argue that setting up the pool
would be easier if you didn't have to slice up the disk though).
Not necessarily. Implementing this
On 9/15/06, can you guess? [EMAIL PROTECTED] wrote:
Implementing it at the directory and file levels would be even more flexible:
redundancy strategy would no longer be tightly tied to path location, but
directories and files could themselves still inherit defaults from the
filesystem and
Edward,
/etc/zpool.cache contains data pointing to devices involved in a
zpool. Changes to ZFS datasets are reflected in the actual zpool so
destroying a zfs dataset should not change zpool.cache.
zfs destroy is the correct command to destroy a file system.
It will be easier if we can know
Check the permissions on your mountpoint after you unmount the dataset.
Most likely, you have something like rwx------.
On 10/5/06, Stefan Urbat [EMAIL PROTECTED] wrote:
I want to know, if anybody can check/confirm the following issue I observed
with a fully patched Solaris 10 u2 with ZFS
On 10/5/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Unmount all the ZFS filesystems and check the permissions on the mount
points and the paths leading up to them.
I experienced the same problem and narrowed it down to this:
essentially, chdir("..") in rm -rf failed to ascend out of the
directory.
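A quick way to check (paths illustrative):
zfs umount -a
ls -ld /export /export/home
and look for overly restrictive modes on the underlying mount-point directories.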
Jeremy,
The intended uses of the two are vastly different.
A snapshot is a point-in-time image of a file system that, as you have
pointed out, may have missed several versions of changes regardless of
frequency.
Versioning (a la VAX -- ok, I feel old now) keeps a version of every
change up to a
On 10/6/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
One of the big problems with CVS and SVN and Microsoft SourceSafe is
that you don't have the benefits of version control most of the time,
because all commits are *public*.
David,
That is exactly what branches are for in CVS and SVN. Dunno
On 10/7/06, Ben Gollmer [EMAIL PROTECTED] wrote:
On Oct 6, 2006, at 6:15 PM, Nicolas Williams wrote:
What I'm saying is that I'd like to be able to keep multiple
versions of
my files without echo * or ls showing them to me by default.
Hmm, what about file.txt -> ._file.txt.1, ._file.txt.2,
On 10/9/06, Jonathan Edwards [EMAIL PROTECTED] wrote:
We want to differentiate files that are created intentionally from
those that are just versions. If files start showing up on their
own, a lot of my scripts will break. Still, an FV-aware
shell/program/API can accept an environment
On 11/11/06, Bart Smaalders [EMAIL PROTECTED] wrote:
It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.
I'll be more concerned about the
On 11/14/06, Jeremy Teo [EMAIL PROTECTED] wrote:
I'm more inclined to split instead of fork. ;)
I prefer split too since that's what most of the storage guys are
using for mirrors. Still, we are not making any progress on helping
Rainer out of his predicaments.
--
Just me,
Wire ...
Ian,
The first error is correct in that zpool create will not, unless
forced, create a pool if it knows that another filesystem
resides on the target vdev.
The second error was caused by your removal of the slice.
What I find disconcerting is that the zpool was still created.
Can you provide the
On 12/8/06, Mark Maybee [EMAIL PROTECTED] wrote:
Yup, your assumption is correct. We currently do compression below the
ARC. We have contemplated caching data in compressed form, but have not
really explored the idea fully yet.
Hmm... interesting idea.
That will incur CPU to do a decompress
Luke,
On 12/11/06, Luke Lonergan [EMAIL PROTECTED] wrote:
The performance comes from the parallel version of pgsql, which uses all
CPUs and I/O channels together (and not special settings of ZFS). What sets
this apart from Oracle is that it's an automatic parallelism that leverages
the
On 1/13/07, Richard Elling [EMAIL PROTECTED] wrote:
And a third choice is cutting your 40GByte drives in two such that you
have a total of 6x 20 GByte partitions spread across your 80 and 40 GByte
drives. Then install three 2-way mirrors across the disks. Some people
like such things, and
On 1/15/07, mike [EMAIL PROTECTED] wrote:
1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely
be considered a replacement for that? I assume that anything below the
filesystem level in regards to redundancy could be an added bonus, but
is it necessary at all?
ZFS is more
Hi Robert,
On 1/14/07, Robert Milkowski [EMAIL PROTECTED] wrote:
I did 'zfs umount -a' in a global zone and all (non busy) datasets
also in local zone were unmounted (one dataset was delegated to the
local zone and other datasets were created inside). Well, I believe it
shouldn't be
On 1/25/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Having snapshots in the filesystem that work so well is really nice.
How are y'all quiescing the DB?
So the DBA has a cronjob that puts the DB (Oracle) into hot backup
mode, takes a snapshot of all affected filesystems (i.e. log +
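A minimal sketch of such a job (SQL and dataset names are illustrative, not the actual script; older Oracle releases need per-tablespace begin/end backup):
#!/bin/sh
# put the database into hot backup mode, snapshot, then end backup mode
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
EOF
SNAP=`date +%Y%m%d-%H%M`
zfs snapshot tank/oradata@$SNAP
zfs snapshot tank/oralog@$SNAP
sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
EOF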
On 1/30/07, David Magda [EMAIL PROTECTED] wrote:
What about a rotating spare?
When setting up a pool a lot of people would (say) balance things
around buses and controllers to minimize single points of failure,
and a rotating spare could disrupt this organization, but would it be
useful at
On 2/1/07, Marion Hakanson [EMAIL PROTECTED] wrote:
There's also the potential of too much seeking going on for the raidz pool,
since there are 9 LUN's on top of 7 physical disk drives (though how Hitachi
divides/stripes those LUN's is not clear to me).
Marion,
That is the part of your setup
Correct me if I'm wrong but fma seems like a more appropriate tool to
track disk errors.
--
Just me,
Wire ...
On 2/22/07, TJ Easter [EMAIL PROTECTED] wrote:
All,
I think dtrace could be a viable option here. crond to run a
dtrace script on a regular basis that times a series of reads and
Jens,
What's the output of
zpool list
zfs list
?
On 2/26/07, Jens Elkner [EMAIL PROTECTED] wrote:
Is somebody able to explain this?
elkner.isis /zpool1 df -h
...
zpool1      21T    623G     20T     3%    /zpool1
...
elkner.isis /zpool1 ls -al
total 1306050271
drwxr-xr-x 2
Jeff,
This is great information. Thanks for sharing.
Quickio is almost required if you want vxfs with Oracle. We ran a
benchmark a few years back and found that vxfs is fairly cache hungry
and ufs with directio beats vxfs without quickio hands down.
Take a look at what mpstat says on xcalls.
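For example:
mpstat 5
and watch the xcal column for excessive cross-calls.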
On 3/24/07, Frank Cusack [EMAIL PROTECTED] wrote:
On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan [EMAIL PROTECTED] wrote:
I should be able to reply to you next Tuesday -- my 6140 SATA
expansion tray is due to arrive. Meanwhile, what kind of problem do
you have with the 3511?
Frank
Cool blog! I'll try a run at this on the benchmark.
On 3/27/07, Rayson Ho [EMAIL PROTECTED] wrote:
BTW, did anyone try this??
http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for
Rayson
On 3/27/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
As promised. I got my 6140 SATA
On 3/28/07, Fred Oliver [EMAIL PROTECTED] wrote:
Has consideration been given to setting multiple properties at once in a
single zfs set command?
For example, consider attempting to maintain quota == reservation, while
increasing both. It is impossible to maintain this equality without some
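As a sketch of the point (dataset name illustrative), today this takes two separate commands:
zfs set reservation=20g tank/home/alice
zfs set quota=20g tank/home/alice
with a window in between where the two properties disagree.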
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
On 29/03/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Hi Richard,
I am not talking about source(ASCII) files. How about versioning
production data? I talked about file level snapshots because
snapshotting entire filesystem does not make
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
Actually, recent version control systems can be very efficient at
storing binary files.
Still nowhere near as efficient as a ZFS snapshot.
Careful consideration of the layout of your file
system applies regardless of which type of file system it
On 3/30/07, Nicholas Lee [EMAIL PROTECTED] wrote:
How do hard-links work across zfs mount/filesystems in the same pool?
They don't.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/vnode.c#1322
My guess is that it should be technically possible in the same pool
though but
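A quick way to confirm (paths illustrative):
ln /tank/fs1/somefile /tank/fs2/somefile
fails because link(2) returns EXDEV across separate filesystems, even within the same pool.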
On 4/3/07, Frank Cusack [EMAIL PROTECTED] wrote:
As promised. I got my 6140 SATA delivered yesterday and I hooked it
up to a T2000 on S10u3. The T2000 saw the disks straight away and has
been working for the last hour. I'll be running some benchmarks on it.
I'll probably have a week with it
On 4/17/07, David R. Litwin [EMAIL PROTECTED] wrote:
So, it comes to this: Why, precisely, can ZFS not be
released under a License which _is_ GPL
compatible?
So why do you think it should be released under a GPL-compatible license?
--
Just me,
Wire ...
On 4/17/07, David R. Litwin [EMAIL PROTECTED] wrote:
On 17/04/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 4/17/07, David R. Litwin [EMAIL PROTECTED] wrote:
So, it comes to this: Why, precisely, can ZFS not be
released under a License which _is_ GPL
compatible?
So why do you think it should
Hi Tim,
I run a SAM-FS setup for our main file server and we love the
backup/restore parts that you described.
The main concern I have with SAM fronting the entire conversation is
data integrity. Unlike ZFS, SAM-FS does not do end-to-end checksumming.
We have considered the setup you
On 4/20/07, Tim Thomas [EMAIL PROTECTED] wrote:
My initial reaction is that the world has got by without file systems
that can do this for a long time...so I don't see the absence of this as
a big deal. On the other hand, it's hard to argue against a feature that
I admit that this is typically
On 4/20/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Wee,
Friday, April 20, 2007, 5:20:00 AM, you wrote:
WYT On 4/20/07, Robert Milkowski [EMAIL PROTECTED] wrote:
You can limit how much memory zfs can use for its caching.
WYT Indeed, but that memory will still be locked. How can you
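For reference, on builds that have the tunable, the usual way to cap the cache is an /etc/system line along these lines (the value is purely illustrative):
set zfs:zfs_arc_max = 0x20000000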
On 4/24/07, Richard Elling [EMAIL PROTECTED] wrote:
Wee Yeh Tan wrote:
I didn't spot anything that reads it from /etc/system. Appreciate any
pointers.
The beauty, and curse, of /etc/system is that modules do not need to create
an explicit reader.
Grr I suspected after I replied
On 4/24/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
Is it expected that if I have filesystem tank/foo and tank/foo/bar
(mounted under /tank) then in order to be able to browse via
/net down into tank/foo/bar I need to have group/other permissions
on /tank/foo open?
You are running into
On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
okay, let's say that it is not. :)
Imagine that I set up a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the data with NFS
- on a UPS.
Then after reading the :
On 4/29/07, Christine Tran [EMAIL PROTECTED] wrote:
Jens Elkner wrote:
So please: http://learn.to/quote
We apparently need to learn German as well. -CT
It's available in English as well...
http://www.netmeister.org/news/learn2quote.html
--
Just me,
Wire ...
Blog: prstat.blogspot.com
Ian,
On 5/3/07, Ian Collins [EMAIL PROTECTED] wrote:
I don't think it was a maxed CPU problem, only one core was loaded and
the prstat numbers I could get (the reporting period was erratic) didn't
show anything nasty.
Do you have the output of 'mpstat 5'?
--
Just me,
Wire ...
Blog:
On 7/17/07, Richard Elling [EMAIL PROTECTED] wrote:
Performance-wise, these are pretty wimpy. You should be able to saturate
the array controller, even without enabling RAID-5 on it. Note that the
T3's implementation of RAID-0 isn't quite the same as other arrays, so it
may perform somewhat
On 7/17/07, Mike Salehi [EMAIL PROTECTED] wrote:
Sorry, my question is not clear enough. These pools contain a zone each.
Firstly, zonepaths on ZFS are not yet supported. But this is the
hacker's forum so...
No change for importing the ZFS pool. Now you're gonna need to hack
the zones in.
For
On 7/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Wee Yeh Tan wrote:
Firstly, zonepaths on ZFS are not yet supported. But this is the
hacker's forum so...
I don't think that is actually true, particularly given that you can use
zoneadm clone using a ZFS snapshot/clone to copy zones
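For reference, the cloning Darren mentions is along these lines (zone names illustrative; the new zone must already be configured via zonecfg):
zoneadm -z newzone clone oldzone
On builds where the zonepath lives on a ZFS dataset, this is backed by a ZFS snapshot/clone instead of a file copy.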
On 7/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
It in what is integrated into OpenSolaris and this is an OpenSolaris.org
list not an @sun.com support list for Solaris 10.
True enough. I stand corrected.
--
Just me,
Wire ...
Blog: prstat.blogspot.com
On 9/15/07, Mario Goebbels [EMAIL PROTECTED] wrote:
You can't create a RAID-Z out of two disks. You either have to go with
two mirrors (150GB and 500GB) in a pool, or the funkier variation of a
RAID-Z and mirror (4x150GB and a 350GB mirror).
Actually, you can. It may not make sense but it is
On 9/15/07, Peter Bridge [EMAIL PROTECTED] wrote:
I have:
2x150GB SATA ii disks
2x500GB SATA ii disks
I would go with mirrors. You need to set aside at least 500GB for
redundancy anyway (since you want to survive any disk failure). That
means the maximum you can get out of this setup is 800GB. With a mirror,
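A sketch of that mirrored layout (device names illustrative):
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
i.e. a 150GB mirror and a 500GB mirror striped together, for roughly 650GB usable.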
On 10/6/07, Vincent Fox [EMAIL PROTECTED] wrote:
So I went ahead and loaded 10u4 on a pair of V210 units.
I am going to set this nocacheflush option and cross my fingers and see how
it goes.
I have my ZPool mirroring LUNs off 2 different arrays. I have
single-controllers in each 3310.
On Jan 2, 2008 11:46 AM, Darren Reed [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
...
That's a sad situation for backup utilities, by the way - a backup
tool would have no way of finding out that file X on fs A already
existed as file Z on fs B. So what ? If the file got copied, byte
Your data will be striped across both vdevs after you add the 2nd
vdev. In any case, failure of one stripe device will result in the
loss of the entire pool.
I'm not sure, however, if there is any way to recover any data from the
surviving vdevs.
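For instance (names illustrative), after
zpool add tank c2t0d0
zpool status shows both top-level vdevs, and new writes are spread across them.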
On 1/2/08, Austin [EMAIL PROTECTED] wrote:
I didn't
On Wed, Feb 27, 2008 at 9:36 PM, Uwe Dippel [EMAIL PROTECTED] wrote:
I was hoping to be clear with my examples.
Within that 1 minute the user has easily received the mail alert that 5
mails have arrived, has seen the sender and deleted them. Without any trigger
of some snapshot, or storage
On Wed, Feb 27, 2008 at 10:42 PM, Marcus Sundman [EMAIL PROTECTED] wrote:
Darren J Moffat [EMAIL PROTECTED] wrote:
Marcus Sundman wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Feb 27, 2008 at 05:54:29AM +0200, Marcus Sundman wrote:
Nathan Kroenert [EMAIL PROTECTED]
Bob,
Are you sure that /pandora is mounted?
I hazard a guess that the error message is caused by mounting
zpool:pandora when /pandora is not empty. I notice that snv81 started
mounting zfs in level-order whereas my snv73 did not.
On Sun, Mar 9, 2008 at 10:16 AM, Bob Netherton [EMAIL PROTECTED]
You are looking for mdb.
echo '0t22861::pid2proc |::walk thread |::findstack' | mdb -k
On Tue, Mar 18, 2008 at 11:28 PM, Vahid Moghaddasi [EMAIL PROTECTED] wrote:
Thanks for your reply,
Before I used lsof, I tried pstack and truss -p but I get the following
message:
# pstack 22861
I'm just thinking out loud. What would be the advantage of having
periodic snapshots taken within ZFS vs. invoking them from an external
facility?
On Thu, Apr 10, 2008 at 1:21 AM, sean walmsley
[EMAIL PROTECTED] wrote:
I haven't used it myself, but the following blog describes an automatic
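As an external-facility example, a crontab entry along these lines (dataset name illustrative; a real job would also recycle old snapshots) covers the basic case:
0 * * * * /usr/sbin/zfs snapshot tank/home@hourly-`date +\%H`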
IIRC, EFI boot requires support from the system BIOS.
On Sun, May 18, 2008 at 1:54 AM, A Darren Dunham [EMAIL PROTECTED] wrote:
On Fri, May 16, 2008 at 07:29:31PM -0700, Paul B. Henson wrote:
For ZFS root, is it required to have a partition and slices? Or can I just
give it the whole disk
On Wed, May 21, 2008 at 10:55 PM, Krutibas Biswal
[EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
Originally you wanted to get it multipathed which was the case by
default. Now you have disabled it (well, you still have two paths but
no automatic failover).
Thanks. Can somebody point me to