On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain t...@telegraphics.com.au wrote:
On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts mger...@gmail.com wrote
from cache. A dtrace analysis of
just how random the reads are would be interesting. I think that
hotspot.d from the DTrace Toolkit would be a good starting place.
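As a hedged illustration (not from the thread), a DTrace one-liner along these lines shows how the read offsets are spread across each device; a wide, flat distribution suggests random reads, a narrow one suggests sequential or cached access. The aggregation name and probe details are my own sketch:
# dtrace -n 'io:::start /args[0]->b_flags & B_READ/ { @dist[args[1]->dev_statname] = quantize(args[0]->b_blkno); }'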
--
Mike Gerdts
http://mgerdts.blogspot.com/
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html
Mike
On Thu, Jul 10, 2008 at 4:42 AM, Darren J Moffat darren.mof...@sun.com wrote:
I regularly create new zfs filesystems or snapshots and I find it
annoying
characteristics in
this area?
Is there less to be concerned about from a performance standpoint if
the workload is primarily read?
To maximize the efficacy of dedup, would it be best to pick a fixed
block size and match it between the layers of zfs?
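A minimal sketch of pinning one fixed block size at both layers (dataset, zvol, and size values are assumptions, not from the thread); dedup matches whole blocks, so identical data only dedups when the block boundaries line up:
# zfs create -V 100g -o volblocksize=8k tank/lun0      (on the storage server)
client# zfs create -o recordsize=8k datapool/fs        (on the pool built from that LUN)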
--
Mike Gerdts
http://mgerdts.blogspot.com
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
richard.ell...@gmail.com wrote:
Good question! Additional thoughts below...
On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (iSCSI, FC) services
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote:
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
richard.ell...@gmail.com wrote:
Good question! Additional thoughts below...
On Nov 24, 2009, at 6:37 AM, Mike
Maybe to create snapshots after the fact as a part of some larger disaster
recovery effort.
(What did my pool/file-system look like at 10 am?... say 30 minutes before the
database barfed on itself...)
With some enhancements, might this functionality be extendable into a poor
man's CDP offering
and sha256 implemented in hardware?
I've been waiting very patiently to see this code go in. Thank you
for all your hard work (and the work of those that helped too!).
--
Mike Gerdts
http://mgerdts.blogspot.com/
a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this
keeps the MAUs busy but the rest of the core is still idle for things
like compression, TCP, etc.
--
Mike Gerdts
http://mgerdts.blogspot.com/
blocks stay deduped in the ARC, it means
that it is feasible for every block that is accessed with any frequency
to be in memory. Oh yeah, and you save a lot of disk space.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Anyone have any creative solutions for near-synchronous replication between
2 ZFS hosts?
Near-synchronous, meaning an RPO approaching zero.
I realize performance will take a hit.
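There is no single built-in answer; as a hedged sketch, the usual low-tech approach is a tight loop of incremental send/receive. Pool, dataset, and host names below are assumptions, and the achievable RPO is roughly the loop interval plus transfer time:
# An illustrative ksh loop: ship incremental snapshots of tank/data
# to a standby host every 30 seconds.
prev=$(date +%s)
zfs snapshot tank/data@$prev
zfs send tank/data@$prev | ssh standby zfs receive -F backup/data
while sleep 30; do
    cur=$(date +%s)
    zfs snapshot tank/data@$cur
    zfs send -i @$prev tank/data@$cur | ssh standby zfs receive -F backup/data
    zfs destroy tank/data@$prev
    prev=$cur
done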
Thanks,
Mike
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way to carve up the space, so I used a single, flat ZFS file
Does anyone know when this will be available? Project says Q4 2009 but does not
give a build.
Any reason why ZFS would not work on an FDE (Full Disk Encryption) hard drive?
(arcsize)
Target Size (Adaptive): 4207 MB (c)
That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system
that you have booted from a 32-bit kernel?
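A quick way to check (generic commands, with illustrative output):
$ isainfo -kv
64-bit amd64 kernel modules
A 32-bit kernel reports "32-bit i386 kernel modules" here; a 64-bit SPARC kernel reports sparcv9 rather than amd64.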
--
Mike Gerdts
http://mgerdts.blogspot.com/
will give each thing X/Y space. This is
because it is quite likely that someone will do the operation Y++ and
there are very few storage technologies that allow you to shrink the
amount of space allocated to each item.
--
Mike Gerdts
http://mgerdts.blogspot.com
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda bertram.fuk...@hp.com wrote:
Thanks for the info Mike.
Just so I'm clear, you suggest: 1) create a single zpool from my LUN, 2) create
a single ZFS filesystem, 3) create 2 zones in that ZFS filesystem. Sound right?
Correct
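As a rough sketch of those three steps (device, pool, and zone names are placeholders, not from the thread), followed by a zoneadm install for each zone:
# zpool create zonepool c0t1d0
# zfs create zonepool/zones
# zonecfg -z zone1 'create; set zonepath=/zonepool/zones/zone1; commit'
# zonecfg -z zone2 'create; set zonepath=/zonepool/zones/zone2; commit'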
--
Mike Gerdts
http
...@migrate \
| ssh host2 zfs receive zones/zo...@migrate
host2# zonecfg -z zone1 create -a /zones/zone1
host2# zonecfg -z zone1 attach
host2# zoneadm -z zone1 boot
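Reading between the truncated lines, the sending side probably looked something like the following; the host, zone, and dataset names here are assumptions used only for illustration:
host1# zoneadm -z zone1 halt
host1# zoneadm -z zone1 detach
host1# zfs snapshot zones/zone1@migrate
host1# zfs send zones/zone1@migrate | ssh host2 zfs receive zones/zone1@migrate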
--
Mike Gerdts
http://mgerdts.blogspot.com/
release (045C)8626. On
August 11 they released firmware revisions 8820, 8850, and 02G9,
depending on the drive model.
http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=3043&DwnldID=17485&lang=eng
--
Mike Gerdts
http://mgerdts.blogspot.com/
that will lead them to unsympathetic ears if things go poorly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Sep 2, 2009 at 4:06 PM, cindy.swearin...@sun.com wrote:
Hi Mike,
I reviewed this doc and the only issue I have with it now is that it uses
/var/tmp as an example of storing snapshots in long-term storage
elsewhere.
One other point comes from zfs(1M):
The format of the stream
On Wed, Sep 2, 2009 at 4:46 PM, Richard Elling richard.ell...@gmail.com wrote:
Thanks Cindy!
Mike, et al.,
I think the confusion surrounds replacing an enterprise backup
scheme with send-to-file. There is nothing wrong with send-to-file,
it functions as designed. But it isn't designed
Try a: zfs get -pH -o value creation snapshot
-- MikeE
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker
Sent: Friday, August 28, 2009 10:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss]
/alice/proj1
Alice$ rm /etc/shadow
Alice$ cp myshadow /etc
Alice$ su -
root#
--
Mike Gerdts
http://mgerdts.blogspot.com/
. But
if the snapshots were created after the mount, they are not accessible
from inside the zone.
Is this correct behavior or is it a bug? Are there any workarounds?
Thanks in advance for all comments.
Regards,
Mike
, as this
type of data is already presented via zpool status -v when
corruption is detected.
http://docs.sun.com/app/docs/doc/819-5461/gbctx?a=view
--
Mike Gerdts
http://mgerdts.blogspot.com/
me at the
moment.
At an average file size of 45 KB, that translates to about 3 MB/sec.
As you run two data streams, you are seeing throughput that looks
kinda like 2 * 3 MB/sec.
With 4 backup streams do you get something that looks like 4 * 3 MB/s?
How does that affect iostat output?
--
Mike
in
the parallelism gaps as the longer-running ones finish.
3. That is, there is sometimes benefit in having many more jobs to run
than you have concurrent streams. This avoids having one save set
that finishes long after all the others because of poorly balanced
save sets.
--
Mike Gerdts
http
0 - 9.76G -
# rmdir .zfs/snapshot/foo
# zfs list | grep foo
no output
I don't know of a similar shortcut for the create or clone subcommands.
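For context, a minimal sketch of what that shortcut is equivalent to (dataset and snapshot names assumed):
# zfs snapshot tank/fs@foo
# rmdir /tank/fs/.zfs/snapshot/foo     (same effect as: zfs destroy tank/fs@foo)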
--
Mike Gerdts
http://mgerdts.blogspot.com/
iSCSI LUNs is probably already giving you
most of this benefit (assuming low latency on network connections).
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer ed_spen...@umanitoba.ca wrote:
On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote:
The DBA's that I know use files that are at least hundreds of
megabytes in size. Your problem is very different.
Yes, definitely.
I'm relating records in a table to my
/pipermail/zfs-discuss/2007-September/013233.html
Quite likely related to:
http://bugs.opensolaris.org/view_bug.do?bug_id=6684721
In other words, it was a buggy Sun component that didn't do the right
thing with cache flushes.
--
Mike Gerdts
http://mgerdts.blogspot.com
?
It appears as though there is an upgrade path.
http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html
However, the troll that you have to pay to follow that path demands a
hefty sum ($7995 list). Oh, and a reboot is required. :)
--
Mike Gerdts
http://mgerdts.blogspot.com
, Virtual PC) have the
same default behaviour as VirtualBox?
I've lost a pool due to LDoms doing the same. This bug seems to be related.
http://bugs.opensolaris.org/view_bug.do?bug_id=6684721
--
Mike Gerdts
http://mgerdts.blogspot.com/
/2009/US/07/15/quadrillion.dollar.glitch/index.html
- Rich
(Footnote: I ran ntpdate between starting the scrub and it finishing,
and time rolled backwards. Nothing more exciting.)
And Visa is willing to waive the $15 over-the-limit fee associated with
the errant charge...
--
Mike Gerdts
http
report contains more detail of the configuration. One thing
not covered in that bug report is that the S10u7 ldom has 2048 MB of
RAM and the 2009.06 ldom has 2024 MB of RAM.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Mon, Jul 13, 2009 at 3:16 PM, Joerg
Schilling joerg.schill...@fokus.fraunhofer.de wrote:
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Mon, 13 Jul 2009, Mike Gerdts wrote:
FWIW, I hit another bug if I turn off primarycache.
http://defect.opensolaris.org/bz/show_bug.cgi?id
128K. I thought I had seen excessive reads
there too, but now I can't reproduce that. Creating another fs with
recordsize=8k seems to make this behavior go away - things seem to be
working as designed. I'll go update the (nota-)bug.
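For reference, a small sketch of creating such a file system (names assumed); recordsize only affects blocks written after it is set, so setting it at creation time is simplest:
# zfs create -o recordsize=8k tank/db
# zfs get recordsize tank/db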
--
Mike Gerdts
http://mgerdts.blogspot.com
# uname -srvp
SunOS 5.11 snv_111b sparc
--
Mike Gerdts
http://mgerdts.blogspot.com/
ongoing operation.
--
Mike Gerdts
http://mgerdts.blogspot.com/
/lib/libbc/libc/gen/common/readdir.c
The libbc version hasn't changed since the code became public. You
can get to an older libc variant of it by clicking on the history link
or using the appropriate hg command to get a specific changeset.
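A hedged example of such a command (the revision is a placeholder and the path assumes a local clone of the onnv-gate repository):
$ hg cat -r <changeset> usr/src/lib/libbc/libc/gen/common/readdir.c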
--
Mike Gerdts
http://mgerdts.blogspot.com
that returns the entries for . and .. out of order.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Hi,
I'd like to be able to select ZFS filesystems based on the value of their properties.
Something like this:
zfs select mounted=yes
Is anyone aware if this feature might be available in the future?
If not, is there a clean way of achieving the same result?
Thanks, Mike.
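One workaround sketch, using only documented zfs list output and a filter:
$ zfs list -H -o name,mounted | nawk '$2 == "yes" { print $1 }'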
very tidy, thanks! :)
is available through
sunsolve if you have a support contract.
--
Mike Gerdts
http://mgerdts.blogspot.com/
/thread.jspa?messageID=377018
I have no idea of the quality or correctness of this solution.
--
Mike Gerdts
http://mgerdts.blogspot.com/
===
GuiErrMsg0x00: Success.
r...@nfs0009:~#
Perhaps you have changed the configuration of the array since the last
reconfiguration boot. If you run devfsadm then run format, does it
see more disks?
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, May 6, 2009 at 2:54 AM, casper@sun.com wrote:
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework
How about a generic zfs options field in the JumpStart profile?
(essentially an area where options can be specified that are all applied
to the boot-pool (with provisions to deal with a broken-out-var))
That should future-proof things to some extent, allowing for
compression=x, copies=x,
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was delivered as part of the OpenSolaris SPARC
preview. Can anyone
Create the zpool with:
zpool create <name> <dev(s)> log <dev(s)> - the log vdev(s) serve as the ZIL
zpool create <name> <dev(s)> cache <dev(s)> - the cache vdev(s) serve as the L2ARC
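A concrete sketch with placeholder device names; both vdev types can also be added to an existing pool with zpool add:
# zpool create tank mirror c0t0d0 c0t1d0 log c0t2d0 cache c0t3d0
# zpool add tank log c0t4d0
# zpool add tank cache c0t5d0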
On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling
richard.ell...@gmail.com wrote:
Gary Mills wrote:
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
Gary Mills
of those finding this conversation in the archives,
this looks like it will be fixed in snv_114.
http://bugs.opensolaris.org/view_bug.do?bug_id=6824968
http://hg.genunix.org/onnv-gate.hg/rev/4f68f041ddcd
--
Mike Gerdts
http://mgerdts.blogspot.com/
Wow... that's seriously cool!
Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html and
now we're really getting somewhere...
Nice to see this level of innovation here. Anyone try to employ these
types of techniques on s10? I haven't used nexenta in the past, and I'm
not clear in
On Sun, Apr 19, 2009 at 10:58 AM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote:
Also, you may want to consider doing backups from the NetApp rather
than from the Solaris box.
I've certainly recommended finding a different way to perform
to be on the same spindles? What does the
network look like from the NetApp side?
Are the mail server and the NetApp attached to the same switch, or are
they at opposite ends of the campus? Is there something between them
that is misbehaving?
--
Mike Gerdts
http://mgerdts.blogspot.com
544 18137839360 1% /home/users
% ls -ld paceytmp
drwxr-xr-x+ 2 root root 2 2009-03-31 09:47 paceytmp
The owner is root, not the user I ran chown for. I also seem to have an
ACL (note the + in the mode) that I never set up. Can someone advise on the
correct way to do this so that the permissions are correct?
Thanks,
Mike
in the
global zone and the dataset is delegated to a non-global zone, display
the UID rather than a possibly mistaken username.
--
Mike Gerdts
http://mgerdts.blogspot.com/
on the other end of bugs.opensolaris.org will get confused by the
request to enhance a feature that doesn't yet exist.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Hello
1) Dual IO module option
2) Multipath support
3) Zone support [multiple hosts connecting to the same JBOD or the same set of JBODs
connected in series.]
This sounds interesting - where can I read more about connecting two
hosts to the same J4200 etc.?
Thanks
Mike
the tools simpler - absolutely no UI for instance. does it
really need one to dump out things? :)
On Wed, Mar 11, 2009 at 7:15 PM, David Magda dma...@ee.ryerson.ca wrote:
On Mar 11, 2009, at 21:59, mike wrote:
On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca wrote:
If you know
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HAL sometimes.
I am trying to locate chipset info but having a hard time...
) would be forward compatible...
On Wed, Mar 11, 2009 at 5:14 PM, mike mike...@gmail.com wrote:
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HAL sometimes.
I am
Doesn't it require Java and X11?
On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca wrote:
On Mar 11, 2009, at 20:14, mike wrote:
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2
this up by reducing the number of mnttab lookups.
And zfs list has been changed to no longer show snapshots by default.
But it still might make sense to limit the number of snapshots saved:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
-- Rich
On Sun, Mar 8, 2009 at 10:10 PM, mike
Stone about
rolling up daily snapshots into monthly snapshots, which would roll up
into yearly snapshots...
On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling richard.ell...@gmail.com wrote:
mike wrote:
Well, I could just use the same script to create my daily snapshot to
remove a snapshot
I do a daily snapshot of two filesystems, and over the past few months
it's obviously grown to a bunch.
zfs list shows me all of those.
I can change it to use the -t flag to not show them, so that's good.
However, I'm worried about boot times and other things.
Will it get to a point with 1000's
/#patents.
--
Mike Gerdts
http://mgerdts.blogspot.com/
snapshots could be very helpful to prevent file system crawls
and to avoid being fooled by bogus mtimes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote
are created together (all at once) or not created at
all. The benefit of atomic snapshot operations is that the snapshot
data is always taken at one consistent time, even across descendent
file systems.
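A minimal illustration (pool and snapshot names assumed): one recursive command creates every descendant snapshot atomically, at a single point in time.
# zfs snapshot -r tank@2009-02-28
# zfs list -t snapshot | grep @2009-02-28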
--
Mike Gerdts
http://mgerdts.blogspot.com/
or ksh, so long as the list of zfs
mount points does not overflow the maximum command line length.
$ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5
--
Mike Gerdts
http://mgerdts.blogspot.com/
Does this all go away when BP-rewrite gets fully resolved/implemented?
Short of the pool being 100% full, shouldn't it allow a rebalancing
operation, and possibly a LUN/device-size shrink to match the new device
that is being inserted?
Thanks,
-- MikeE
-Original Message-
From:
I'm not sure how many VIA chips support 64-bit, which seems to be
highly recommended.
Atoms seem to be more suitable.
On Mon, Jan 12, 2009 at 1:14 PM, Joe S js.li...@gmail.com wrote:
In the last few weeks, I've seen a number of new NAS devices released
from companies like HP, QNAP, VIA, Lacie,
Hi
It would also be nice to be able to specify the zpool version during pool
creation. E.g. If I have a newer machine and I want to move data to an older
one, I should be able to specify the pool version, otherwise it's a one-way
street.
zpool create -o version=xx ...
Mike
in practice.
Regards
Mike
snapshot -r
file/system it still takes quite a long time if there are hundreds of
snapshots, while ls /file/system/.zfs/snapshot returns immediately.
Can this also be improved somehow please?
Thanks
Mike
running
svcs -v zfs/auto-snapshot
The last few lines of the log files mentioned in the output from the
above command may provide helpful hints.
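As a hedged shortcut (svcs -L prints a service's log file path), something along these lines shows those last lines directly:
$ tail $(svcs -L zfs/auto-snapshot)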
--
Mike Gerdts
http://mgerdts.blogspot.com/
--disable-debug CFLAGS="-O -m64" MAKE=gmake
5) gmake && gmake install
6) /usr/local/bin/mbuffer -V
Regards
Mike
I've seen discussions as far back as 2006 that say development is underway to
allow the addition and remove of disks in a raidz vdev to grow/shrink the
group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do
'zpool remove tank c0t3d0' and data residing on c0t3d0 would be
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data
there are. With copies of 2 or more, in theory, an entire disk can have read
errors, and the zfs volume still works.
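For reference, a small sketch of setting the property (dataset name assumed); it applies only to data written after the change:
# zfs set copies=2 tank/data
# zfs get copies tank/data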
The unfortunate part here is that the redundancy lies in the volume, not the
pool vdev like with
In theory, with 2 80GB drives, you would always have a copy somewhere else.
But a single drive, no.
I guess I'm thinking in the optimal situation. With multiple drives, copies
are spread through the vdevs. I guess it would work better if we could define
that if copies=2 or more, that at
Well, I knew it wasn't available. I meant to ask what is the status of the
development of the feature? Not started, I presume.
Is there no timeline?
=/zones/$zone/root/var rpool/zones/$zone/var
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt [EMAIL PROTECTED] wrote:
On 12/02/08 10:24, Mike Gerdts wrote:
I follow you up to here. But why do the next steps?
zonecfg -z $zone
remove fs dir=/var
zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var
It's not strictly required
My root drive is ufs. I have corrupted my zpool which is on a different drive
than the root drive.
My system panicked and now it core dumps when it boots up and hits zfs start. I
have an alt root drive that I can boot the system up with, but how can I disable
zfs from starting on a different drive?
Boot from the other root drive, mount up the bad one at /mnt. Then:
# mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco [EMAIL PROTECTED] wrote:
My root drive is ufs. I have corrupted my zpool which is on a different drive
than the root
Boot from the other root drive, mount up the bad
one at /mnt. Then:
# mv /mnt/etc/zfs/zpool.cache
/mnt/etc/zpool.cache.bad
On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco
[EMAIL PROTECTED] wrote:
My root drive is ufs. I have corrupted my zpool
which is on a different drive than
in
ASM. Have you tried ASM on Solaris? It should give you a lot of the
benefits you would expect from ZFS (pooled storage, incremental
backups, (I think) efficient snapshots). It will only work for oracle
database files (and indexes, etc.) and should work for clustered
storage as well.
--
Mike
I think you'll need to get device support first. Last I checked there
was still no device support for PMPs, sadly.
On Thu, Nov 20, 2008 at 4:52 PM, Krenz von Leiberman
[EMAIL PROTECTED] wrote:
Does ZFS support pooled, mirrored, and raidz storage with
SATA-port-multipliers
Hello
Is there any way to list all snapshots of a particular file system without
listing the snapshots of its child file systems?
Thanks,
Mike
Hi
[Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko
[EMAIL PROTECTED] wrote:
Hello
Is there any way to list all snapshots of particular file system
without listing the snapshots of its children file systems?
fsnm=tank/fs;zfs list -rt snapshot ${fsnm}|grep ${fsnm}@
or even
Solaris versions:
http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde
On Fri, Nov 14, 2008 at 2:15 AM, Vincent Boisard [EMAIL PROTECTED] wrote:
Do you have an idea if your problem is due to live upgrade or b101 itself ?
Vincent
On Thu, Nov 13, 2008 at 8:06 PM, mike [EMAIL
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper [EMAIL PROTECTED] wrote:
No clue. My friend also upgraded to b101. Said it was working awesome
- improved network performance, etc. Then he said after a few days,
he's decided to downgrade too - too many other weird side effects.
Any more details
Depends on your hardware. I've been stable for the most part on b98.
Live upgrade to b101 messed up my networking to nearly a standstill.
It stuck even after I nuked the upgrade. I had to reinstall b98.
On Nov 13, 2008, at 10:01 AM, Vincent Boisard [EMAIL PROTECTED]
wrote:
Thanks for
Will probably have a 10_recommended u6 patch bundle sometime in December...
For now, to get to u6 (and ZFS) you must do LU (ie u5 to u6)
Just FYI
On Wed, Nov 12, 2008 at 12:48 PM, Johan Hartzenberg [EMAIL PROTECTED]wrote:
On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox [EMAIL PROTECTED]wrote:
to disable some other unnecessary processes, ex:
svcs | egrep
'webco|wbem|avahi|print|font|cde|sendm|name-service-cache|opengl' | awk
'{print $3}' | xargs -n1 svcadm disable
This should make your system more usable on light hardware.
Regards
Mike
doesn't seem to be remembered or I'm not
understanding it properly...
The user 'mike' should have -all- the privileges, period, no matter
what the client machine is etc. I am mounting it -as- mike from both
clients...
whole zpool not just
sync with send/recv. But I think all will be fine there, as it seems the
problem is in the send/recv part on the file system itself on different
architectures.
Thanks
Mike
Hi all,
I have been asked to build a new server and would like to get some opinions on
how to set up a zfs pool for the application running on the server. The server
will be exclusively for running the NetBackup application.
Now, which would be better: setting up a raidz pool with 6x146 GB drives