On 2010-12-01 15:19, Menno Lageman wrote:
f...@ll wrote:
Hi,
I must send a ZFS snapshot from one server to another. The snapshot is 130GB in size. My question: does ZFS have any limit on the size of a sent stream?
If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv')
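then a 130GB snapshot is not a problem in itself: 'zfs send' just writes a byte stream to stdout, so there should be no per-file limit to worry about on the ZFS side. A minimal sketch of such a transfer (hypothetical pool/dataset names, ssh as one possible transport):

# zfs snapshot tank/data@migrate
# zfs send tank/data@migrate | ssh otherhost zfs recv backup/data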
Hi,
I wonder which is the better option: installing the system on UFS with the sensitive data on ZFS, or is it best to put everything on ZFS?
What are the pros and cons of such a solution?
f...@ll
On 04.04.2011 12:44, Fajar A. Nugraha wrote:
On Mon, Apr 4, 2011 at 4:49 PM, For@ll <for...@stalowka.info> wrote:
What can I do so that zpool shows the new value?
zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
-- richard
I tried your suggestion, but no effect.
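One more thing worth trying, as a sketch (the device name c0t1d0 is hypothetical): on builds that support it, 'zpool online -e' explicitly asks ZFS to expand onto the grown LUN:

# zpool set autoexpand=on TEST
# zpool online -e TEST c0t1d0
# zpool list TEST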
Did you modify the … of the perf?
The purpose is to build a big NFS server with the primary data on a high-end RAID array, using ZFS to mirror all of the data onto the old RAID array.
Regards.
--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7th floor, plateau D, office 10
Heure local/Local time
At t I send a big file using your command; at t=t+1 I just send the diff, not a big file.
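For example (hypothetical dataset and host names): one full stream up front, then only incremental diffs:

# zfs send tank/fs@t0 | ssh remote zfs recv backup/fs
# zfs send -i tank/fs@t0 tank/fs@t1 | ssh remote zfs recv backup/fs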
Regards.
--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7th floor, plateau D, office 10
Heure local/Local time:
Tue Dec 5 14:53:13 CET 2006
) what do you use to attach the two disks at the 2 different sites? Are you using FC attachment?
Regards.
--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7th floor, plateau D, office 10
Heure local/Local time:
Wed Dec 6 09:06:30 CET 2006
access to the remote ZFS file system and the clone?
--
albert chin ([EMAIL PROTECTED])
On Tue, Jan 16, 2007 at 01:28:04PM -0800, Eric Kustarz wrote:
Albert Chin wrote:
On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote:
I have no hands-on experience with ZFS but have a question. If the
file server running ZFS exports the ZFS file system via NFS to
clients, based
Since ZFS already has error correction, would drives that limit the time a hard drive attempts to recover from errors, such as WD RE or Seagate ES drives, be necessary? Would it be safe to use standard hard drives without the Time-Limited Error Recovery feature in a RAIDZ array?
This … be a RAID 0 config on the 6140?
--
albert chin ([EMAIL PROTECTED])
On Thu, Jan 25, 2007 at 10:16:47AM -0500, Torrey McMahon wrote:
Albert Chin wrote:
On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote:
On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill [EMAIL PROTECTED]
wrote:
On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:
). Well - there's no harm in making the
suggestion ... right?
Well, when you buy disk for the Sun 5320 NAS Appliance, you get a
Controller Unit shelf and, if you expand storage, an Expansion Unit
shelf that connects to the Controller Unit. Maybe the Expansion Unit
shelf is a JBOD 6140?
--
albert
of
that.
--
albert chin ([EMAIL PROTECTED])
the hot
spare is distributed amongst data drives in the array. When a failure
occurs, the rebuild occurs in parallel across _all_ drives in the
array:
http://www.issidata.com/specs/agami/enterprise-classreliability.pdf
--
albert chin ([EMAIL PROTECTED])
). Is it
possible to allocate some disks from the 6140 array to ZFS on the
X4100 for the purpose of migrating data from the appliance to ZFS?
--
albert chin ([EMAIL PROTECTED])
just snapshot the file system.
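For instance (hypothetical dataset name), a snapshot is nearly instantaneous and can be rolled back to or cloned later:

# zfs snapshot tank/fs@before-change
# zfs rollback tank/fs@before-change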
--
albert chin ([EMAIL PROTECTED])
is 100MB. /etc/system on the file server is:
set maxphys = 0x80
set ssd:ssd_max_throttle = 64
set zfs:zfs_nocacheflush = 1
Why can't the NFS performance match that of SSH?
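For reference, the comparison being made is roughly this (hypothetical hosts and paths; a sketch, not the exact benchmark):

# time cp -r /data /mnt/nfs/data                          # over the NFS mount
# time tar cf - /data | ssh server 'cd /tank; tar xf -'   # over SSH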
--
albert chin ([EMAIL PROTECTED])
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote:
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
One big reason is that the sending CPU has to do all the comparisons to
compute the list of files to be sent - it has to fetch the attributes
from both local
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync
semantics conspire against single
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote:
Albert Chin wrote:
I think the bigger problem is the NFS performance penalty so we'll go
lurk somewhere else to find out what the problem is.
Is this with Solaris 10 or OpenSolaris on the client as well?
Client is RHEL 4
and/or -oCompressionLevel?
--
albert chin ([EMAIL PROTECTED])
On Mon, May 21, 2007 at 13:23:48 -0800, Marion Hakanson wrote:
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
My first guess is the NFS vs array cache-flush issue. Have you
configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll
make a huge difference for NFS
array w/128k
stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
--
albert chin ([EMAIL PROTECTED])
On Fri, May 25, 2007 at 12:14:45AM -0400, Torrey McMahon wrote:
Albert Chin wrote:
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
I'm getting really poor write performance with ZFS on a RAID5 volume
(5 disks) from a StorageTek 6140 array. I've searched the web
your file server on a UPS won't help
here.
http://blogs.sun.com/erickustarz/entry/zil_disable discusses some of
the issues affecting zil_disable=1.
We know we get better performance with zil_disable=1 but we're not
taking any chances.
-Andy
On 5/24/07 4:16 PM, Albert Chin
[EMAIL
confusing the array?
I think both are OK. CAM is free. Dunno about SANtricity.
--
albert chin ([EMAIL PROTECTED])
some features will not be available without an explicit 'zpool
upgrade'.
config:
        zones     ONLINE
          c0d1s5  ONLINE
zpool import lists the pools available for import. Maybe you need to
actually _import_ the pool first before you can upgrade.
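i.e. something like this (pool name taken from the config above):

# zpool import zones
# zpool upgrade zones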
--
albert chin ([EMAIL PROTECTED])
NVRAM storage
to various sun low/mid-range servers that are currently acting as
ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
could be used for the ZIL? I think several HD's are available with
SCSI/ATA interfaces.
--
albert chin ([EMAIL PROTECTED])
/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx
--
albert chin ([EMAIL PROTECTED])
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
Albert Chin wrote:
Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be available for ZIL storage. It would also
be nice for extra hardware (PCI-X, PCIe card) that added NVRAM
On Tue, Jul 03, 2007 at 10:31:28AM -0700, Richard Elling wrote:
Albert Chin wrote:
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
Albert Chin wrote:
Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be available for ZIL storage
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
PSARC 2007/171 will be available in b68. Any documentation anywhere on
how to take advantage of it?
Some of the Sun storage arrays contain NVRAM. It would
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X, PCIe card) that
added NVRAM storage to various sun low/mid-range servers that are
currently acting as ZFS/NFS
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote:
On Mon, 9 Jul 2007, Albert Chin wrote:
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X, PCIe card
Micro Memory pci1332,5425 card :) I presume this is the PCI-X version.
--
albert chin ([EMAIL PROTECTED])
anyone selling them.
- Eric
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http
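For reference, attaching a separate intent log on builds with slog support (b68+, per this thread) is a one-liner (device names hypothetical); a mirrored slog is the safer choice:

# zpool add tank log c3t0d0
or, mirrored:
# zpool add tank log mirror c3t0d0 c3t1d0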
if it is worth the whole effort
for my personal purposes.
Huh? So your MM-5425CN doesn't fit into a PCI slot?
Any comments are very appreciated.
How did you obtain your card?
--
albert chin ([EMAIL PROTECTED])
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
I think we are very close to using zfs in our production environment. Now
that I have snv_72 installed and my pools set up with NVRAM log devices
things are hauling butt.
How did you get NVRAM log devices?
--
albert chin ([EMAIL PROTECTED])
read the script.
Let me know if this was helpful,
-Albert
-July/003080.html
You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.
--
albert chin ([EMAIL PROTECTED])
in SXCE either.
Ian
ZFS root and boot have worked fine since the later snv_6x builds; the installer is a different matter. You'll have to install to UFS and move to ZFS - see
http://www.opensolaris.org/os/community/zfs/boot/ .
-Albert
replace c0t600A0B8000299966059E4668CBD3d0 with
c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device
--
albert chin ([EMAIL PROTECTED])
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
You should be able to do a 'zpool detach' of the replacement and then
try again.
Thanks. That worked.
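Spelled out, assuming the pool is tww as elsewhere in this thread, that would be:

# zpool detach tww c0t600A0B8000299CCC06734741CD4Ed0
# zpool replace tww c0t600A0B8000299966059E4668CBD3d0 c0t600A0B8000299CCC06734741CD4Ed0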
- Eric
On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
Running ON b66 and had a drive fail. Ran 'zpool replace …'
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
--
albert chin ([EMAIL PROTECTED])
is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
--
albert chin ([EMAIL PROTECTED])
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
Resilver and scrub are broken
On Mon, 2007-11-26 at 08:21 -0800, Roman Morokutti wrote:
Hi
I am very interested in using ZFS as a whole, meaning on the whole disk in my laptop. I would now do a complete reinstall and don't know how to partition the disk initially for ZFS.
--
Roman
--
albert chin ([EMAIL PROTECTED])
luck,
-Albert
desirable/reliable.
http://blogs.sun.com/eschrock/entry/zfs_and_fma
FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68:
http://www.opensolaris.org/os/community/arc/caselog/2007/283/
http://www.opensolaris.org/os/community/on/flag-days/all/
--
albert chin ([EMAIL PROTECTED])
, no inexpensive
cards exist for the common consumer (with ECC memory anyways). If you
convince http://www.micromemory.com/ to sell you one, let us know :)
Set 'set zfs:zil_disable = 1' in /etc/system to gauge the type of improvement you can expect. Don't use this in production though.
--
albert chin
on, but it might take some time before everything is made aware that yes, it's no longer unusual that there can be 1+ filesystems on one machine.
But shouldn't sharemgr(1M) be aware? It's relatively new.
--
albert chin ([EMAIL PROTECTED])
equivalent to the performance of
ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a
heartbeat if I could find a card. I think others would as well.
--
albert chin ([EMAIL PROTECTED])
vdev uses 6-way
raidz
I can force this with the '-f' option. But what does that mean (sorry if the question is stupid)?
What kind of pool do you use with 46 disks? (46 = 2*23, and 23 is a prime number; that means I can't split the disks evenly into raidz vdevs of 6 or 7 or any such number.)
Regards.
--
Albert SHIH
Observatoire de
On 30/01/2008 at 11:01:35 -0500, Kyle McDonald wrote:
Albert Shih wrote:
What kind of pool do you use with 46 disks? (46 = 2*23, and 23 is a prime number; that means I can't split the disks evenly into raidz vdevs of 6 or 7 or any such number.)
Depending on needs for space vs. performance, I'd probably pick either 5*9
it.
Regards.
--
Albert SHIH
Observatoire de Paris Meudon
SIO building 15
Heure local/Local time:
Fri 1 Feb 2008 23:03:59 CET
]
Manager of Engineering Support, Enterprise Engineering Group
Transcom Enhanced Services
http://www.transcomus.com
-Albert
/0b500afc4d62d434?lnk=stq=#0b500afc4d62d434
--
albert chin ([EMAIL PROTECTED])
) command to specify what happens when a pool fails. This was integrated in Nevada b77; it probably won't be available in S10 until the next update.
-Albert
a ZFS property with the date of the last backup.
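User properties (their names must contain a colon) work well for this; a sketch with a hypothetical property name:

# zfs set com.example:lastbackup=2008-03-01 tank/fs
# zfs get com.example:lastbackup tank/fs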
--
albert chin ([EMAIL PROTECTED])
of SAMBA (or, you know, your Macs can speak NFS ;).
Alternatively you could run Banshee or mt-daapd on the Solaris box and
just rely on iTunes sharing. =P
Seriously, NFS is a totally reasonable way to go.
-Albert
property from the bootfs pool property?
Make sure you also didn't export the pool. The pool must be imported
and /etc/zfs/zpool.cache must be in sync between running system and the
ZFS root.
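For example (hypothetical pool and BE names), checking and setting the bootfs property:

# zpool get bootfs rpool
# zpool set bootfs=rpool/ROOT/snv_90 rpool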
-Albert
that are not available until snv_90. There is a hack to do an offline
upgrade from DVD/CD (zfs_ttinstall), if you can't wait.
-Albert
/a
# bootadm update-archive -R /a
# umount /a
Cross fingers, reboot!
# init 6
-Albert
optimisation). =P
-Albert
Wouldn't snapupgrade clone the original BE? In that case no data would be rewritten.
Correct, and Live Upgrade also clones the active BE when you do
lucreate. Unless you copy all the data manually, it's going to inherit
the uncompressed blocks from the current filesystem.
-Albert
for S10u6 from the ZFS boot
support currently available in SX, but the JumpStart configuration for
SX might not be compatible for other reasons (install-discuss may know
better).
-Albert
, and I don't have any space in this server to mount an SSD.
Maybe you can call Vmetro and get the names of some resellers whom you
could call to get pricing info?
--
albert chin ([EMAIL PROTECTED])
J4400 with 24 disks.
Doc is here :
http://docs.sun.com/app/docs/coll/j4200
--
albert chin ([EMAIL PROTECTED])
series is doing ZFS/NFS,
performance will increase with zfs:zfs_nocacheflush=1. But, without
battery-backed NVRAM, this really isn't safe. So, for this usage case,
unless the server has battery-backed NVRAM, I don't see how the J series
is good for ZFS/NFS usage.
--
albert chin ([EMAIL PROTECTED])
.
--
albert chin ([EMAIL PROTECTED])
umem_Sol_Drv_Cust_i386_v01_11.pkg
hangs on ## Installing part 1 of 3. on snv_95. I do not have other
Solaris versions to experiment with; this is really just a hobby for
me.
Does the card come with any programming specs to help debug the driver?
--
albert chin ([EMAIL PROTECTED])
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
I _really_ wish rsync had an option to copy in place or something like
that, where the updates are made directly to the file, rather than a
temp copy.
Isn't this what --inplace does?
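It is. A sketch (hypothetical paths): --inplace writes updated data directly into the existing destination file instead of building a temporary copy and renaming it:

# rsync -a --inplace /src/ remote:/dest/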
--
albert chin ([EMAIL PROTECTED])
does the third raidz2 vdev report 34.0K CKSUM errors?
The number of data errors appears to be increasing as the resilver process continues.
--
albert chin (ch...@thewrittenword.com)
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
On Mon, 24 Aug 2009, Albert Chin wrote:
Seems some of the new drives are having problems, resulting in CKSUM
errors. I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM
\
c6t600A0B800029996609EE4A89DA51d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
pool tww. Please see zpool(1M).
So, what is going on?
--
albert chin (ch...@thewrittenword.com)
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
[[ snip snip ]]
After the resilver completed:
# zpool status tww
pool: tww
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go
config:
...
So, why is a resilver in progress when I asked for a scrub?
--
albert chin (ch...@thewrittenword.com)
this down, just lots of checksum
errors.
So, on snv_121, can you read the files with checksum errors? Is it
simply the reporting mechanism that is wrong or are the files really
damaged?
--
albert chin (ch...@thewrittenword.com)
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
# cat /etc/release
Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms
INUSE currently in use
c6t600A0B800029996605C84668F461d0 INUSE currently in use
c6t600A0B80002999660A454A93CEDBd0 AVAIL
c6t600A0B80002999660ADA4A9CF2EDd0 AVAIL
--
albert chin (ch...@thewrittenword.com)
0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
errors: 855 data errors, use '-v' for a list
--
albert chin (ch...@thewrittenword.com)
zfs:txg_sync_thread+265 ()
ff00104c0c50 unix:thread_start+8 ()
System is a X4100M2 running snv_114.
Any ideas?
--
albert chin (ch...@thewrittenword.com)
destroy transaction never completed and it is being replayed, causing
the panic. This cycle continues endlessly.
--
albert chin (ch...@thewrittenword.com)
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
[[ snip snip ]]
We really need to import this pool. Is there a way around this? We do
have snv_114 source on the system if we need to make changes to
usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs
destroy
just in case.
--
albert chin (ch...@thewrittenword.com)
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with find takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
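One shortcut worth noting: if the pool has already recorded permanent errors, 'zpool status -v' prints the list of affected files without walking the filesystem (pool name from this thread):

# zpool status -v tww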
--
albert chin (ch...@thewrittenword.com)
.
This should work but it does not verify the redundant metadata. For
example, the duplicate metadata copy might be corrupt but the problem
is not detected since it did not happen to be used.
Too bad we cannot scrub a dataset/object.
--
albert chin (ch...@thewrittenword.com)
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try:
tar cf - . > /dev/null
properties are
not sent?
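For what it's worth, on builds that have it, a recursive replication stream does carry properties along (a sketch, hypothetical names):

# zfs send -R tank/fs@snap | ssh remote zfs recv -d backup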
--
albert chin (ch...@thewrittenword.com)
USED AVAIL REFER MOUNTPOINT
t/opt/vms/images/vios/near.img 14.5G 2.42T 14.5G -
snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img
NAME                            PROPERTY       VALUE  SOURCE
t/opt/vms/images/vios/near.img usedbydataset 14.5G -
--
albert chin (ch...@thewrittenword.com)
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
When transferring a volume between servers, is it expected that the
usedbydataset property should be the same on both? If not, is it cause
for concern?
snv114# zfs list tww/opt/vms/images/vios/near.img
NAME
filesystem
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallow
r...@xxx:~# uname -a
SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
What's wrong?
Looks like -u was a recent addition.
--
albert chin (ch...@thewrittenword.com)
3.08, etc., but such a dramatic loss of performance probably has a
single cause. Is anyone willing to speculate?
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
--
albert chin (ch...@thewrittenword.com)
Albert Chin wrote:
Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
snapshot a few days ago:
# zfs snapshot a...@b
# zfs clone a...@b tank/a
# zfs clone a...@b tank/b
The system started panicing after I tried:
# zfs snapshot tank/b...@backup
So, I destroyed tank/b
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
Thanks for reporting this. I have fixed this bug (6822816) in build
127.
Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply
level (I believe it's S10u6) and ZFS recordsize.
Any suggestions will be appreciated.
regards, Jeff
--
albert chin (ch...@thewrittenword.com)
a month. Can I speed this up?
It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the network.
Or maybe the remote system. This might help:
http://tinyurl.com/yl653am
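One common trick for keeping the pipe full is to put a buffering program between send and recv; a sketch assuming mbuffer is installed on both ends (buffer sizes are hypothetical tunables):

# zfs send tank/fs@snap | mbuffer -s 128k -m 1G | \
    ssh remote 'mbuffer -s 128k -m 1G | zfs recv backup/fs'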
--
albert chin (ch...@thewrittenword.com)
--
albert chin (ch...@thewrittenword.com)