Michael DeMan (OA) wrote:
Actually it appears that FreeNAS is forking, with planned support for both Linux
(we can only speculate on the preferred backing file system) and FreeBSD with
ZFS as the preferred backing file system.
In regard to OpenSolaris advocacy for using OpenSolaris vs. FreeBSD, I
What about removing attach/detach and replacing them with
zpool add [-fn] 'pool' submirror 'device/mirrorname' 'new_device'
e.g.
  NAME          STATE     READ WRITE CKSUM
  rpool         ONLINE       0     0     0
    mirror-01   ONLINE       0     0     0
      c4d0s0    ONLINE
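For comparison, the existing way to grow that mirror uses attach against a current member; the "submirror" form above is this message's proposal, not an implemented command (device names here are illustrative):

```shell
# current syntax: attach new_device next to an existing member of the vdev;
# ZFS turns the vdev into (or extends) a mirror and resilvers automatically
zpool attach rpool c4d0s0 c5d0s0

# proposed syntax from this message (hypothetical, not implemented):
# zpool add [-fn] rpool submirror mirror-01 c5d0s0
```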
Actually it appears that FreeNAS is forking, with planned support for both Linux
(we can only speculate on the preferred backing file system) and FreeBSD with
ZFS as the preferred backing file system.
In regard to OpenSolaris advocacy for using OpenSolaris vs. FreeBSD, I'm all
ears if anybody is b
Howdy,
I upgraded to snv_128a from snv_125. I wanted to do some dedup testing :).
I have two ZFS pools: rpool and vault. I upgraded my vault zpool version and
turned on dedup on the dataset vault/shared_storage. I also turned on gzip
compression on this dataset as well.
Before I turned on dedu
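The setup described can be sketched with the standard commands (the dataset name is taken from the message; the `zpool upgrade` assumes the pool started below the dedup-capable version):

```shell
zpool upgrade vault                            # bring the pool up to a dedup-capable version
zfs set dedup=on vault/shared_storage          # enable deduplication on the dataset
zfs set compression=gzip vault/shared_storage  # enable gzip compression on it as well
```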
I accidentally only replied to Cindy, but I wanted to reply to the list.
I don't want to take up too much of Cindy's time... maybe one of the list
members can help me as well.
> -Original Message-
> From: Matthias Appel
> Sent: Tuesday, December 08, 2009 03:34
> To: 'cindy.swearin...@sun.
On 2009-Nov-18 11:50:44 +1100, I wrote:
>I have a zpool on a JBOD SE3320 that I was using for data with Solaris
>10 (the root/usr/var filesystems were all UFS). Unfortunately, we had
>a bit of a mixup with SCSI cabling and I believe that we created a
>SCSI target clash. The system was unloaded an
Because ZFS is transactional (it effectively preserves ordering), the rename
trick will work.
If you find the ".filename", delete it, create a new ".filename", and when you
finish writing, rename it to "filename". If "filename" exists, you know all
writes were completed. If you have a batch system which looks for th
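The write-then-rename pattern described above can be sketched as follows (the temp directory and file contents are illustrative, not from the original message):

```shell
# write under the hidden temp name; readers/batch jobs ignore dotfiles
workdir=$(mktemp -d)
printf 'record-1\nrecord-2\n' > "$workdir/.filename"

# rename only after all writes have completed -- rename(2) is atomic on
# POSIX filesystems including ZFS, so "filename" either does not exist
# yet or is complete; a reader never sees a partial file
mv "$workdir/.filename" "$workdir/filename"
```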
I have a Solaris 10 U5 system massively patched so that it supports
ZFS pool version 15 (similar to U8, kernel Generic_141445-09), live
upgrade components have been updated to Solaris 10 U8 versions from
the DVD, and GRUB has been updated to support redundant menus across
the UFS boot environme
On Mon, Dec 7, 2009 at 4:32 PM, Ed Plese wrote:
> Would it be beneficial to have a command line option to zpool that
> would only "preview" or do a "dry-run" through the changes, but
> instead just display what the pool would look like after the operation
> and leave the pool unchanged? For those
Hi Alex,
The SXCE Admin Guide is generally up-to-date on docs.sun.com.
The section that covers the autoreplace property and default
behavior is here:
http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view
Thanks,
Cindy
On 12/07/09 14:50, Alexandru Pirvulescu wrote:
Thank you. That fixed the
On Dec 7, 2009, at 11:32 PM, Ed Plese wrote:
> On Mon, Dec 7, 2009 at 12:42 PM, Cindy Swearingen
> wrote:
>> I agree that zpool attach and add look similar in their syntax,
>> but if you attempt to "add" a disk to a redundant config, you'll
>> see an error message similar to the following:
>>
>
On Dec 7, 2009, at 11:23 PM, Daniel Carosone wrote:
>> but if you attempt to "add" a disk to a redundant
>> config, you'll see an error message similar [..]
>>
>> Doesn't the "mismatched replication" message help?
>
> Not if you're trying to make a single disk pool redundant by adding .. er,
>
On Mon, Dec 7, 2009 at 12:42 PM, Cindy Swearingen
wrote:
> I agree that zpool attach and add look similar in their syntax,
> but if you attempt to "add" a disk to a redundant config, you'll
> see an error message similar to the following:
>
> # zpool status export
> pool: export
> state: ONLINE
On Dec 7, 2009, at 2:23 PM, Daniel Carosone wrote:
but if you attempt to "add" a disk to a redundant
config, you'll see an error message similar [..]
Doesn't the "mismatched replication" message help?
Not if you're trying to make a single disk pool redundant by
adding .. er, attaching .. a
> but if you attempt to "add" a disk to a redundant
> config, you'll see an error message similar [..]
>
> Doesn't the "mismatched replication" message help?
Not if you're trying to make a single disk pool redundant by adding .. er,
attaching .. a mirror; then there won't be such a warning, howe
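As the reply notes, the redundancy-increasing operation is attach, not add; a minimal sketch with illustrative pool and device names:

```shell
# turn a single-disk pool into a two-way mirror: attach a second device
# to the existing top-level vdev; ZFS starts resilvering automatically
zpool attach tank c0t0d0 c1t0d0

# by contrast, "zpool add tank c1t0d0" would stripe the new disk
# alongside the old one with no warning -- the mistake discussed here
```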
> Jokes aside, this is too easy to make a mistake with
> the consequences that are
> too hard to correct. Anyone disagrees?
No, and this sums up the situation nicely, in that there are two parallel paths
toward a resolution:
- make the mistake harder to make (various ideas here)
- make the co
Thank you. That fixed the problem.
None of the tutorials on the Internet mentioned autoexpand.
Again, thank you everybody for the quick replies and for solving my problem.
Alex
On Dec 7, 2009, at 11:48 PM, Ed Plese wrote:
> On Mon, Dec 7, 2009 at 3:41 PM, Alexandru Pirvulescu
> wrote:
>> I've read be
Andriy Gapon writes:
> on 06/12/2009 19:40 Volker A. Brandt said the following:
> >> I wanted to add a disk to the tank pool to create a mirror. I accidentally
> >> used zpool add ? instead of zpool attach ? and now the disk is added. Is
> >> there a way to remove the disk without losing data?
> >
On Mon, Dec 7, 2009 at 3:41 PM, Alexandru Pirvulescu wrote:
> I've read before regarding zpool size increase by replacing the vdevs.
>
> The initial pool was a raidz2 with 4 640GB disks.
> I've replaced each disk with 1TB size by taking it out, inserting the new
> disk, doing cfgadm -c configure
Did you set autoexpand on? Alternatively, did you try doing a 'zpool online
bigpool ' for each disk after the replace completed?
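A sketch of the two expansion paths being suggested (pool and device names are illustrative):

```shell
# path 1: let the pool grow automatically once every vdev is larger
zpool set autoexpand=on bigpool

# path 2: expand each replaced disk explicitly with online -e
zpool online -e bigpool c6t0d0
zpool online -e bigpool c6t1d0
```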
On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote:
Hi,
I've read before regarding zpool size increase by replacing the vdevs.
The initial pool was a raidz2 with 4 640
Hi,
I've read before regarding zpool size increase by replacing the vdevs.
The initial pool was a raidz2 with 4 640GB disks.
I've replaced each disk with 1TB size by taking it out, inserting the new disk,
doing cfgadm -c configure on port and zpool replace bigpool c6tXd0
The problem is the zpoo
To be fair, I think it's obvious that Sun people are looking into it and that
users are willing to help diagnose and test. There were requests for particular
data in those threads you linked to; have you sent yours? It might help them
find a pattern in the errors.
I understand the frustration
I was catching up on old e-mail on this list, and came across a blog
posting from Henrik Johansson:
http://sparcv9.blogspot.com/2009/10/curious-case-of-strange-arc.html
it tells of his woes with a fragmented /var/pkg/downloads combined
with atime updates. I see the same problem on my servers,
>> and if you don't like it, you can use the zfs:zfs_arc_max tunable in
>> /etc/system to set a maximum amount of memory to be used prior to a write.
>
> Oops. Bad cut-n-paste. That should have been
>
> zfs:zfs_write_limit_override
>
> So I am currently using
>
> * Set ZFS maximum TXG group size
Hi Matthias,
I'm not sure I understand all the issues that are going on
in this configuration, but I don't see that you used the
zpool replace command to complete physical replacement
of the failed disk, which would look like this:
# zpool replace performance c1t3d0
Then run zpool clear to clea
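The sequence Cindy describes, pulled together (pool and device names are from the message):

```shell
zpool replace performance c1t3d0   # complete the physical replacement of the failed disk
zpool clear performance            # clear the pool's error counters after the resilver
zpool status -x performance        # confirm the pool now reports healthy
```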
Hi all,
On Fri, Nov 27, 2009 at 11:08 AM, Chavdar Ivanov wrote:
> Hi,
>
> I BFUd successfully snv_128 over snv_125:
>
> ---
> # cat /etc/release
> Solaris Express Community Edition snv_125 X86
> Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
>
On Dec 7, 2009, at 10:42 AM, Cindy Swearingen wrote:
I agree that zpool attach and add look similar in their syntax,
but if you attempt to "add" a disk to a redundant config, you'll
see an error message similar to the following:
# zpool status export
pool: export
state: ONLINE
scrub: none req
I agree that zpool attach and add look similar in their syntax,
but if you attempt to "add" a disk to a redundant config, you'll
see an error message similar to the following:
# zpool status export
pool: export
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WR
On Mon, 7 Dec 2009, Bob Friesenhahn wrote:
and if you don't like it, you can use the zfs:zfs_arc_max tunable in
/etc/system to set a maximum amount of memory to be used prior to a write.
Oops. Bad cut-n-paste. That should have been
zfs:zfs_write_limit_override
So I am currently using
* Se
On Mon, 7 Dec 2009, Richard Bruce wrote:
I started copying over all the data from my existing workstation.
When copying files (mostly multi-gigabyte DV video files), network
throughput drops to zero for ~1/2 second every 8-15 seconds. This
throughput drop corresponds to drive activity on the
On 12/07/09 09:37, Cindy Swearingen wrote:
Hi Xavier,
Neither the SMC interface nor the ZFS webconsole is available
in OpenSolaris releases. The SMC cannot be used for ZFS
administration in any Solaris release.
I'm not sure what the replacement plans are but you might
check with the experts o
Hi all,
First, kudos to all the ZFS folks for a killer technology. We use several Sun
7000 series boxes at work and love the features.
I recently decided to build an Opensolaris server for home. I just put the box
together over the weekend. It is using an LSI 1068E based HBA (Supermicro
FWI
It does 'just work'; however, you may have some file and/or file system
corruption if the snapshot was taken at the moment your Mac is updating
some files. So use the time slider function and take a lot of snaps. :)
--
This message posted from opensolaris.org
Hi Michael,
Whenever I see commands hanging, I would first rule out
any hardware issues.
I'm not sure how to do that on OS X.
Cindy
On 12/06/09 09:14, Michael Armstrong wrote:
Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge
pkg. When I'm writing files to the fs they ar
On 6 Dec 2009, at 16:14, Michael Armstrong wrote:
> Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge pkg.
> When I'm writing files to the fs they are appearing as 1kb files and if I do
> zpool status or scrub or anything the command is just hanging. However I can
> still r
Hi,
Is this available for the current OpenSolaris? It would be great to have some
graphical administration interface.
rgds.
xavier
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensola
Hi,
I would like to add, yet another, mpt timeout report.
Suddenly the system started to get slow, noticeable due to the fact
that some Linux clients were complaining about NFS server timeouts, and
after some time I saw a lot of bus reset messages in the
/var/adm/messages file.
I quickly took a
+1
I support a replacement for a SCM system that used "open" as an alias for
"edit" and a separate command, "opened" to see what was opened for edit,
delete, etc. Our customers accidentally used "open" when they meant "opened"
so many times that we blocked it as a command. It saved us a lot o
on 06/12/2009 19:40 Volker A. Brandt said the following:
>> I wanted to add a disk to the tank pool to create a mirror. I accidentally
>> used zpool add ? instead of zpool attach ? and now the disk is added. Is
> >> there a way to remove the disk without losing data?
>
> Been there, done that -- at
On Sun, Dec 6, 2009 at 8:11 PM, Anurag Agarwal wrote:
> Hi,
>
> My reading of write code of ZFS (zfs_write in zfs_vnops.c), is that all the
> writes in zfs are logged in the ZIL. And if that indeed is the case, then
IIRC, there is some upper limit (1MB?) on writes that go to ZIL, with
larger ones
Edward Ned Harvey wrote:
>> I use the excellent pbzip2
>>
>> zfs send ... | tee >(md5sum) | pbzip2 | ssh remote ...
>>
>> Utilizes those 8 cores quite well :)
>>
>
> This (pbzip2) sounds promising, and it must be better than what I wrote.
> ;-) But I don't understand the syntax you've got
Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge
pkg. When I'm writing files to the fs they are appearing as 1kb files
and if I do zpool status or scrub or anything the command is just
hanging. However I can still read the zpool ok, just write is having
problems and any