On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm destroys the oldest snapshots and creates new ones, both
gm recursively.
I'd be curious if you try taking the same snapshots non-recursively
instead, does the pause go
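In other words, comparing something like the following two approaches (pool and
snapshot names here are made up):

  # one recursive snapshot/destroy pair covering every descendent dataset
  zfs snapshot -r tank@2010-03-08
  zfs destroy -r tank@2010-03-01

  # versus taking the snapshots one dataset at a time
  for fs in $(zfs list -H -o name -r tank); do
      zfs snapshot $fs@2010-03-08
  done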
On Mon, Mar 08, 2010 at 01:23:10PM -0800, Bill Sommerfeld wrote:
On 03/08/10 12:43, Tomas Ögren wrote:
So we tried adding 2x 4GB USB sticks (Kingston Data
Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
snapshot times down to about 30 seconds.
Out of curiosity, how
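For anyone wanting to reproduce that setup, a rough sketch (pool and device
names are made up):

  # add the USB sticks as L2ARC cache devices
  zpool add tank cache c8t0d0 c9t0d0
  # and restrict the L2ARC to metadata only
  zfs set secondarycache=metadata tank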
Can I create a devalias to boot the other mirror similar to UFS?
Thanks
On Mar 8, 2010, at 11:46 PM, ольга крыжановская olga.kryzhanov...@gmail.com wrote:
tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.
True, but if the OP is looking for those features they are more than
unlikely looking for an
Yay! Something where I can contribute! I am a hardware
guy trying to live in a software world, but I think I know
how this one works.
The reason is that the vendor (ACER) of the mainboard
says it is not supported, and I can not get into the
bios any more, but osol boots fine and sees 8GB.
I redirected the console to the serial port and managed to capture the panic
information below:
SunOS Release 5.11 Version snv_111b 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
panic[cpu0]/thread=ff0007c39c60: mutex_enter: bad
On Mon, 8 Mar 2010, Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in?
Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's
all the fma service does anyway.
If you ever get your drive to come back online, the fma service should
recognize that
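A hedged follow-up to the manual replace above (pool name as given in the
thread):

  # watch the resilver triggered by the replace
  zpool status -v fserv
  # once the pool is healthy again, clear the old error counters
  zpool clear fserv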
On 09/03/2010 13:18, Tony MacDoodle wrote:
Can I create a devalias to boot the other mirror similar to UFS?
yes
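On SPARC this is done from the OpenBoot prompt; a rough sketch, with the device
path entirely made up:

  ok nvalias rootmirror /pci@1f,700000/scsi@2/disk@1,0:a
  ok boot rootmirror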
I'm curious to know whether the following output:
bash-4.0# echo memscrub_scans_done/U | mdb -k
memscrub_scans_done:
memscrub_scans_done:1985
means that Solaris considers ECC memory to be effectively installed (given
that the count is non-zero)?
I have installed unbuffered
I am talking about having a write queue which points to ready-to-write full
stripes.
A ready-to-write full stripe would be one where:
* The last byte of the full stripe has been updated.
* The file has been closed for writing. (Exception to the above rule.)
I believe there is now a scheduler for ZFS, to
When I boot from a snv133 live cd and attempt to import the rpool it panics
with this output:
Sun Microsystems Inc. SunOS 5.11 snv_133 February 2010
j...@opensolaris:~$ pfexec su
Mar 9 03:11:37 opensolaris su: 'su root' succeeded for jack on /dev/console
j...@opensolaris:~# zpool import
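Two things commonly tried from a live CD in this situation, sketched here (the
mount point is arbitrary, and -F needs a build with pool recovery support):

  zpool import -f -R /mnt rpool     # import under an alternate root
  zpool import -f -F rpool          # attempt rewind-style recovery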
Brand new video! George Wilson on ZFS Dedup - Oracle Solaris Video
http://bit.ly/b5MMpn
--
best regards,
Deirdré Straughan
Solaris Technical Content
blog: Un Posto al Sole http://blogs.sun.com/deirdre/
[I hope this isn't a repost double whammy. I posted this message
under `Message-ID: 87fx4ai5sp@newsguy.com' over 15 hrs ago but
it never appeared on my nntp server (gmane) far as I can see]
I'm a little at a loss here as to what to do about these two errors
that turned up during a scrub.
The discs involved are a matched pair in mirror mode.
zpool status -v z3 (wrapped for mail):
scrub: scrub completed after
Ross is correct - advanced OS features are not required here - just the ability
to store a file - don’t even need unix style permissions
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ross Walker
Sent: Tuesday,
Found a site that recommended setting the following /etc/system entries
set zfs:zfs_recover=1
set aok=1
and running this command
zdb -e -bcsvL rpool
but I get the following error:
Traversing all blocks to verify checksums ...
out of memory -- generating core dump
Abort
The laptop has 4GB of
And another brand-new video: ZFS Dynamic LUN Expansion - Oracle Solaris
Video http://bit.ly/cwwCZl
--
best regards,
Deirdré Straughan
Solaris Technical Content
blog: Un Posto al Sole http://blogs.sun.com/deirdre/
Hi D,
Is this a 32-bit system?
We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.
Maybe someone with similar recent memory
On Mon, Mar 8, 2010 at 1:47 PM, Tim Foster tim.fos...@sun.com wrote:
Looking at the errors, it looks like SMF isn't exporting the values for
action_authorization or value_authorization in the SMF manifest it
produces, resulting in the service not being allowed to set values in
svccfg when it runs
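If the manifest really is missing them, the properties can also be set on the
installed service directly; a sketch, with the service FMRI and authorization
name assumed rather than taken from the thread:

  # FMRI and authorization name below are assumptions, not from the thread
  svccfg -s auto-snapshot:daily setprop general/action_authorization = \
      astring: solaris.smf.manage.zfs-auto-snapshot
  svccfg -s auto-snapshot:daily setprop general/value_authorization = \
      astring: solaris.smf.manage.zfs-auto-snapshot
  svcadm refresh auto-snapshot:daily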
On Mar 9, 2010, at 9:40 AM, Matt Cowger wrote:
Ross is correct - advanced OS features are not required here - just the
ability to store a file - don’t even need unix style permissions
KISS. Just use tmpfs, though you might also consider limiting its size.
-- richard
ZFS storage and
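Limiting a tmpfs mount's size looks roughly like this (mount point and size are
arbitrary):

  # one-off mount with a size cap
  mount -F tmpfs -o size=512m swap /ramdisk
  # or the equivalent vfstab entry:
  # swap - /ramdisk tmpfs - yes size=512m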
On 03/ 9/10 10:53 AM, Cindy Swearingen wrote:
Hi D,
Is this a 32-bit system?
We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.
[First, a brief apology. I inadvertently posted this message to the
`general' group when it should have been to the `zfs' group.
In the last few days I seem to be all thumbs when posting... and have
created several bumbling posts to opensolaris lists.
]
summary:
A zfs fs set with smb and nfs
That's a very good point - in this particular case, there is no option to
change the blocksize for the application.
On 3/9/10 10:42 AM, Roch Bourbonnais roch.bourbonn...@sun.com wrote:
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table
Hello all,
I need to backup some zpools to tape. I currently have two servers,
for the purpose of this conversation we will call them server1 and
server2 respectively. Server1, has several zpools which are replicated
to a single zpool on server2 through a zfs send/recv script. This part
works
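For context, the simplest tape-facing variant of that approach is roughly the
following (snapshot, pool, and tape device names are made up):

  # write one full replication stream straight to the non-rewinding tape device
  zfs send -R backup@2010-03-09 > /dev/rmt/0n
  # and restore it later with
  zfs receive -Fd backup < /dev/rmt/0n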
Okay... I found the solution to my problem.
And it has nothing to do with my hard drives... It was the Realtek NIC drivers.
I read about problems and added a new driver (I got that from the forum
thread). And now I have about 30MB/s read and 25MB/s write performance. That's
enough (for the
Hi Harry,
Reviewing other postings where permanent errors were found on redundant
ZFS configs, one was resolved by re-running the zpool scrub and one
resolved itself because the files with the permanent errors were most
likely temporary files.
One of the files with permanent errors below is
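Re-running the scrub amounts to the following (pool name taken from earlier in
the thread):

  zpool scrub z3
  zpool status -v z3    # see whether the permanent-error list has shrunk
  zpool clear z3        # reset the error counters once the affected files are gone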
gd == Gregory Durham gregory.dur...@gmail.com writes:
gd it to mount on boot
I do not understand why you have a different at-boot-mounting problem
with and without lofiadm: either way it's your script doing the
importing explicitly, right? So just add lofiadm to your script. I
guess you
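i.e. the script would contain something like this (file, device, and pool names
are made up):

  #!/bin/sh
  # make the backing file available as a lofi device, then import from it
  lofiadm -a /export/images/backing.img
  zpool import -d /dev/lofi mypool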
My Laptop is a 64bit system
Dell Latitude D630
Intel Core2 Duo Processor T7100
4GB RAM
Cindy Swearingen cindy.swearin...@sun.com writes:
Hi Harry,
Reviewing other postings where permanent errors where found on
redundant ZFS configs, one was resolved by re-running the zpool scrub
and one
resolved itself because the files with the permanent errors were most
likely temporary
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
roch.bourbonn...@sun.com wrote:
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K zfs records and
16K reads/writes from the application.
Next
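That tuning amounts to the following (dataset name made up; recordsize only
affects blocks written after the change):

  zfs set recordsize=16k tank/smallio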
Could you retest it with mmap() used?
Olga
2010/3/9 Matt Cowger mcow...@salesforce.com:
It can, but doesn't in the command line shown below.
M
On Mar 8, 2010, at 6:04 PM, ольга крыжановская olga.kryzhanov...@gmail.com wrote:
Does iozone use mmap() for IO?
Olga
On Tue, Mar 9, 2010
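If memory serves, iozone selects mmap()-based I/O with -B, so the requested
rerun would look something like this (sizes and file path are arbitrary):

  iozone -B -i 0 -i 1 -s 1g -r 8k -f /pool/testfile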
Thank you for such a thorough look into my issue. As you said, I guess I am
down to trying to backup to a zvol and then backing that up to tape. Has anyone
tried this solution? I would be very interested to find out. Anyone else with
any other solutions?
Thanks!
Greg
This is a good point, and something that I tried. I limited the ARC to 1GB and
4GB (both well within the memory footprint of the system even with the
ramdisk). Equally poor results; this doesn't feel like the ARC fighting with
locked memory pages.
--M
-Original Message-
From: Ross
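For reference, capping the ARC is normally done in /etc/system (the value below
is 1GB in bytes; a reboot is required):

  set zfs:zfs_arc_max = 0x40000000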
Sorry, a full stripe on a RAID-Z is the recordsize, i.e. if the record size is 128k
on a RAID-Z and it's made up of 5 disks, then 128k is spread across 4 disks with
the calculated parity on the 5th disk, which means the writes are 32k to each disk.
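As a quick sanity check of that arithmetic (5-disk raidz1 = 4 data disks plus 1
parity disk):

  echo $((128 / 4))    # 32, i.e. a 32k chunk per data disk plus a 32k parity write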
For a RaidZ, when data is written to a disk, are individual
Hi All,
I had created a ZFS filesystem, test, and shared it with zfs set
sharenfs=root=host1 test, and I checked the sharenfs option and it had already
been updated to root=host1:
bash-3.00# zfs get sharenfs test
On 3/9/2010 4:57 PM, Harry Putnam wrote:
Also - it appears `zpool scrub -s z3' doesn't really do anything.
The status report above is taken immediately after a scrub command.
The `scrub -s' command just returns the prompt... no output and
apparently no scrub either.
The -s switch is
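For reference, -s stops an in-progress scrub rather than starting one:

  zpool scrub z3       # start (or restart) a scrub of the pool
  zpool scrub -s z3    # stop a scrub that is currently running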
Hi All,
I had created a ZFS filesystem, test, and shared it with zfs set
sharenfs=root=host1 test, and I checked the sharenfs option and it had
already been updated to root=host1:
Try using a backslash to escape those special characters, like so:
zfs set
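The full command being suggested is presumably something along these lines
(the network below is only an illustration; quoting the whole value also keeps
the shell away from the special characters):

  # access list is an example only, not the poster's actual network
  zfs set sharenfs='rw,root=@100.198.100.0/24' test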
On Mar 8, 2010, at 7:55 AM, Erik Trimble wrote:
Assume your machine has died the True Death, and you are starting with new
disks (and, at least a similar hardware setup).
I'm going to assume that you named the original snapshot
'rpool/ROOT/whate...@today'
(1) Boot off the OpenSolaris
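A very rough sketch of the remaining steps, assuming the snapshot stream was
saved to a file beforehand (device, file, and dataset names are made up):

  zpool create -f -o altroot=/mnt rpool c0t0d0s0
  zfs receive -Fdu rpool < /backup/rpool.today.zfs
  zpool set bootfs=rpool/ROOT/whatever rpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0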
On Mar 9, 2010, at 6:13 PM, Damon Atkins wrote:
Sorry, a full stripe on a RAID-Z is the recordsize, i.e. if the record size is 128k
on a RAID-Z and it's made up of 5 disks, then 128k is spread across 4 disks
with the calculated parity on the 5th disk, which means the writes are 32k to each
disk.
And I updated the sharenfs option with rw,ro...@100.198.100.0/24, and it works
fine; the NFS client can do the write without error.
Thanks.
Hi, thanks for the reply... I guess I'm so far as well, but my question is
targeted at understanding the real-world implication of the kernel software
memory scrubber.
That is, in looking through the code a bit I notice that if hardware ECC is
active the software scrubber is disabled. It is
Christian Hessmann wrote:
Victor,
Btw, they affect some files referenced by snapshots as
'zpool status -v' suggests:
tank/DVD:0x9cd tank/d...@2010025100:/Memento.m4v
tank/d...@2010025100:/Payback.m4v
tank/d...@2010025100:/TheManWhoWasntThere.m4v
In case of OpenSolaris it is
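If the damaged blocks are only referenced by those snapshots, the usual way out
is to remove the snapshots and scrub again; a sketch, with the dataset and
snapshot names guessed since they are obfuscated above:

  # dataset/snapshot names below are guesses based on the obfuscated listing
  zfs destroy tank/DVD@2010025100
  zpool scrub tank
  zpool status -v tank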