- Brandon High bh...@freaks.com wrote:
On Sun, May 30, 2010 at 11:46 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net wrote:
Is there a way to report zpool/zfs stats in a fixed scale, like KiB
or even bytes?
Some (but not all) commands use -p.
-p
Use exact (parsable) numeric values.
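For example (the pool name "tank" here is only an illustration), compare the human-scaled and exact output of zfs get:

  # human-scaled value, e.g. 840M
  zfs get used tank
  # exact byte count, suitable for scripts
  zfs get -p used tank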
I have an odd setup at present, because I'm testing while still building my
machine.
It's an Intel Atom D510 motherboard running snv_134, with 2GB RAM and two SATA drives (AHCI):
1: Samsung 250GB old laptop drive
2: WD Green 1.5TB drive (idle3 turned off)
Ultimately, it will be a time machine backup for my
Hi,
I created a couple of ZFS filesystems:
datapool
datapool/system
datapool/system/mikkel
datapool/users
datapool/users/john
...
I have set com.sun:auto-snapshot:daily to true on datapool/users and used
zfs inherit -r so it applies below datapool/users.
datapool/users/john  com.sun:auto-snapshot:daily  true  inherited
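For reference, a minimal sketch of setting the property and verifying where each dataset gets its value from, using the dataset names above:

  zfs set com.sun:auto-snapshot:daily=true datapool/users
  # SOURCE shows 'local' on datapool/users and 'inherited' on the children
  zfs get -r com.sun:auto-snapshot:daily datapool/users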
$ zfs list -t filesystem
NAME                  USED  AVAIL  REFER  MOUNTPOINT
datapool              840M  25.5G    21K  /datapool
datapool/virtualbox   839M  25.5G   839M  /virtualbox
mypool               8.83G  6.92G    82K  /mypool
I wrongly said myr...@dbus_gam_server_race_partly_solved-6pm-may30-2010. I
meant mypool.
This is the send command that failed:
time zfs send -Rv mypool@dbus_gam_server_race_partly_solved-6pm-may30-2010 |
ssh 192.168.0.6 zfs recv -vuF zfs-backup/opensolaris-backup/mypool
OK, I have no idea what ZFS is smoking...:-)
I was able to send the individual datasets to the backup server.
zfs-backup/opensolaris-backup/mypool        11.5G  197G   82K  /zfs-backup/opensolaris-backup/mypool
zfs-backup/opensolaris-backup/mypool/ROOT
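For anyone following along, a sketch of what sending the datasets individually can look like when a recursive send fails; the dataset list below is an assumption based on the receive-side output above:

  snap=dbus_gam_server_race_partly_solved-6pm-may30-2010
  for ds in mypool mypool/ROOT; do
    zfs send "$ds@$snap" | \
      ssh 192.168.0.6 zfs recv -vuF "zfs-backup/opensolaris-backup/$ds"
  done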
Scrub has turned up clean again:
  pool: mypool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
Can you post zpool status?
Are your drives all the same size?
-r
On 30 May 2010, at 23:37, Sandon Van Ness wrote:
I just wanted to make sure this is normal and expected. I fully
expected that as the file-system filled up I would see more disk space
being used than with other
Use
zfs get -Hp used pool1/nfs1
to get a parsable output.
Andreas
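To show why -Hp matters in scripts (same dataset as above): -H drops the header and tab-separates the fields, and -p prints exact byte counts.

  # capture the exact 'used' byte count in a shell variable
  used=$(zfs get -Hp -o value used pool1/nfs1)
  echo "pool1/nfs1 uses ${used} bytes"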
Hi.
Sorry, it took a bit longer because I had to reinstall OSOL.
With a fresh snv_134 install the commands you gave worked. I just ran them
and did a soft reboot.
The changes between a fresh install and my previous one are pretty small. If you
are a developer I can email you my changes and maybe that
- Andreas Grüninger andreas.gruenin...@iz.bwl.de wrote:
Use
zfs get -Hp used pool1/nfs1
to get a parsable output.
r...@mime:~$ zfs get -Hp testpool
bad property list: invalid property 'testpool'
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
On Mon, May 31, 2010 at 11:25 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net wrote:
- Andreas Grüninger andreas.gruenin...@iz.bwl.de wrote:
Use
zfs get -Hp used pool1/nfs1
to get a parsable output.
r...@mime:~$ zfs get -Hp testpool
bad property list: invalid property 'testpool'
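The error above is because zfs get expects a property name (or the keyword 'all') before the dataset:

  zfs get -Hp all testpool    # every property, parsable
  zfs get -Hp used testpool   # just the 'used' property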
On 05/31/2010 09:03 AM, Roch Bourbonnais wrote:
Can you post zpool status?
Are your drives all the same size?
-r
Here is zpool status for my 'data' pool:
r...@opensolaris: 11:43 AM :~# zpool status data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated by the writes.
You didn't say which Solaris you are using.
On Mon, 31 May 2010, Matt Connolly wrote:
Watching `zpool iostat -v 2` I could see that quite often drive 1
would write a big chunk of data and then wait for ages for drive 2
to write the same data to disc.
Could it be that there is a separate cache for the mirror that was
stalling waiting
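One way to check whether one side of a mirror is the laggard is per-device service times with standard Solaris iostat (interval and count below are arbitrary):

  # -x extended stats, -n descriptive device names; compare asvc_t per drive
  iostat -xn 2 10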
On 05/31/2010 01:13 PM, Bob Friesenhahn wrote:
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated
On Mon, 31 May 2010, Sandon Van Ness wrote:
6586537 async zio taskqs can block out userland commands
Bob
I am using opensolaris snv_134:
r...@opensolaris: 01:32 PM :~# uname -a
SunOS opensolaris 5.11 snv_134 i86pc i386 i86pc
Is there a setting to change the CPU scheduler for the ZFS
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks are run in a
scheduling class at lower priority than normal user processes.
However, there is also a throttling mechanism for
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks are run in a
scheduling class at lower priority
On 05/31/2010 02:32 PM, Sandon Van Ness wrote:
Well, it seems that messing with the txg sync times and related settings
did make the transfer smoother, but it didn't actually help with
speed, as it just meant the hangs happened for a shorter time at a
smaller interval, and actually
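For reference, a sketch of the kind of txg tuning being discussed; zfs_txg_timeout is a real OpenSolaris-era tunable, but the value below is only an example:

  # /etc/system: shorten the txg commit interval (takes effect after reboot)
  set zfs:zfs_txg_timeout = 5

  # or adjust the live kernel with mdb (0t5 = decimal 5)
  echo "zfs_txg_timeout/W 0t5" | mdb -kw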
On 05/31/2010 02:52 PM, Mike Gerdts wrote:
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 01:51 PM, Bob Friesenhahn wrote:
There are multiple factors at work. Your OpenSolaris should be new
enough to have the fix in which the zfs I/O tasks
On 05/31/2010 01:13 PM, Bob Friesenhahn wrote:
On Sun, 30 May 2010, Sandon Van Ness wrote:
The problem is that when it does the write burst it's taking away CPU
from rsync, which is actually what might be causing the dip during
writes (not the I/O activity itself) but the CPU generated
Sorry, turned on html mode to avoid gmail's line wrapping.
On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness san...@van-ness.com wrote:
On 05/31/2010 02:52 PM, Mike Gerdts wrote:
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com
wrote:
On 05/31/2010 01:51 PM, Bob
Hi,
The subject says it all.
Thanks.
Fred
On May 31, 2010, at 4:20 PM, Fred Liu wrote:
Hi,
The subject says it all.
Yes.
The reply says it all. :-)
Making it happen is a feature of the installer(s).
-- richard
--
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
On Mon, 31 May 2010, Sandon Van Ness wrote:
I think I have come to the conclusion that the problem here is CPU, due
to the fact that it's only doing this with parity raid. I would think if
it was I/O-based then it would be the same; if anything it's heavier on
I/O on non-parity raid due to the
On Mon, May 31, 2010 at 1:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
I think that the problem is likely that this Western Digital drive is one of
the new ones using 4K sectors internally. Almost everyone obtains terrible
write performance from zfs using these drives. I suggest
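A quick way to see whether a pool was built with 512-byte or 4K alignment is the vdev ashift in zdb's config dump (ashift=9 means 512-byte sectors, ashift=12 means 4K); whether -C shows it on a given build is worth double-checking:

  zdb -C data | grep ashift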
On 05/31/2010 04:45 PM, Bob Friesenhahn wrote:
On Mon, 31 May 2010, Sandon Van Ness wrote:
I think I have come to the conclusion that the problem here is CPU, due
to the fact that it's only doing this with parity raid. I would think if
it was I/O-based then it would be the same as if anything
On 06/01/10 07:16 AM, Sandon Van Ness wrote:
Here is zpool status for my 'data' pool:
r...@opensolaris: 11:43 AM :~# zpool status data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data