On 7-08-2008 at 13:20, Borys Saulyak wrote:
Hi,
I have a problem with Solaris 10. I know that this forum is for
OpenSolaris, but maybe someone will have an idea.
My box is crashing on any attempt to import a ZFS pool. The first crash
happened on an export operation and since then I cannot
I have a problem with zpool import after having
problems with 2 disks in RAID 5 (hardware RAID). There are some bad blocks on
those disks.
#zpool import
..
state: FAULTED
status: The pool metadata is corrupted.
..
#zdb -l /dev/rdsk/c4t600C0FF009258F4855B59001d0s0
is OK.
I managed
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Other backup systems (disk-to-disk or block-to-block)
On 10-01-2008 at 16:11, Jim Dunham wrote:
Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As I know, AVS doesn't support ZFS - there is a problem
On 10-01-2008 at 17:45, eric kustarz wrote:
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As I know, AVS doesn't
name.
Regards
Lukas
On 11/7/07, Łukasz K [EMAIL PROTECTED] wrote:
Hi, I think your problem is filesystem fragmentation. When available space is
less than 40%, ZFS might have problems with finding free blocks. Use this
script to check it:
#!/usr/sbin/dtrace -s
fbt::space_map_alloc:entry
{
    self->s = arg1;
}
fbt
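A complete version of that script might look like the following sketch; the probe
bodies after the entry clause are an assumption, modeled on the
metaslab_group_alloc variant quoted later in this digest:

#!/usr/sbin/dtrace -s

/* remember the size requested from the space map */
fbt::space_map_alloc:entry
{
    self->s = arg1;
}

/* allocation succeeded: forget the size */
fbt::space_map_alloc:return
/arg1 != -1/
{
    self->s = 0;
}

/* allocation failed: record the size that could not be found */
fbt::space_map_alloc:return
/self->s && arg1 == -1/
{
    @s = quantize(self->s);
    self->s = 0;
}

/* print a histogram of the failed allocation sizes every 10 seconds */
tick-10s
{
    printa(@s);
    trunc(@s);
}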
Now space maps, intent log, spa history are compressed.
All normal metadata (including space maps and spa history) is always
compressed. The intent log is never compressed.
Can you tell me where the space map is compressed?
The buffer is filled up with:
468 *entry++ =
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
I have a huge problem with space maps on thumper. Space maps take over 3GB
and write operations generate massive read operations.
Before every spa sync phase ZFS reads space maps from disk.
I decided to turn on compression for the pool
I have a huge problem with space maps on thumper. Space maps take over 3GB
and write operations generate massive read operations.
Before every spa sync phase ZFS reads space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems)
and it helps.
Now space
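A sketch of that workaround with hypothetical names (a pool tank with filesystems
tank/fs1 and tank/fs2): compression is enabled only on the pool's root dataset and
explicitly turned off on the children, so the file data in the filesystems stays
uncompressed:
#zfs set compression=on tank
#zfs set compression=off tank/fs1
#zfs set compression=off tank/fs2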
On 23-08-2007 at 22:15, Igor Brezac wrote:
We are on Solaris 10 U3 with relatively recent recommended patches
applied. zfs destroy of a filesystem takes a very long time; 20GB usage
and about 5 million objects takes about 10 minutes to destroy. zfs pool
is a 2 drive stripe,
I think you have a problem with pool fragmentation. We have the same problem,
and changing recordsize will help. You have to set a smaller recordsize for the pool
(all filesystems must use the same or a smaller recordsize). First check if you have
problems with finding blocks with this dtrace script:
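A plausible form of that check is the quick one-liner quoted later in this digest;
run as root, it fires whenever metaslab_group_alloc fails to find a block of the
requested size (a sketch, not necessarily the poster's exact script):
dtrace -n fbt::metaslab_group_alloc:return'/arg1 == -1/{}'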
On 26-07-2007 at 13:31, Robert Milkowski wrote:
Hello Victor,
Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
VL> Gino wrote:
Same problem here (snv_60).
Robert, did you find any solutions?
VL> A couple of weeks ago I put together an implementation of space maps which
VL>
Same problem here (snv_60).
Robert, did you find any solutions?
gino
check this: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
Check the spa_sync function time (remember to change POOL_NAME!):
dtrace -q -n fbt::spa_sync:entry'/(char *)(((spa_t*)arg0)->spa_name) == POOL_NAME/{
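A complete version of that timing command might look like the sketch below; the
stringof() comparison and the millisecond arithmetic are assumptions rather than
the original poster's exact script:
dtrace -q \
  -n 'fbt::spa_sync:entry /stringof(((spa_t *)arg0)->spa_name) == "POOL_NAME"/ { self->t = timestamp; }' \
  -n 'fbt::spa_sync:return /self->t/ { printf("spa_sync took %d ms\n", (timestamp - self->t) / 1000000); self->t = 0; }'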
Ł> I want to parallelize zfs send to make it faster.
Ł> dmu_sendbackup could allocate a buffer that will be used for buffering output.
Ł> A few threads can traverse the dataset, and a few threads would be used for async read operations.
Ł> I think it could speed up the zfs send operation 10x.
Ł> What
I have a huge problem with ZFS pool fragmentation.
I started investigating the problem about 2 weeks ago:
http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
I found a workaround for now - changing recordsize - but I want a better solution.
The best solution would be a defragmenter
When tuning recordsize for things like databases, we try to recommend
that the customer's recordsize match the I/O size of the database record.
On this filesystem I have:
- file links, and they are rather static
- small files (about 8kB) that keep changing
- big files (1MB - 20MB)
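Given the small files above, a sketch of matching recordsize to their I/O size,
with a hypothetical dataset name; note that a recordsize change only affects files
written after the change:
#zfs set recordsize=8k tank/data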
The field ms_smo.smo_objsize in the metaslab struct is the size of the data on disk.
I checked the size of metaslabs in memory:
::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes
I got 1GB
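These ::walk and ::print pipelines are mdb dcmds; a sketch of running one of them
non-interactively against the live kernel (as root):
#echo "::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes" | mdb -k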
But only some metaslabs are loaded:
::walk spa | ::walk metaslab | ::print struct metaslab
After a few hours with dtrace and source code browsing I found that in my space
map there are no 128K blocks left.
Try this on your ZFS:
dtrace -n fbt::metaslab_group_alloc:return'/arg1 == -1/{}'
If you get probes, then you also have the same problem.
Allocating from space map works like
If you want to know which blocks you do not have:
dtrace -n fbt::metaslab_group_alloc:entry'{ self->s = arg1; }' -n
fbt::metaslab_group_alloc:return'/arg1 != -1/{ self->s = 0; }' -n
fbt::metaslab_group_alloc:return'/self->s && (arg1 == -1)/{ @s =
quantize(self->s); self->s = 0; }' -n tick-10s'{ printa(@s); }'
Hello,
I'm investigating a problem with ZFS over NFS.
The problems started about 2 weeks ago; most NFS threads are hanging in
txg_wait_open.
The sync thread is consuming one processor all the time.
The average spa_sync function time from entry to return is 2 minutes.
I can't use dtrace to examine
I have to back up many filesystems, which are changing, and the machines are heavily
loaded.
The idea is to back up online - this should avoid I/O read operations from the disks;
the data should come from the cache.
Now I'm using a script that does a snapshot and zfs send.
I want to automate this operation and add new
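A minimal sketch of such a snapshot-and-send script, with hypothetical dataset and
host names, and assuming the receiving side already holds the snapshot that becomes
@previous:
#!/bin/sh
# rotate snapshots: previous <- latest, then create a new latest (names are examples)
FS=tank/data
zfs destroy $FS@previous
zfs rename $FS@latest $FS@previous
zfs snapshot $FS@latest
# send only the changes between the two snapshots to the backup host
zfs send -i $FS@previous $FS@latest | ssh backuphost zfs receive -F backup/data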
I have another question about replication in this thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=27082&tstart=0
Out of curiosity, what is the timing difference between a userland script
and performing the operations in the kernel?
[EMAIL PROTECTED] ~]# time zfs destroy solaris/[EMAIL PROTECTED] ; time zfs
rename solaris/[EMAIL PROTECTED] solaris/[EMAIL PROTECTED]; time zfs snapshot
solaris/[EMAIL
When I'm trying to do it in the kernel, in a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST -> PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
How it got that way, I couldn't really say without looking at your code.
It works like this:
In the new ioctl operation
zfs_ioc_replicate_send(zfs_cmd_t *zc)
we open the filesystem (not the snapshot):
dmu_objset_open(zc->zc_name, DMU_OST_ANY,
DS_MODE_STANDARD |
Thanks for the advice.
I removed my buffers snap_previous and snap_latest and it helped.
I'm using zc->value as the buffer.