Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top etc are quite
worthless, since zfs i/o load is not reported as iowait etc. Are there any
plans to rewrite the old performance monitoring tools or the zfs parts to allow
for standard monitoring tools? If not, what other
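For reference: even though ZFS I/O does not show up as iowait, there are ZFS-aware views today. A minimal sketch, assuming a pool named 'tank' (hypothetical):

    # per-pool / per-vdev I/O, refreshed every 5 seconds
    zpool iostat -v tank 5
    # per-filesystem-type activity for ZFS
    fsstat zfs 5
    # per-disk view still works regardless of filesystem
    iostat -xnz 5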
On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top etc are quite
worthless, since zfs i/o load is not reported as iowait etc. Are there any
plans to rewrite the old performance monitoring tools or the zfs parts to allow
for
- Michael Schuster michael.schus...@oracle.com wrote:
On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top
etc are quite worthless, since zfs i/o load is not reported as iowait
etc. Are there any plans to rewrite the
erik.ableson said: Just a quick comment for the send/recv operations, adding
-R makes it recursive so you only need one line to send the rpool and all
descendant filesystems.
Yes, I know of the -R flag, but it doesn't seem to work with sending loose
snapshots to the backup pool. It obviously
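For comparison, the recursive replication that -R is meant for looks roughly like this; a sketch only, assuming a backup pool named 'backup' (hypothetical) and a receive side new enough to support -u:

    # snapshot the whole tree, then send it as one replication stream
    zfs snapshot -r rpool@backup-today
    zfs send -R rpool@backup-today | zfs receive -Fdu backup/rpool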
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient dedup is. The problem is that I've found
no way of checking this per file system.
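As far as I know the ratio is only reported pool-wide, not per file system; a quick check, assuming a pool named 'tank' (hypothetical):

    # the DEDUP column shows the pool-wide dedup ratio
    zpool list tank
    # more detail on the dedup table (DDT)
    zdb -DD tank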
Hi Eric,
Problem is the OP is mixing client 4k drives with 512b drives.
How do you come to that assessment?
Here's what I have:
Ap_Id Information
sata1/1::dsk/c7t1d0   Mod: WDC WD10EADS-00L5B1 FRev: 01.01A01
sata1/2::dsk/c7t2d0   Mod: WDC
On 05/ 7/10 10:07 PM, Bill McGonigle wrote:
On 05/07/2010 11:08 AM, Edward Ned Harvey wrote:
I'm going to continue encouraging you to staying mainstream,
because what
people do the most is usually what's supported the best.
If I may be the contrarian, I hope Matt keeps experimenting with
Hi,
This thread refers to Solaris 10, but it was suggested that I post it here as
ZFS developers may well be more likely to respond.
http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
Basically, after about 1000 ZFS filesystem creations, the creation time slows
down
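One way to reproduce and measure the slowdown (a sketch only; pool and dataset names are made up):

    # time each creation; the 'real' time reportedly grows after ~1000 filesystems
    i=1
    while [ $i -le 2000 ]; do
        ptime zfs create tank/test/fs$i 2>&1 | grep real
        i=`expr $i + 1`
    done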
Hello,
I have a situation where a zfs file server holding lots of graphic files cannot
be backed up daily with a full backup.
My idea was initially to run a full backup on Sunday through the lto library on
more dedicated tapes, then have an incremental backup run on daily tapes.
Brainstorming on
Gabriele Bulfon wrote:
Hello,
I have a situation where a zfs file server holding lots of graphic files cannot
be backed up daily with a full backup.
My idea was initially to run a full backup on Sunday through the lto library on
more dedicated tapes, then have an incremental backup run on
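A rough outline of the snapshot-based variant, assuming a dataset tank/graphics (hypothetical) and that the backup software can consume zfs send streams or files:

    # Sunday: full snapshot and full stream to the weekly tape set
    zfs snapshot tank/graphics@sun
    zfs send tank/graphics@sun > /dev/rmt/0n
    # weekdays: snapshot and send only the changes since Sunday
    zfs snapshot tank/graphics@mon
    zfs send -i tank/graphics@sun tank/graphics@mon > /dev/rmt/0n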
On 10 May, 2010 - charles sent me these 0,8K bytes:
Hi,
This thread refers to Solaris 10, but it was suggested that I post it here as
ZFS developers may well be more likely to respond.
http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
Basically after about
Darren J Moffat wrote 2010-05-10 10:58:
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient dedup is. The problem is that I've found
- charles ce...@cam.ac.uk wrote:
Hi,
This thread refers to Solaris 10, but it was suggested that I post it
here as ZFS developers may well be more likely to respond.
http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
Basically after about 1000 ZFS
- Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
- charles ce...@cam.ac.uk wrote:
Hi,
This thread refers to Solaris 10, but it was suggested that I post
it
here as ZFS developers may well be more likely to respond.
Ian Collins i...@ianshome.com wrote:
Run cfgadm -c configure on the unconfigured Ids, see the man page for
the gory details.
IF the BIOS is OK ;-)
I have a problem with a DELL PC: If I disable the other SATA ports, Solaris
is unable to detect new drives (linux does). If I enable other
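For reference, the configure step looks like this (the attachment point id is just an example taken from the listing above):

    # list attachment points, then configure the unconfigured one
    cfgadm -al
    cfgadm -c configure sata1/1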
On May 10, 2010, at 12:16 AM, Roy Sigurd Karlsbakk wrote:
- Michael Schuster michael.schus...@oracle.com wrote:
On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top
etc are quite worthless, since zfs i/o load is not
On Mon, May 10, 2010 at 7:57 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net
wrote:
Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top etc are
quite worthless, since zfs i/o load is not reported as iowait etc. Are there
any plans to rewrite the old performance monitoring
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created raidz2
with a few slices on a single disk. I was expecting a good read/write
performance but I got the speed of 12-15MBps.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
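raidz2 over slices of a single disk gives neither redundancy nor speed, since every write hits the same spindle several times; the layout is designed for separate physical disks. A sketch, with made-up device names:

    # one raidz2 vdev across five separate disks
    zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    zpool status tank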
Abhishek Gupta wrote:
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created
raidz2 with a few slices on a single disk. I was expecting a good
read/write performance but I got the speed of 12-15MBps.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
You
On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli geo...@gnaa.net wrote:
I am looking at using 8K block size on the zfs volume.
8k is the default for zvols.
I was looking at the comstar iscsi settings and there is also a blk size
configuration, which defaults as 512 bytes. That would make me
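For reference, the zvol block size is set at creation time via volblocksize; a sketch with hypothetical names (8k is already the default, so setting it explicitly is only for clarity):

    zfs create -V 100G -o volblocksize=8k tank/vol1
    zfs get volblocksize tank/vol1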
- Brandon High bh...@freaks.com wrote:
On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli geo...@gnaa.net wrote:
I am looking at using 8K block size on the zfs volume.
8k is the default for zvols.
So with a 1TB zvol with the default blocksize, dedup is done on 8k blocks? If so,
some 32 gigs of
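Back-of-the-envelope for that figure, assuming roughly 250 bytes per dedup-table entry (a commonly quoted estimate, not an authoritative number):

    # 1 TiB / 8 KiB blocks = 134,217,728 entries if every block is unique
    # 134,217,728 * 250 bytes is about 31 GiB of DDT
    echo "2^40 / 2^13 * 250 / 2^30" | bc -l    # prints 31.25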
On Sun, May 9, 2010 at 11:16 AM, Jim Horng jho...@stretchinc.com wrote:
zfs send tank/export/projects/project1...@today | zfs receive -d mpool
This won't get any snapshots before @today, which may lead to the
received size being smaller.
I've also noticed that different pool types (eg: raidz
It sounds like you are looking for AVS.
Consider a replication scenario where A is primary and B, secondary
and A fails. Say you get A up again on Monday AM, but you are unable
to summarily shut down B to bring A back online until Friday evening.
During that whole time, you will not have a
- Jim Horng jho...@stretchinc.com wrote:
zfs send tank/export/projects/project1...@today | zfs receive -d
mpool
Perhaps zfs send -R is what you're looking for...
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy, it is essential
bh == Brandon High bh...@freaks.com writes:
bh The drive should be on the same USB port because the device
bh path is saved in the zpool.cache. If you removed the
bh zpool.cache, it wouldn't matter where the drive was plugged
bh in.
I thought it was supposed to go by devid.
On May 10, 2010, at 9:06 AM, Bob Friesenhahn wrote:
On Mon, 10 May 2010, Thomas Tornblom wrote:
Sorry, but this is incorrect.
Solaris (2 if you will) does indeed swap processes out when normal paging is
deemed insufficient.
See the chapters on Soft and Hard swapping in:
mg == Mike Gerdts mger...@gmail.com writes:
mg If Solaris is under memory pressure, [...]
mg The best thing to do with processes that can be swapped out
mg forever is to not run them.
Many programs allocate memory they never use. Linux allows
overcommitting by default (but
I was expecting
zfs send tank/export/projects/project1...@today
would send everything up to @today. That is the only snapshot and I am not
using the -i options.
The thing that worries me is that tank/export/projects/project1_nb was the first
file system that I tested with full dedup and
Hi again,
As for the NFS issue I mentioned before, I made sure the NFS server
was working and was able to export before I attempted to import
anything, then I started a new zpool import backup: -- my hope was
that the NFS share was causing the issue, since the only filesystem
shared is
Howdy Eduardo,
Recently I had a similar issue where the pool wouldn't import and attempting to
import it would essentially lock the server up. Finally I used pfexec zpool
import -F pool1 and simply let it do its thing. After almost 60 hours the
import finished and all has been well since
Is Time Slider available in Solaris10? Or just in Opensolaris?
I am running Solaris 10 5/09 s10x_u7wos_08 X86 and wanted to automate my
snapshots.
From reading blogs, it seems zfs-auto-snapshot is obsolete and was/is
being replaced by time-slider. But I cannot seem to find it for
Solaris10.
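On OpenSolaris, Time Slider drives a set of auto-snapshot SMF services; whether the same bits can be made to work on Solaris 10 is exactly the open question here. For reference, the OpenSolaris side looks roughly like:

    # list and enable the auto-snapshot service instances
    svcs | grep auto-snapshot
    svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily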
I believe that Time Slider is just a front end for zfs-auto-snapshot.
John
On May 10, 2010, at 1:17 PM, Mary Ellen Fitzpatrick wrote:
Is Time Slider available in Solaris10? Or just in Opensolaris?
I am running Solaris 10 5/09 s10x_u7wos_08 X86 and wanted to automate my
snapshots. From
On May 10, 2010, at 4:46 PM, John Balestrini wrote:
Recently I had a similar issue where the pool wouldn't import and
attempting to import it would essentially lock the server up.
Finally I used pfexec zpool import -F pool1 and simply let it do
its thing. After almost 60 hours the
Oh.. thanks..
I did download the latest zfs-auto-snapshot:
zfs-snapshot-0.11.2
Is there a more recent version?
John Balestrini wrote:
I believe that Time Slider is just a front end for zfs-auto-snapshot.
John
On May 10, 2010, at 1:17 PM, Mary Ellen Fitzpatrick wrote:
Is Time
-Original Message-
From: Brandon High [mailto:bh...@freaks.com]
Sent: Monday, May 10, 2010 9:55 AM
On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli geo...@gnaa.net wrote:
I am looking at using 8K block size on the zfs volume.
8k is the default for zvols.
You are right, I didn't look at
Hi Eduardo,
Please use the following steps to collect more information:
1. Use the following command to get the PID of the zpool import process,
like this:
# ps -ef | grep zpool
2. Use the actual PID of zpool import found in step 1 in the following
command, like this:
echo 0tPID of zpool
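The command above is cut off; a common form of this mdb invocation (an assumption about what was intended; replace 12345 with the actual PID) is:

    # dump the kernel stacks of the zpool import process' threads (run as root)
    echo "0t12345::pid2proc|::walk thread|::findstack -v" | mdb -k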
On May 10, 2010, at 6:28 PM, Cindy Swearingen wrote:
Hi Eduardo,
Please use the following steps to collect more information:
1. Use the following command to get the PID of the zpool import
process,
like this:
# ps -ef | grep zpool
2. Use the actual PID of zpool import found in step 1 in
On Mon, May 10, 2010 at 1:53 PM, Geoff Nordli geo...@gnaa.net wrote:
You are right, I didn't look at that property, and instead I was focused on
the record size property.
zvols don't have a recordsize - That's a property of filesystem
datasets, not volumes.
When I look at the stmfadm list-lu
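For reference, the two block-size knobs live in different places; a sketch with hypothetical dataset names:

    # zvols have volblocksize, filesystems have recordsize
    zfs get volblocksize tank/vol1
    zfs get recordsize tank/fs1
    # the COMSTAR logical unit's own block size is shown separately
    stmfadm list-lu -v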
On Mon, May 10, 2010 at 3:53 PM, Geoff Nordli geo...@gnaa.net wrote:
Doesn't this alignment have more to do with aligning writes to the
stripe/segment size of a traditional storage array? The articles I am
It is a lot like a stripe / segment size. If you want to think of it
in those terms,
Did the fix for 6733267 make it to 134a (2010.05)? It isn't marked fixed, and I
couldn't find it anywhere in the changelogs. Does that mean we'll have to wait
for 2010.11 (or whatever v+2 is named)?
Thanks,
Moshe
--
This message posted from opensolaris.org
After a rather fruitless non-committal exchange with OCZ, I'd like to know if
there is any experience in this community with the OCZ Z-Drive...
In particular, is it possible (and worthwhile) to put the device in JBOD as
opposed to RAID-0 mode... an entry-level flashfire f20 'sort' of card...