After a rather fruitless non-committal exchange with OCZ, I'd like to know if
there is any experience in this community with the OCZ Z-Drive...
In particular, is it possible (and worthwhile) to put the device in JBOD as
opposed to RAID-0 mode... a sort of entry-level Flashfire F20 card...
Did the fix for 6733267 make it to 134a (2010.05)? It isn't marked fixed, and I
couldn't find it anywhere in the changelogs. Does that mean we'll have to wait
for 2010.11 (or whatever v+2 is named)?
Thanks,
Moshe
On Mon, May 10, 2010 at 3:53 PM, Geoff Nordli wrote:
> Doesn't this alignment have more to do with aligning writes to the
> stripe/segment size of a traditional storage array? The articles I am
It is a lot like a stripe / segment size. If you want to think of it
in those terms, you've got a segm
>-Original Message-
>From: Brandon High [mailto:bh...@freaks.com]
>Sent: Monday, May 10, 2010 3:12 PM
>
>On Mon, May 10, 2010 at 1:53 PM, Geoff Nordli wrote:
>> You are right, I didn't look at that property, and instead I was
>> focused on the record size property.
>
>zvols don't have a
On Mon, May 10, 2010 at 1:53 PM, Geoff Nordli wrote:
> You are right, I didn't look at that property, and instead I was focused on
> the record size property.
zvols don't have a recordsize - That's a property of filesystem
datasets, not volumes.
> When I look at the stmfadm list-lu -v it s
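A minimal illustration of the difference, with placeholder dataset names:
  zfs get recordsize tank/fs             # filesystem datasets have recordsize
  zfs create -V 10G -o volblocksize=8k tank/vol
  zfs get volblocksize tank/vol          # zvols have volblocksize, fixed at creation time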
On 10/05/2010 16:52, Peter Tribble wrote:
For zfs, zpool iostat has some utility, but I find fsstat to be pretty useful.
iostat, zpool iostat, fsstat - all of them are very useful and allow you
to monitor I/O on different levels.
And of course dtrace io, fsinfo and syscall providers are
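For example (tank is a placeholder pool name):
  iostat -xn 5                  # physical device level
  zpool iostat -v tank 5        # pool and vdev level
  fsstat zfs 5                  # VFS level, aggregated over zfs mounts
  # per-mountpoint read counts via the DTrace fsinfo provider:
  dtrace -n 'fsinfo:::read /args[0]->fi_fs == "zfs"/ { @[args[0]->fi_mount] = count(); }'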
On May 10, 2010, at 6:28 PM, Cindy Swearingen wrote:
Hi Eduardo,
Please use the following steps to collect more information:
1. Use the following command to get the PID of the zpool import process, like this:
# ps -ef | grep zpool
2. Use the actual PID found in step 1 in the following com
Hi Eduardo,
Please use the following steps to collect more information:
1. Use the following command to get the PID of the zpool import process,
like this:
# ps -ef | grep zpool
2. Use the actual PID found in step 1 in the following
command, like this:
echo "0t<PID>::pid2proc|::walk thread|::findsta
>-Original Message-
>From: Brandon High [mailto:bh...@freaks.com]
>Sent: Monday, May 10, 2010 9:55 AM
>
>On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli wrote:
>> I am looking at using 8K block size on the zfs volume.
>
>8k is the default for zvols.
>
You are right, I didn't look at that p
Oh.. thanks..
I did download the latest zfs-auto-snapshot:
zfs-snapshot-0.11.2
Is there a more recent version?
John Balestrini wrote:
I believe that Time Slider is just a front end for zfs-auto-snapshot.
John
On May 10, 2010, at 1:17 PM, Mary Ellen Fitzpatrick wrote:
Is Time Slider
On May 10, 2010, at 4:46 PM, John Balestrini wrote:
Recently I had a similar issue where the pool wouldn't import and
attempting to import it would essentially lock the server up.
Finally I used pfexec zpool import -F pool1 and simply let it do
its thing. After almost 60 hours the imported
I believe that Time Slider is just a front end for zfs-auto-snapshot.
John
On May 10, 2010, at 1:17 PM, Mary Ellen Fitzpatrick wrote:
> Is Time Slider available in Solaris10? Or just in Opensolaris?
> I am running Solaris 10 5/09 s10x_u7wos_08 X86 and wanted to automate my
> snapshots. From r
Is Time Slider available in Solaris 10? Or just in OpenSolaris?
I am running Solaris 10 5/09 s10x_u7wos_08 X86 and wanted to automate my
snapshots.
From reading blogs, it seems zfs-auto-snapshot is obsolete and was/is
being replaced by time-slider. But I cannot seem to find it for
Solaris 10.
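If Time Slider turns out not to be shipped with Solaris 10, the underlying operation is just zfs snapshot, which can be driven from cron; a rough sketch with a placeholder dataset name:
  # take a timestamped snapshot of the home dataset (run by hand or from cron)
  zfs snapshot tank/home@auto-`date +%Y%m%d%H%M`
  zfs list -t snapshot -r tank/home     # list the snapshots taken so far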
Howdy Eduardo,
Recently I had a similar issue where the pool wouldn't import and attempting to
import it would essentially lock the server up. Finally I used pfexec zpool
import -F pool1 and simply let it do its thing. After almost 60 hours the
import finished and all has been well since (ex
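For what it's worth, the same recovery can be previewed first with -n before committing (pool name as above):
  pfexec zpool import -nF pool1   # report what -F would roll back, without importing
  pfexec zpool import -F pool1    # actual recovery import; may take a long time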
Hi again,
As for the NFS issue I mentioned before, I made sure the NFS server
was working and was able to export before I attempted to import
anything, then I started a new "zpool import backup" -- my hope was
that the NFS share was causing the issue, since the only filesystem
shared is t
I was expecting
zfs send tank/export/projects/project1...@today
would send everything up to @today. That is the only snapshot and I am not
using the -i options.
The thing that worries me is that tank/export/projects/project1_nb was the first
file system that I tested with full dedup and compression
> "mg" == Mike Gerdts writes:
mg> If Solaris is under memory pressure, [...]
mg> The best thing to do with processes that can be swapped out
mg> forever is to not run them.
Many programs allocate memory they never use. Linux allows
overcommitting by default (though it can be disabled), b
On May 10, 2010, at 9:06 AM, Bob Friesenhahn wrote:
> On Mon, 10 May 2010, Thomas Tornblom wrote:
>>
>> Sorry, but this is incorrect.
>>
>> Solaris (2 if you will) does indeed swap processes in case normal paging is
>> deemed insufficient.
>>
>> See the chapters on Soft and Hard swapping in:
>
> "bh" == Brandon High writes:
bh> The drive should be on the same USB port because the device
bh> path is saved in the zpool.cache. If you removed the
bh> zpool.cache, it wouldn't matter where the drive was plugged
bh> in.
I thought it was supposed to go by devid.
There was
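For reference, the cached config records both the device path and the devid; a way to check (pool name is a placeholder):
  zdb -C tank | egrep "path|devid"   # print path and devid fields from the cached config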
- "Jim Horng" skrev:
> zfs send tank/export/projects/project1...@today | zfs receive -d
> mpool
Perhaps zfs send -R is what you're looking for...
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
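A sketch of the recursive variant Roy suggests, assuming a snapshot named @today exists on every descendant (dataset names follow the earlier posts):
  zfs snapshot -r tank/export/projects/project1_nb@today
  # -R sends the dataset, its descendants, their properties and all snapshots up to @today
  zfs send -R tank/export/projects/project1_nb@today | zfs receive -d mpool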
It sounds like you are looking for AVS.
Consider a replication scenario where A is primary and B, secondary
and A fails. Say you get A up again on Monday AM, but you are unable
to summarily shut down B to bring A back online until Friday evening.
During that whole time, you will not have a cu
On Sun, May 9, 2010 at 11:16 AM, Jim Horng wrote:
> zfs send tank/export/projects/project1...@today | zfs receive -d mpool
This won't get any snapshots before @today, which may lead to the
received size being smaller.
I've also noticed that different pool types (e.g. raidz vs. mirror) can
lead sl
- "Brandon High" skrev:
> On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli wrote:
> > I am looking at using 8K block size on the zfs volume.
>
> 8k is the default for zvols.
So with a 1TB zvol with default blocksize, dedup is done on 8k blocks? If so,
some 32 gigs of memory (or l2arc) will be
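Rough arithmetic behind that figure, assuming the commonly quoted ~320 bytes of in-core DDT per unique block (the exact per-entry size varies):
  # 1 TiB / 8 KiB blocks = 134,217,728 entries; at ~320 bytes each that is ~40 GiB
  echo $(( (1024 * 1024 * 1024 * 1024 / 8192) * 320 / (1024 * 1024 * 1024) ))   # prints 40
At roughly 250 bytes per entry the same calculation lands near the 32 GB figure above.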
On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli wrote:
> I am looking at using 8K block size on the zfs volume.
8k is the default for zvols.
> I was looking at the comstar iscsi settings and there is also a block size
> configuration, which defaults to 512 bytes. That would make me believe that
> all
On Mon, May 10 at 9:08, Erik Trimble wrote:
Abhishek Gupta wrote:
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created
raidz2 with a few slices on a single disk. I was expecting a good
read/write performance but I got the speed of 12-15MBps.
How can I enhance the read/write
Abhishek Gupta wrote:
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created
raidz2 with a few slices on a single disk. I was expecting a good
read/write performance but I got the speed of 12-15MBps.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
You ab
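The usual advice here (a hedged sketch, device names are placeholders) is that raidz across slices of a single disk just multiplies seeks on the same spindle, whereas a whole-disk layout spreads the I/O:
  zpool create tank raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0   # one whole disk per vdev member
  zpool iostat -v tank 5                                 # confirm I/O is spread across members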
On Mon, 10 May 2010, Thomas Tornblom wrote:
Sorry, but this is incorrect.
Solaris (2 if you will) does indeed swap processes in case normal paging is
deemed insufficient.
See the chapters on Soft and Hard swapping in:
http://books.google.com/books?id=r_cecYD4AKkC&pg=PA189&lpg=PA189&dq=solar
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created raidz2
with a few slices on a single disk. I was expecting good read/write
performance but only got about 12-15 MB/s.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
On Mon, May 10, 2010 at 7:57 AM, Roy Sigurd Karlsbakk
wrote:
> Hi all
>
> It seems that if using zfs, the usual tools like vmstat, sar, top etc are
> quite worthless, since zfs i/o load is not reported as iowait etc. Are there
> any plans to rewrite the old performance monitoring tools or the z
On May 10, 2010, at 12:16 AM, Roy Sigurd Karlsbakk wrote:
> - "Michael Schuster" skrev:
>
>> On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
>>> Hi all
>>>
>>> It seems that if using zfs, the usual tools like vmstat, sar, top
>> etc are quite worthless, since zfs i/o load is not reported as
Ian Collins wrote:
> Run cfgadm -c configure on the unconfigured Ids, see the man page for
> the gory details.
IF the BIOS is OK ;-)
I have a problem with a DELL PC: If I disable the other SATA ports, Solaris
is unable to detect new drives (Linux does). If I enable other SATA ports,
the D
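A sketch of the cfgadm sequence Ian describes above (the attachment point ID is a placeholder):
  cfgadm -al                    # list attachment points; look for sata ports marked unconfigured
  cfgadm -c configure sata1/3   # bring the newly attached drive online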
- "Roy Sigurd Karlsbakk" skrev:
> - "charles" skrev:
>
> > Hi,
> >
> > This thread refers to Solaris 10, but it was suggested that I post
> it
> > here as ZFS developers may well be more likely to respond.
> >
> >
> http://forums.sun.com/thread.jspa?threadID=5438393&messageID=1098650
- "charles" skrev:
> Hi,
>
> This thread refers to Solaris 10, but it was suggested that I post it
> here as ZFS developers may well be more likely to respond.
>
> http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
>
> Basically after about 1000 ZFS filesystem c
Yes, I have recently tried the userquota option (one ZFS filesystem with
60,000 quotas and 60,000 ordinary 'mkdir' home directories within), and this
works fine, but you end up with less granularity of snapshots.
It does seem odd that after only 1000 ZFS filesystems there is a slowdown. It
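For reference, the userquota approach looks roughly like this (names are placeholders):
  zfs set userquota@alice=10G tank/home   # per-user quota on the single shared filesystem
  mkdir /tank/home/alice                  # ordinary directory instead of a child dataset
  zfs userspace tank/home                 # per-user usage/quota report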
On 10/05/2010 13:35, P-O Yliniemi wrote:
Darren J Moffat wrote 2010-05-10 10:58:
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient
charles wrote:
>
> Basically after about 1000 ZFS filesystem creations the creation time slows
> down to around 4 seconds, and gets progressively worse.
>
You can speed up the process by initially setting the mountpoint to 'legacy'.
It's not the creation that takes that much time, it's mounting
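A sketch of that workaround (pool and paths are placeholders): create with mountpoint=legacy so each zfs create skips the mount, then assign real mountpoints in a second pass:
  i=1
  while [ $i -le 1000 ]; do
      zfs create -o mountpoint=legacy tank/home/user$i
      i=`expr $i + 1`
  done
  # later: setting a real mountpoint mounts the dataset
  i=1
  while [ $i -le 1000 ]; do
      zfs set mountpoint=/export/home/user$i tank/home/user$i
      i=`expr $i + 1`
  done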
Darren J Moffat wrote 2010-05-10 10:58:
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient dedup is. The problem is that I've found
n
On 10 May, 2010 - charles sent me these 0,8K bytes:
> Hi,
>
> This thread refers to Solaris 10, but it was suggested that I post it here as
> ZFS developers may well be more likely to respond.
>
> http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
>
> Basically afte
Gabriele Bulfon wrote:
Hello,
I have a situation where a zfs file server holding lots of graphic files cannot
be backed up daily with a full backup.
My idea was initially to run a full backup on Sunday through the LTO library on
a set of dedicated tapes, then have an incremental backup run on daily
Hello,
I have a situation where a zfs file server holding lots of graphic files cannot
be backed up daily with a full backup.
My idea was initially to run a full backup on Sunday through the LTO library on
a set of dedicated tapes, then have an incremental backup run on daily tapes.
Brainstorming on
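One way to sketch it with snapshots (dataset and file names are placeholders): a full stream from Sunday's snapshot, then daily incrementals between consecutive snapshots:
  zfs snapshot tank/graphics@sun
  zfs send tank/graphics@sun > /backup/full-sun.zfs                        # weekly full
  zfs snapshot tank/graphics@mon
  zfs send -i tank/graphics@sun tank/graphics@mon > /backup/incr-mon.zfs   # daily incremental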
Hi,
This thread refers to Solaris 10, but it was suggested that I post it here as
ZFS developers may well be more likely to respond.
http://forums.sun.com/thread.jspa?threadID=5438393&messageID=10986502#10986502
Basically after about 1000 ZFS filesystem creations the creation time slows
down t
On 05/ 7/10 10:07 PM, Bill McGonigle wrote:
On 05/07/2010 11:08 AM, Edward Ned Harvey wrote:
I'm going to continue encouraging you to stay "mainstream,"
because what
people do the most is usually what's supported the best.
If I may be the contrarian, I hope Matt keeps experimenting with th
Hi Eric,
> Problem is the OP is mixing client 4k drives with 512b drives.
How do you come to that assessment?
Here's what I have:
Ap_Id                  Information
sata1/1::dsk/c7t1d0    Mod: WDC WD10EADS-00L5B1 FRev: 01.01A01
sata1/2::dsk/c7t2d0    Mod: WDC
On 08/05/2010 21:45, P-O Yliniemi wrote:
I have noticed that dedup is discussed a lot in this list right now..
Starting to experiment with dedup=on, I feel it would be interesting in
knowing exactly how efficient dedup is. The problem is that I've found
no way of checking this per file system. I
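As far as I know the dedup counters are kept pool-wide rather than per file system; the usual places to look (pool name is a placeholder):
  zpool get dedupratio tank   # overall dedup ratio for the pool
  zpool list tank             # DEDUP column, on builds that have dedup
  zdb -DD tank                # DDT histogram (unique vs. duplicated blocks)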
erik.ableson said: "Just a quick comment for the send/recv operations, adding
-R makes it recursive so you only need one line to send the rpool and all
descendant filesystems. "
Yes, I know of the -R flag, but it doesn't seem to work with sending loose
snapshots to the backup pool. It obviously
2010-05-10 05:58, Bob Friesenhahn wrote:
On Sun, 9 May 2010, Edward Ned Harvey wrote:
So, Bob, rub it in if you wish. ;-) I was wrong. I knew the behavior in
Linux, which Roy seconded as "most OSes," and apparently we both
assumed the
same here, but that was wrong. I don't know if Solaris and o
- "Michael Schuster" skrev:
> On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
> > Hi all
> >
> > It seems that if using zfs, the usual tools like vmstat, sar, top
> etc are quite worthless, since zfs i/o load is not reported as iowait
> etc. Are there any plans to rewrite the old performance m
> Just a quick comment for the send/recv operations, adding -R makes it
> recursive so you only need one line to send the rpool and all descendant
> filesystems.
Yes, I am aware of that, but it does not work when you are sending them loose
to an existing pool. Can't remember the error message b
On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
Hi all
It seems that if using zfs, the usual tools like vmstat, sar, top etc are quite
worthless, since zfs i/o load is not reported as iowait etc. Are there any
plans to rewrite the old performance monitoring tools or the zfs parts to allow
for
Hi all
It seems that when using zfs, the usual tools like vmstat, sar, top etc. are quite
worthless, since zfs i/o load is not reported as iowait etc. Are there any
plans to rewrite the old performance monitoring tools or the zfs parts to allow
for standard monitoring tools? If not, what other too