On 23/03/10 01:23 PM, Bart Nabbe wrote:
> All,
> I did some digging and I was under the impression that the
> mr_sas driver was to support the LSISAS2004 HBA controller
> from LSI.
> I did add the pci id to the driver alias for mr_sas, but
> then the driver still showed up as unattached (see below)
Hi
I now have two pools:
rpool 2-way mirror (PATA)
data 4-way raidz2 (SATA)
If I access the data pool over the network (SMB, NFS, FTP, SFTP, etc.)
I get only 200 KB/s at most.
Compared to rpool, which gives XX MB/s to and from the network, it is slow.
Any ideas what the reasons might be and
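(One quick way to see whether the disks or the network are the bottleneck is to
watch pool throughput while a transfer is running; a rough sketch, using the pool
name above:

zpool iostat -v data 5

If the pool pushes far more than 200 KB/s locally, the problem is more likely in the
network or the sharing service than in ZFS itself.)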
On Mar 22, 2010, at 4:21 PM, Frank Middleton wrote:
> On 03/21/10 03:24 PM, Richard Elling wrote:
>
>> I feel confident we are not seeing a b0rken drive here. But something is
>> clearly amiss and we cannot rule out the processor, memory, or controller.
>
> Absolutely no question of that, other
All,
I did some digging and I was under the impression that the mr_sas driver was to
support the LSISAS2004 HBA controller from LSI.
I did add the pci id to the driver alias for mr_sas, but then the driver still
showed up as unattached (see below).
Did I miss something, or was my assumption that
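(For reference, the usual way to add such an alias on Solaris is update_drv; the
vendor/device ID below is only a placeholder, not the real LSISAS2004 ID:

update_drv -a -i '"pciexVVVV,DDDD"' mr_sas   # placeholder vendor,device ID
devfsadm -i mr_sas                           # rebuild device nodes and retry attach

This is a sketch of the mechanism only, not a claim that mr_sas supports that chip.)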
I am trying to coordinate properties and data between 2 file servers.
on file server 1 I have:
zfs get all zfs52/export/os/sles10sp2
NAME                       PROPERTY  VALUE       SOURCE
zfs52/export/os/sles10sp2  type      filesystem
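(One low-tech way to compare the full property lists of the two servers is to diff
the machine-parsable output; the second hostname below is a placeholder:

zfs get -H -o property,value,source all zfs52/export/os/sles10sp2 > /tmp/server1.props
ssh fileserver2 zfs get -H -o property,value,source all zfs52/export/os/sles10sp2 > /tmp/server2.props
diff /tmp/server1.props /tmp/server2.props

Anything whose SOURCE is "local" on only one side is a property that was set by hand
on that server.)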
> > You can easily determine if the snapshot has changed by checking the
> > output of zfs list for the snapshot.
>
> Do you mean to just grep it out of the output of
>
> zfs list -t snapshot
I think the point is: You can easily tell how many MB changed in a
snapshot, and therefore you can ea
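(Concretely, the USED column of zfs list for a snapshot is the space unique to that
snapshot, which serves as a rough measure of how much changed. The dataset name below
is a placeholder:

zfs list -t snapshot -o name,used -r tank/home

A snapshot whose USED stays near zero usually indicates little or nothing has changed
since it was taken, although blocks shared between adjacent snapshots can hide some
changes.)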
On Thu, Mar 18, 2010 at 10:38:00PM -0700, Rob wrote:
> Can a ZFS send stream become corrupt when piped between two hosts
> across a WAN link using 'ssh'?
No. SSHv2 uses HMAC-MD5 and/or HMAC-SHA-1, depending on what gets
negotiated, for integrity protection. The chances of random on the wire
corr
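(The pipeline under discussion looks roughly like this; host and dataset names are
placeholders:

zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs receive -d backup

The ssh transport provides the integrity protection described above for everything
that flows through the pipe.)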
On 03/22/10 05:04 PM, Brandon High wrote:
On Mon, Mar 22, 2010 at 10:26 AM, Richard Elling
<richard.ell...@gmail.com> wrote:
NB. deduped streams should further reduce the snapshot size.
I haven't seen a lot of discussion on the list regarding send dedup,
but I understand it'll use
On 03/21/10 03:24 PM, Richard Elling wrote:
I feel confident we are not seeing a b0rken drive here. But something is
clearly amiss and we cannot rule out the processor, memory, or controller.
Absolutely no question of that, otherwise this list would be flooded :-).
However, the purpose of th
On Mon, Mar 22, 2010 at 10:26 AM, Richard Elling
wrote:
> NB. deduped streams should further reduce the snapshot size.
>
I haven't seen a lot of discussion on the list regarding send dedup, but I
understand it'll use the DDT if you have dedup enabled on your dataset.
What's the process and penalt
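(On builds that support deduplicated send streams, the relevant flag is -D on
zfs send. A sketch, with host and dataset names as placeholders:

zfs send -D -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs receive -d backup

Whether the stream is built from the pool's DDT or computed on the fly is exactly
the question raised above.)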
On Mon, Mar 22, 2010 at 12:21 PM, Richard Elling
wrote:
> Yes, it is better. But still nowhere near platter speed. All it takes is
> one little seek...
>
True, dat. I find that scrubs start very slow (< 20MB/s) with the disks at
near-100% utilization. Towards the end of the scrub, speeds are u
On Mon, Mar 22, 2010 at 1:58 PM, Ian Collins wrote:
> On 03/23/10 09:34 AM, Harry Putnam wrote:
>
>> Oh, something I meant to ask... is there some standard way to tell
>> before calling for a snapshot, if the directory structure has changed
>> at all, other than aging I mean. Is there something
zfs list | grep '@'
zpool/f...@1154758          324G   -  461G  -
zpool/f...@1208482          6.94G  -  338G  -
zpool/f...@daily.netbackup  1.07G  -  344G  -
zpool/f...@1154758          1.77G  -  242G  -
zpool/f
Matt Cowger writes:
> This is totally doable, and a reasonable use of zfs snapshots - we
> do some similar things.
Good, thanks for the input.
> You can easily determine if the snapshot has changed by checking the
> output of zfs list for the snapshot.
Do you mean to just grep it out of the ou
> In other words, there is no case where multiple scrubs compete for the
> resources of a single disk because a single disk only participates in one pool.
Excellent point. However, the problem scenario was described as SAN. I can
easily imagine a scenario where some SAN administrator crea
> This may be a bit dimwitted since I don't really understand how
> snapshots work. I mean the part concerning COW (copy on write) and
> how it takes so little room.
COW and snapshots are very simple to explain. Suppose you're chugging along
using your filesystem, and then one moment, you tell t
On 03/23/10 09:34 AM, Harry Putnam wrote:
This may be a bit dimwitted since I don't really understand how
snapshots work. I mean the part concerning COW (copy on write) and
how it takes so little room.
But here I'm not asking about that.
It appears to me that the default snapshot setup shares
This is totally doable, and a reasonable use of zfs snapshots - we do some
similar things.
You can easily determine if the snapshot has changed by checking the output of
zfs list for the snapshot.
--M
This may be a bit dimwitted since I don't really understand how
snapshots work. I mean the part concerning COW (copy on write) and
how it takes so little room.
But here I'm not asking about that.
It appears to me that the default snapshot setup shares some aspects
of a vcs (version control syste
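(The VCS-like aspect is mostly that older file versions stay reachable, read-only,
under the hidden .zfs directory of each filesystem. Paths and snapshot names below
are placeholders:

ls /tank/home/.zfs/snapshot/
cp /tank/home/.zfs/snapshot/monday/report.txt ./report.txt.monday

There is no merge or diff machinery, of course; it is closer to automatic versioned
backups than to a real VCS.)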
On Mar 22, 2010, at 11:33 AM, Bill Sommerfeld wrote:
> On 03/22/10 11:02, Richard Elling wrote:
>> Scrub tends to be a random workload dominated by IOPS, not bandwidth.
>
> you may want to look at this again post build 128; the addition of
> metadata prefetch to scrub/resilver in that build appear
On 03/22/10 11:02, Richard Elling wrote:
> Scrub tends to be a random workload dominated by IOPS, not bandwidth.
you may want to look at this again post build 128; the addition of
metadata prefetch to scrub/resilver in that build appears to have
dramatically changed how it performs (largely for th
I will be out of the office starting 22/03/2010 and will not return until
06/04/2010.
Hello,
I am currently working on a project and out of the office. I will be
checking my messages twice a day but may be unavailable to follow up on your
requests.
If the matter requires immediate attention pl
On Mar 22, 2010, at 10:36 AM, Svein Skogen wrote:
> On 22.03.2010 18:10, Richard Elling wrote:
>> On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
>>
>>> On 22.03.2010 13:54, Edward Ned Harvey wrote:
> IIRC it's "zpool scrub", and last time I checked, the zpool command
> exited (with statu
On 22.03.2010 18:10, Richard Elling wrote:
On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's "zpool scrub", and last time I checked, the zpool command
exited (with status 0) as soon as it had started the scrub. Your command
would start _AL
On Mar 19, 2010, at 1:28 PM, Richard Jahnel wrote:
> They way we do this here is:
>
> zfs snapshot voln...@snapnow
> # code to break on error and email not shown
> zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 > file
> # code to break on error and email not shown
> scp
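(A minimal sketch of the rest of that cycle, assuming the compressed stream file is
copied across and received on the far side; names and paths are placeholders:

zfs send -i tank/vol@snapbefore tank/vol@snapnow | pigz -p4 -1 > /backup/vol.incr.gz
scp /backup/vol.incr.gz backuphost:/backup/
ssh backuphost 'pigz -dc /backup/vol.incr.gz | zfs receive tank/vol'

The receive only succeeds if the destination already has the @snapbefore snapshot,
which is what makes the error checking between each stage worthwhile.)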
On Mar 22, 2010, at 7:30 AM, Svein Skogen wrote:
> On 22.03.2010 13:54, Edward Ned Harvey wrote:
>>> IIRC it's "zpool scrub", and last time I checked, the zpool command
>>> exited (with status 0) as soon as it had started the scrub. Your command
>>> would start _ALL_ scrubs in parallel as a re
Thank you to all who responded. This response in particular was very helpful
and I think I will stick with my current zpool configuration (choice "a" if
you're reading below). I primarily host VMware virtual machines over NFS from
this server's predecessor and this server will be doing the same
On Sat, March 20, 2010 07:31, Chris Gerhard wrote:
> Up to a point. zfs send | zfs receive does make a very good backup scheme
> for the home user with a moderate amount of storage. Especially when the
> entire backup will fit on a single drive, which I think would cover the
> majority of home
On 22.03.2010 16:24, Cooper Hubbell wrote:
I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a
combination of reasons: lower power, better performance than
comparably sized 3.5" drives, and generally lower capacities, meaning
resilver times are smaller. They're a bit more $/GB, but
Cooper Hubbell wrote:
Regarding the 2.5" laptop drives, do the inherent error detection
properties of ZFS subdue any concerns over a laptop drive's higher bit
error rate or rated MTBF? I've been reading about OpenSolaris and ZFS
for several months now and am incredibly intrigued, but have yet t
> I've moved to 7200RPM 2.5" laptop drives over 3.5" drives, for a
> combination of reasons: lower power, better performance than
> comparably sized 3.5" drives, and generally lower capacities, meaning
> resilver times are smaller. They're a bit more $/GB, but not a lot.
> If you can s
On 22.03.2010 13:54, Edward Ned Harvey wrote:
IIRC it's "zpool scrub", and last time I checked, the zpool command
exited (with status 0) as soon as it had started the scrub. Your command
would start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
So ei
Hi, thanks for all the replies.
I have found the real culprit.
The hard disk was faulty. I changed the hard disk, and now ZFS performance is
much better.
On 22/03/2010 12:50, Edward Ned Harvey wrote:
No, it is not a subdirectory; it is a filesystem mounted on top of the
subdirectory.
So unless you use NFSv4 with mirror mounts or an automounter, other NFS
versions will show you the contents of the directory and not the filesystem. It
doesn't matter if it is a
> Not being a CIFS user, could you clarify/confirm for me.. is this
> just a "presentation" issue, ie making a directory icon appear in a
> gooey windows explorer (or mac or whatever equivalent) view for people
> to click on? The windows client could access the .zfs/snapshot dir
> via typed pathn
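(For reference, the kind of typed path in question would look something like the
following; server, share, and snapshot names are placeholders:

\\fileserver\share\.zfs\snapshot\mysnap\somefile.txt

The directory icon being discussed is essentially a GUI presentation of that same
snapshot data.)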
> IIRC it's "zpool scrub", and last time I checked, the zpool command
> exited (with status 0) as soon as it had started the scrub. Your command
> would start _ALL_ scrubs in parallel as a result.
You're right. I did that wrong. Sorry 'bout that.
So either way, if there's a zfs property for s
> No, it is not a subdirectory; it is a filesystem mounted on top of the
> subdirectory.
> So unless you use NFSv4 with mirror mounts or an automounter, other NFS
> versions will show you the contents of the directory and not the filesystem. It
> doesn't matter if it is a zfs or not.
Ok, I learned something
On 22.03.2010 13:35, Edward Ned Harvey wrote:
Does cron happen to know how many other scrubs are running, bogging down
your IO system? If the scrub scheduling was integrated into zfs itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for f
> Does cron happen to know how many other scrubs are running, bogging down
> your IO system? If the scrub scheduling was integrated into zfs itself,
It doesn't need to.
Crontab entry: /root/bin/scruball.sh
/root/bin/scruball.sh:
#!/usr/bin/bash
for filesystem in filesystem1 filesystem2 filesy
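(Presumably the loop continues along these lines; a sketch only, with placeholder
pool names:

for filesystem in filesystem1 filesystem2 filesystem3
do
        zpool scrub $filesystem
done

Since zpool scrub returns as soon as the scrub has been started, this kicks off all
of the scrubs in parallel, which is the point made elsewhere in the thread.)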
On 22/03/2010 08:49, Andrew Gabriel wrote:
Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of
On 22/03/2010 01:13, Edward Ned Harvey wrote:
Actually ... Why should there be a ZFS property to share NFS, when you can
already do that with "share" and "dfstab?" And still the zfs property exists.
Probably because it is easy to create new filesystems and clone them; as
On 21.03.2010 01:25, Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of the reasons is that when w
On 22.03.2010 02:13, Edward Ned Harvey wrote:
Actually ... Why should there be a ZFS property to share NFS, when you can
already do that with "share" and "dfstab?" And still the zfs property exists.
Probably because it is easy to create new filesystems and clone them; as
NFS only works per f
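(A quick illustration of that convenience, with placeholder dataset names:

zfs set sharenfs=rw tank/export
zfs create tank/export/newproject    # inherits sharenfs=rw and is shared immediately

Because sharenfs is inherited, a newly created or cloned filesystem under a shared
parent is exported without touching dfstab.)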
Robert Milkowski wrote:
To add my 0.2 cents...
I think starting/stopping scrub belongs to cron, smf, etc. and not to
zfs itself.
However what would be nice to have is an ability to freeze/resume a
scrub and also limit its rate of scrubbing.
One of the reasons is that when working in SAN envi
Hi,
I agree 100% with Chris.
Notice the "on their own" part of the original post. Yes, nobody wants
to run zfs send or (s)tar by hand.
That's why Chris's script is so useful: you set it up, forget about it, and it gets
the job done for 80% of home users.
On another note, I was positively surprised by the