the drive quickly, to replace it? Or will you be
going... does the enclosure start at logical zero, left to right? Hrmmm.
Thanks
--
Brent Jones
[EMAIL PROTECTED]
as well.
Regards
Brent Jones
[EMAIL PROTECTED]
On Thu, Jul 31, 2008 at 11:56 PM, Ross Smith [EMAIL PROTECTED] wrote:
Hey Brent,
On the Sun hardware like the Thumper you do get a nice bright blue ready-to-remove
LED as soon as you issue the cfgadm -c unconfigure xxx
command. On other
concerned about bringing a 4500 into our environment :(
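For reference, the drive-offline step mentioned above usually looks something like this; a minimal sketch, and the controller/target IDs are hypothetical and will differ per system:
$ cfgadm -al                      # list attachment points and their current state
$ cfgadm -c unconfigure sata1/3   # offline the disk; on supported chassis the ready-to-remove LED lights
$ cfgadm -c configure sata1/3     # bring the replacement disk back online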
--
Brent Jones
[EMAIL PROTECTED]
handles it reasonably well (Explorer doesn't like large directories, but our
applications bypass that).
Any feedback would be appreciated!
Regards,
--
Brent Jones
[EMAIL PROTECTED]
property to off, run the following command.
$ zfs set sharenfs=off file_system/volume
To verify if the sharenfs property is set to off, run the following command.
$ zfs get sharenfs file_system/volume
--
Brent Jones
[EMAIL PROTECTED]
Story time!
--
Brent Jones
[EMAIL PROTECTED]
, and
the reps said it does not replicate. They may be mistaken, but I'm
hopeful they are correct.
Could this behavior have been changed recently on AVS to make
replication 'smarter' with ZFS as the underlying filesystem?
--
Brent Jones
[EMAIL PROTECTED
It'd been brought up a couple times in the past, but their information
is so vague it doesn't give a whole lot to discuss =/
--
Brent Jones
[EMAIL PROTECTED]
Do you have a lot of competing I/O's on the box which would slow down
the resilver?
--
Brent Jones
[EMAIL PROTECTED]
, but since they did not let
ZFS handle the RAID, and instead relied on another level, ZFS was not
able to correct the errors.
--
Brent Jones
[EMAIL PROTECTED]
Correct, the other side should be set read-only; that way nothing at
all is modified when the other host tries to zfs send.
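A minimal sketch of that, assuming a hypothetical dataset name on the receiving host:
$ zfs set readonly=on tank/backups   # receive side: nothing can locally modify the replica
$ zfs get readonly tank/backups      # confirm the property is in effect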
--
Brent Jones
[EMAIL PROTECTED]
to export awesomely large volumes!
Regards,
--
Brent Jones
[EMAIL PROTECTED]
!
--
Brent Jones
[EMAIL PROTECTED]
SFU NFS is often slow, but tunable, here is something you might find
handy to squeeze some speed out of it:
http://technet.microsoft.com/en-us/library/bb463205.aspx
HTH
--
Brent Jones
[EMAIL PROTECTED
There've been a couple of threads about this now; some tracked bug IDs/tickets:
6333409
6418042
66104157
If you wanna see the status
--
Brent Jones
[EMAIL PROTECTED
offering shares,
no triggering events could be correlated.
Since upgrading to newer builds, I haven't seen similar issues.
--
Brent Jones
[EMAIL PROTECTED]
dispel
any concerns about the contract...
--
Brent Jones
[EMAIL PROTECTED]
this bulk storage
environment.
Wish I could get my hands on a beta of this GUI...
--
Brent Jones
[EMAIL PROTECTED]
sucks
--
Brent Jones
[EMAIL PROTECTED]
.
I expect my X4540's to nearly fill 48TB (or more considering
compression), and taking 24 hours to transfer 100GB is, well, I could
do better on an ISDN line from 1995.
--
Brent Jones
[EMAIL PROTECTED]
Are you running compression? I see this behavior with heavy loads,
and GZIP compression enabled.
What does 'zfs get compression' say?
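A quick way to check, with placeholder pool/dataset names:
$ zfs get -r compression tank        # compression setting for every dataset in the pool
$ zfs get compressratio tank/data    # how much compression is actually being achieved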
--
Brent Jones
Did you file a bug report? If so, can you link it so we can see the
resolution (if one comes, even)?
--
Brent Jones
[EMAIL PROTECTED]
perfectly fine.
I settled on the new name and continued on, and have not noticed the
problem again.
But seeing this post, I'll capture as much data as I can if it happens again.
--
Brent Jones
br...@servuhome.net
to cause it not to accept
any more snapshots?
Thank you in advance
--
Brent Jones
br...@servuhome.net
On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
Hi Brent,
Brent Jones wrote:
I am using 2008.11 with the Timeslider automatic snapshots, and using
it to automatically send snapshots to a remote host every 15 minutes.
Both sides are X4540's, with the remote
...:(
--
Brent Jones
br...@servuhome.net
On Mon, Jan 5, 2009 at 4:29 PM, Brent Jones br...@servuhome.net wrote:
On Mon, Jan 5, 2009 at 2:50 PM, Richard Elling richard.ell...@sun.com wrote:
Correlation question below...
Brent Jones wrote:
On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
Hi Brent
a real problem with zfs send/recv.
Trying to send any meaningfully sized snapshot from, say, an X4540 takes
up to 24 hours, for as little as a 300GB change rate.
--
Brent Jones
br...@servuhome.net
On Wed, Jan 7, 2009 at 12:36 AM, Andrew Gabriel andrew.gabr...@sun.com wrote:
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed in
regards to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related
to a
case I opened with Sun regarding this.
--
Brent Jones
br...@servuhome.net
On Sat, Jan 17, 2009 at 2:46 PM, JZ j...@excelsioritsolutions.com wrote:
I don't know if this email is even relevant to the list discussion. I will
leave that conclusion to the smart mail server policy here.
*cough*
--
Brent Jones
br...@servuhome.net
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types
--
Brent Jones
br...@servuhome.net
On Tue, Jan 27, 2009 at 5:47 PM, Richard Elling
richard.ell...@gmail.com wrote:
comment far below...
Brent Jones wrote:
On Mon, Jan 26, 2009 at 10:40 PM, Brent Jones br...@servuhome.net wrote:
--
Brent Jones
br...@servuhome.net
I found some insight into the behavior I observed
. It'll buffer writes for a bit before committing to disk. Then,
when it's time to commit to disk, it realizes the disk has failed, and
from then on enters those failmode conditions (wait, continue, panic, ?).
Could this be the case?
http://blogs.sun.com/roch/date/20080514
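The failmode behavior mentioned above is a per-pool property; a short sketch with a hypothetical pool name:
$ zpool get failmode tank            # the default is 'wait'
$ zpool set failmode=continue tank   # return EIO on new writes instead of blocking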
--
Brent Jones
br
as well, what type of workload do you
have, and how much performance increase did you see by disabling the
write caches?
--
Brent Jones
br...@servuhome.net
, and see if I can't get an
engineer to look at this.
--
Brent Jones
br...@servuhome.net
to handle ZFS snapshots/backups, you could
issue an SMF command to stop the service before taking the snapshot.
Or at the very minimum, perform an SQL dump of the DB so you at least
have a consistent full copy of the DB as a flat file in case you can't
stop the DB service.
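A rough sketch of that ordering; the SMF service name and dataset below are hypothetical:
$ svcadm disable -t svc:/application/database/mydb   # -t stops the service only until next boot
$ zfs snapshot tank/db@nightly
$ svcadm enable svc:/application/database/mydb
# or, at minimum, dump to a flat file first, e.g.: mysqldump mydb > /tank/db/mydb.sql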
--
Brent Jones
br
in a filesystem agnostic way, you'll be a
wealthy person indeed.
--
Brent Jones
br...@servuhome.net
doing, the load is very random I/O, and heavy, but little
progress appears to be happening.
I only have about 50 filesystems, and just a handful of snapshots for
each filesystem.
Thanks!
--
Brent Jones
br...@servuhome.net
On Wed, Mar 18, 2009 at 11:28 AM, Miles Nordin car...@ivy.net wrote:
bj == Brent Jones br...@servuhome.net writes:
bj I only have about 50 filesystems, and just a handful of
bj snapshots for each filesystem.
there were earlier stories of people who had imports taking hours to
complete
/zfs_automatic_snapshots_in_nv
Those are some good resources; from those, you can put something together
that is tailored to your environment.
--
Brent Jones
br...@servuhome.net
On Sat, Mar 28, 2009 at 5:40 PM, Fajar A. Nugraha fa...@fajar.net wrote:
On Sun, Mar 29, 2009 at 3:40 AM, Brent Jones br...@servuhome.net wrote:
I have since modified some scripts out there, and rolled them into my
own, you can see it here at pastebin.com:
http://pastebin.com/m3871e478
systems. But afaik they would not
show up in zpool status.
For example:
zfs set note:purpose="This file system is important" somefilesystem
zfs get note:purpose somefilesystem
Maybe that helps...
--
Brent Jones
br...@servuhome.net
to
local, but I experienced it doing local-to-remote send/recv.
Not sure of the best way to handle moving data around when space is
tight, though...
--
Brent Jones
br...@servuhome.net
thoughts?
--
Brent Jones
br...@servuhome.net
(and
hangs the server when trying to restart).
It appears to be the receiving end choking on a snapshot, and not
allowing any more to run.
Once one snapshot freezes, running another (for a different file
system) zfs send/recv will just stall, with another un-killable zfs
receive.
--
Brent Jones
br
On Fri, Jun 5, 2009 at 2:49 PM, Rick Romero r...@havokmon.com wrote:
On Fri, 2009-06-05 at 14:45 -0700, Brent Jones wrote:
On Fri, Jun 5, 2009 at 2:28 PM, Mike La Spina mike.lasp...@laspina.ca
wrote:
Hi,
I have replications between hosts and they are working fine with zfs
send/recv's
On Fri, Jun 5, 2009 at 3:25 PM, Ian Collins i...@ianshome.com wrote:
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving system, at init 6, the system will just
On Fri, Jun 5, 2009 at 4:20 PM, Tim Haley tim.ha...@sun.com wrote:
Brent Jones wrote:
Hello all,
I had been running snv_106 for about 3 or 4 months on a pair of X4540's.
I would ship snapshots from the primary server to the secondary server
nightly, which was working really well.
However
On Sun, Jun 7, 2009 at 3:50 AM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Tim Haley wrote:
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving
(have a support contract),
but OpenSolaris doesn't seem to be well understood by the support
folks yet, so not sure how far it will get.
--
Brent Jones
br...@servuhome.net
I can reproduce this 100% by sending about 6 or more snapshots at once.
Here is some output that JBK helped me put
On Mon, Jun 8, 2009 at 9:38 PM, Richard Lowe richl...@richlowe.net wrote:
Brent Jones br...@servuhome.net writes:
I've had similar issues with similar traces. I think you're waiting on
a transaction that's never going to come.
I thought at the time that I was hitting:
CR 6367701 hang
on pkg.opensolaris.org/dev ?
--
Brent Jones
br...@servuhome.net
, or tests, I can run them on my
X4540's and see how it goes.
Thanks
--
Brent Jones
br...@servuhome.net
I checked this morning, and 117 is available now
--
Brent Jones
br...@servuhome.net
Confirming this issue is fixed on build 117.
Snapshots are significantly faster as well. My average transfer speed
went
and a bit
more even performance, but it is still there.
--
Brent Jones
br...@servuhome.net
?)
that is optimized for 'background' tasks, or 'foreground'.
Beyond that, I will give this tuneable a shot and see how it impacts
my own workload.
Thanks!
--
Brent Jones
br...@servuhome.net
?threadID=104852&tstart=120
--
Brent Jones
br...@servuhome.net
levels of fail here; to blame ZFS seems misplaced, and the
subject of this thread seems especially inflammatory.
--
Brent Jones
br...@servuhome.net
Looking at this external array by HP:
http://h18006.www1.hp.com/products/storageworks/600mds/index.html
70 disks in 5U, which could probably be configured in JBOD.
Has anyone attempted to connect this to a box running OpenSolaris to
create a 70-disk pool?
--
Brent Jones
br...@servuhome.net
, just not in any reasonable
timeframe.
--
Brent Jones
br...@servuhome.net
).
The only resolution is to never use zfs destroy, or simply
wait it out. It will eventually finish, just not in any reasonable
timeframe.
--
Brent Jones
br...@servuhome.net
Correction, looks like my bug is 6855208
--
Brent Jones
br...@servuhome.net
-A replication
(mirror half of A to B, and half of B to A for paired redundancy).
I'll post that version up in a few weeks when I clean it up a little.
Credits go to Constantin Gonzalez for inspiration and source for parts
of my script.
http://blogs.sun.com/constantin/
--
Brent Jones
br
on OpenSolaris 2008.11, which did have modifiable
snapshot properties.
Can you possibly upgrade your pool version?
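If an upgrade is an option, the usual sequence is something like this (pool name hypothetical); note that once upgraded, the pool can no longer be imported by older builds:
$ zpool upgrade -v      # list the on-disk versions this build supports
$ zpool upgrade tank    # upgrade the pool to the latest supported version
$ zfs upgrade -r tank   # upgrade the filesystems as well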
--
Brent Jones
br...@servuhome.net
, or they
don't intend to support any advanced monitoring whatsoever.
Sad, really.. as my $900 Dell and HP servers can send SMS, Jabber
messages, SNMP traps, etc, on ANY IPMI event, hardware issue, and what
have you without any tinkering or excuses.
--
Brent Jones
br...@servuhome.net
X4540's, 64GB of ECC memory, 1TB drives.
Rolling back to snv_118 does not reveal any checksum errors, only snv_121
So, the commodity hardware here doesn't hold up, unless Sun isn't
validating their equipment (not likely, as these servers have had no
hardware issues prior to this build)
--
Brent
as far as I would trust these systems - MP3s,
backups of photos of which I already maintain a couple of copies.
--
Brent Jones
br...@servuhome.net
.
Regardless of filesystem, I'd suggest splitting your directory
structure into a hierarchy. It makes sense even just for cleanliness.
--
Brent Jones
br...@servuhome.net
trying
to mount into.
--
Brent Jones
br...@servuhome.net
as usual for ZFS developments?
--
Brent Jones
br...@servuhome.net
the blocks as soon as Windows needs
space, and Windows will eventually not need that space again.
Is there a way to reclaim unused space on a thin-provisioned iSCSI target?
--
Brent Jones
br...@servuhome.net
:)
--
Brent Jones
br...@servuhome.net
On Wed, Nov 18, 2009 at 4:09 PM, Brent Jones br...@servuhome.net wrote:
On Tue, Nov 17, 2009 at 10:32 AM, Ed Plese e...@edplese.com wrote:
You can reclaim this space with the SDelete utility from Microsoft.
With the -c option it will zero any free space on the volume. For
example:
C
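The example is cut off above; presumably it runs SDelete against the backing volume, along these lines (drive letter hypothetical, and assuming the -c behavior of the SDelete version of that era):
C:\> sdelete -c c:   # zero free space on C: so the unused blocks can be reclaimed (e.g. compressed away) on the ZFS side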
of Amanda from Richard as well though,
pretty flexible solution. And it can back up much more than just local
ZFS snapshots if that would be a benefit to you as well.
--
Brent Jones
br...@servuhome.net
/bugdatabase/view_bug.do?bug_id=6855208
I'll escalate since I have a support contract. But yes, I see this as
a serious bug; I thought my machine had locked up entirely as well. It
took about 2 days to finish a destroy on a volume about 12TB in size.
--
Brent Jones
br...@servuhome.net
prevents me from rolling back to snv_127,
which would send at many tens of megabytes a second.
This is on an X4540, dual quad cores, and 64GB RAM.
Anyone else seeing similar issues?
--
Brent Jones
br...@servuhome.net
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sat, 12 Dec 2009, Brent Jones wrote:
I've noticed some extreme performance penalties simply by using snv_128
Does the 'zpool scrub' rate seem similar to before? Do you notice any read
performance
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones br...@servuhome.net wrote:
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sat, 12 Dec 2009, Brent Jones wrote:
I've noticed some extreme performance penalties simply by using snv_128
Does the 'zpool scrub
On Sat, Dec 12, 2009 at 8:14 PM, Brent Jones br...@servuhome.net wrote:
On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones br...@servuhome.net wrote:
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sat, 12 Dec 2009, Brent Jones wrote:
I've noticed some
.
--
Brent Jones
br...@servuhome.net
/null at close to 800MB/sec (42 drives in 5-6 disk vdevs, RAID-Z)
Something must've changed in either SSH, or the ZFS receive bits to
cause this, but sadly since I upgraded my pool, I cannot roll back
these hosts :(
--
Brent Jones
br...@servuhome.net
manifested after snv_128, and seemingly
only affect ZFS receive speeds. Local pool performance is still very
fast.
--
Brent Jones
br...@servuhome.net
is thrown out the
window in favor of getting all that data on disk.
I was on a watch list for a ZFS I/O scheduler bug with my paid Solaris
support; I'll try to find that bug number, but I believe some
improvements were made in 129 and 130.
--
Brent Jones
br...@servuhome.net
On Fri, Dec 25, 2009 at 9:56 PM, Tim Cook t...@cook.ms wrote:
On Fri, Dec 25, 2009 at 11:43 PM, Brent Jones br...@servuhome.net wrote:
Hang on... if you've got 77 concurrent threads going, I don't see how
that's
a sequential I/O load. To the backend storage it's going to look
OpenSolaris, so I
doubt I will get very far.
Cross-posting to zfs-discuss also, as others may have seen this and
know of a solution/workaround.
--
Brent Jones
br...@servuhome.net
for this feature to mature.
--
Brent Jones
br...@servuhome.net
On Sun, Dec 27, 2009 at 1:35 PM, Brent Jones br...@servuhome.net wrote:
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach stephan.bud...@jvm.de
wrote:
Brent,
I had known about that bug a couple of weeks ago, but that bug has been
filed against v111 and we're at v130. I have also searched
by default I believe is 128K
--
Brent Jones
br...@servuhome.net
requires such sacrifice.
--
Brent Jones
br...@servuhome.net
about disabling the ZIL? Since
this sounds like transient data to begin with, any risks would be
pretty low I'd imagine.
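On builds of that era the ZIL was disabled globally with a kernel tunable (a per-dataset sync property arrived in later builds); a sketch, not a recommendation, with a hypothetical dataset name:
# in /etc/system, then reboot:
set zfs:zil_disable = 1
# on newer builds, per dataset instead:
$ zfs set sync=disabled tank/scratch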
--
Brent Jones
br...@servuhome.net
side doesn't differ, i.e. has the
same current snapshots as the sender side. If the replication fails
for some reason, unlock both sides with 'zfs set'.
What problems are you experiencing with incrementals?
--
Brent Jones
br...@servuhome.net
this
issue. I have not had time to narrow down any causes, but I did find
one bug report that found some TCP test scenarios failed during one of
the builds, but I am unable to find that CR at this time.
--
Brent Jones
br...@servuhome.net
On Tue, Feb 2, 2010 at 7:41 PM, Brent Jones br...@servuhome.net wrote:
On Tue, Feb 2, 2010 at 12:05 PM, Arnaud Brand t...@tib.cc wrote:
Hi folks,
I'm having (as the title suggests) a problem with zfs send/receive.
Command line is like this:
pfexec zfs send -Rp tank/t...@snapshot | ssh
ZIL performance issues? Is the write cache enabled on the LUNs?
--
Brent Jones
br...@servuhome.net
On Wed, Feb 10, 2010 at 4:05 PM, Brent Jones br...@servuhome.net wrote:
On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas geekyth...@gmail.com wrote:
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
-marc
On 2/10/10, Kjetil
it to
perform.
Do you have an SSD log device? If not, try disabling the ZIL
temporarily to see if that helps. Your workload will likely benefit
from a log device.
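Adding a dedicated log device is a one-liner; a sketch with a hypothetical device name:
$ zpool add tank log c4t2d0   # dedicate an SSD as a separate intent-log (slog) device
$ zpool status tank           # the device now shows up under a 'logs' section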
--
Brent Jones
br...@servuhome.net
to see that level of performance at all.
--
Brent Jones
br...@servuhome.net
.
Guess they've been seeing a lot of issues, and regardless of whether it's
'supported' or not, he said not to use it.
--
Brent Jones
br...@servuhome.net