On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal anu...@kqinfotech.com wrote:
We at KQInfotech initially started on an independent port of ZFS to
Linux. When
I can think of two rather ghetto ways to go.
1. Write data, then set the read-only property. If you need to make updates,
cycle back to rw, write data, and set read-only again.
2. Write data, snapshot the fs, and expose the snapshot instead of the r/w file
system. Your mileage may vary depending on the
::arc
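Both ghetto options are one-liners. A minimal sketch (pool/dataset names hypothetical):

    zfs set readonly=on tank/export/data    # option 1: lock down after writing
    zfs set readonly=off tank/export/data   # ...cycle back to rw for updates
    zfs snapshot tank/export/data@publish   # option 2: expose the snapshot,
    # e.g. via /tank/export/data/.zfs/snapshot/publish, instead of the live fs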
Kind regards,
Jason
Replace it. Resilvering should not be as painful if all your disks are
functioning normally.
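A replacement is a single command (a sketch; pool and device names hypothetical):

    zpool replace tank c1t5d0 c1t6d0   # swap the failing disk for a new one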
HyperDrive5 = ACard ANS9010
I have personally been wanting to try one of these for some time as a
ZIL device.
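Attaching such a device as a dedicated log is straightforward (a sketch; pool
and device names hypothetical):

    zpool add tank log c3t0d0   # dedicated ZIL (slog) device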
On 12/29/2010 06:35 PM, Kevin Walker wrote:
You do seem to misunderstand ZIL.
ZIL is quite simply a write cache, and using a short-stroked rotating
drive is never going to provide a
-SAS-833TQ SAS backplane.
Have run ZFS with both Solaris and FreeBSD without a problem for a
couple years now. Had one drive go bad, but it was caught early by
running periodic scrubs.
--
Jason Fortezzo
forte...@mechanicalism.net
We have a running zpool with a 12 disk raidz3 vdev in it ... we gave ZFS the
full, raw disks ... all is well.
However, we built it on two LSI 9211-8i cards and we forgot to change from IR
firmware to IT firmware.
Is there any danger in shutting down the OS, flashing the cards to IT firmware,
from IR -> IT firmware on the fly? (existing zpool on LSI 9211-8i)
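For what it's worth, a conservative sequence would be (a sketch; pool name and
firmware file hypothetical, following LSI's documented sas2flash procedure):

    zpool export tank          # quiesce and detach the pool first
    sas2flash -o -e 6          # erase the IR flash per LSI's instructions
    sas2flash -f 2118it.bin    # write the IT firmware image
    zpool import tank          # the on-disk labels bring the pool back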
To: Jason Usher jushe...@yahoo.com
Cc: zfs-discuss@opensolaris.org
Date: Tuesday, July 17, 2012, 5:05 PM
Hi Jason,
I have done this in the past. (3x LSI 1068E -> IBM BR10i).
Your pool has no tie with the hardware used to host
Hi,
I have a ZFS filesystem with compression turned on. Does the used property
show me the actual data size, or the compressed data size ? If it shows me the
compressed size, where can I see the actual data size ?
I also wonder about checking status of dedupe - I created my pool without
--- On Fri, 9/21/12, Sašo Kiselkov skiselkov...@gmail.com wrote:
I have a ZFS filesystem with compression turned
on. Does the used property show me the actual data
size, or the compressed data size ? If it shows me the
compressed size, where can I see the actual data size ?
It shows
Oh, and one other thing ...
--- On Fri, 9/21/12, Jason Usher jushe...@yahoo.com wrote:
It shows the allocated number of bytes used by the filesystem, i.e.
after compression. To get the uncompressed size, multiply used by
compressratio (so for example if used=65G and compressratio
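Finishing that arithmetic with hypothetical values: used=65G and
compressratio=1.50x imply about 65 x 1.50 = 97.5G of logical data. Both
figures come straight from the dataset properties (dataset name hypothetical):

    zfs get used,compressratio tank/fs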
--- On Mon, 9/24/12, Richard Elling richard.ell...@gmail.com wrote:
I'm hoping the answer is yes - I've been looking but do not see it ...
none can hide from dtrace!
# dtrace -qn 'dsl_dataset_stats:entry {this->ds =
(dsl_dataset_t *)arg0; printf("%s\tcompressed size = %d\tuncompressed
size=%d\n",
--- On Tue, 9/25/12, Volker A. Brandt v...@bb-c.de wrote:
Well, he is telling you to run the dtrace program as root in one
window, and run the zfs get all command on a dataset in your pool
in another window, to trigger the dataset_stats variable to be filled.
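Concretely (a sketch; dataset name hypothetical):

    # window 1: leave the dtrace one-liner above running as root
    # window 2: read the dataset's properties to fire the probe
    zfs get all tank/mydataset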
none can hide from
--- 1 root root 10485760 Oct 23 21:38 /vol4/vol4
==== end summary of the problem ====
--
Jason Gallagher
Operating Systems Group Support Team
Sun Microsystems, Inc.
Working hours: Sat/Sun 6am - 6pm Mon/Tue 8am - 5pm MNT
Phone: Call 1-800-USA4-SUN
Hi Louwtjie,
Are you running FC or SATA-II disks in the 6140? How many spindles too?
Best Regards,
Jason
On 11/3/06, Louwtjie Burger [EMAIL PROTECTED] wrote:
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above
Hi there,
I've been comparing using the ZFS send/receive function over SSH to
simply scp'ing the contents of a snapshot, and have found that for me
the performance is 2x faster with scp.
Has anyone else noticed ZFS send/receive to be noticeably slower?
Best Regards,
Jason
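For reference, the two transfer styles being compared look like this (a
sketch; host, dataset, and snapshot names hypothetical):

    zfs send tank/fs@snap | ssh remotehost zfs receive backup/fs
    scp -r /tank/fs/.zfs/snapshot/snap remotehost:/backup/fs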
Listman,
What's the average size of your files? Do you have many file
deletions/moves going on? I'm not that familiar with how Perforce
handles moving files around.
XFS is bad at small files (worse than most file systems), as SGI
optimized it for larger files (> 64K). You might see a performance
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Thanks in advance,
J
On 11/28/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
On 11/28/06, Elizabeth Schwartz [EMAIL PROTECTED] wrote:
So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for
to work with.
Anyway, that's the long form of trying to convert from RAID-Z to RAID-1. Any help
is much appreciated.
Best Regards,
Jason
On 11/28/06, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Is it possible to non-destructively change RAID types in zpool while
the data remains on-line
, wouldn't be bad either when you're
creating a striped zpool. Even the best of us forget these things.
Best Regards,
Jason
On 12/4/06, Richard Elling [EMAIL PROTECTED] wrote:
Douglas Denny wrote:
On 12/4/06, James C. McPherson [EMAIL PROTECTED] wrote:
Is this normal behavior for ZFS?
Yes
Any chance we might get a short refresher warning when creating a
striped zpool? O:-)
Best Regards,
Jason
On 12/4/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem
on your gear using MPXIO is ridiculously simple. For us it
was as simple as enabling it on our T2000, the Opteron boxes just came
up.
Best Regards,
Jason
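For the archives, enabling MPxIO on Solaris 10 is typically (a sketch):

    stmsboot -e    # enable STMS/MPxIO, then reboot as prompted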
On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote:
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have
dual FC
it wasn't ZFS or the storage.
Does this help?
Best Regards,
Jason
On 12/6/06, Douglas Denny [EMAIL PROTECTED] wrote:
On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:
We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write
Hi Luke,
That's really strange. We did the exact same thing moving between two
hosts (export/import) and it took maybe 10 secs. How big is your
zpool?
Best Regards,
Jason
On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote:
Doug,
I should have posted the reason behind this posting.
I have 2
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent [EMAIL PROTECTED] wrote:
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices
Hi Luke,
That's terrific!
You know you might be able to tell ZFS which disks to look at. I'm not
sure. It would be interesting, if anyone with a Thumper could comment
on whether or not they see the import time issue. What are your load
times now with MPXIO?
Best Regards,
Jason
On 12/7/06
That's gotta be what it is. All our MySQL IOP issues have gone away
once we moved to RAID-1 from RAID-Z.
-J
On 12/7/06, Anton B. Rang [EMAIL PROTECTED] wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't
Hi Dale,
For what it's worth, the SX releases tend to be pretty stable. I'm not
sure if snv_52 has made a SX release yet. We ran for over 6 months on
SX 10/05 (snv_23) with no downtime.
Best Regards,
Jason
On 12/7/06, Dale Ghent [EMAIL PROTECTED] wrote:
On Dec 7, 2006, at 6:14 PM, Anton B
caused what you're seeing for us.
-J
On 12/7/06, Luke Schwab [EMAIL PROTECTED] wrote:
Jason,
I am no longer looking at not using STMS multipathing, because without STMS you
lose the binding to the array and I lose all transmissions between the server
and array. The binding does come back
-> /dev/dsk/c1t0d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 -> /dev/dsk/c1t16d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t17d0s1 -> /dev/dsk/c1t17d0s1
Then:
zpool import -d /mydkslist mypool
Hope that helps...
-r
Jason J. W. Williams
this better than I) you may want to disable
the ZIL or have your array ignore the write cache flushes that ZFS
issues.
Best Regards,
Jason
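At that time, disabling the ZIL meant a global /etc/system tunable (a sketch;
it applies to every pool and trades safety for speed, so use with care):

    set zfs:zil_disable = 1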
On 12/12/06, Kory Wheatley [EMAIL PROTECTED] wrote:
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC
disk array. Here's our plan
be happy to
add them!
Best Regards,
Jason
posted the instructions to hopefully help others in a similar boat.
I think this is a valuable discussion point though...at least for us. :-)
Best Regards,
Jason
On 12/15/06, Jeremy Teo [EMAIL PROTECTED] wrote:
The instructions will tell you how to configure the array to ignore
SCSI cache
or the DotHill version (SANnet II).
Best Regards,
Jason
.
Best Regards,
Jason
Hi Roch,
That sounds like a most excellent resolution to me. :-) I believe
Engenio devices support SBC-2. It seems to me making intelligent
decisions for end-users is generally a good policy.
Best Regards,
Jason
On 12/19/06, Roch - PAE [EMAIL PROTECTED] wrote:
Jason J. W. Williams writes
popping and reinserting triggered a rebuild of
the drive.
Best Regards,
Jason
On 12/19/06, Toby Thain [EMAIL PROTECTED] wrote:
On 19-Dec-06, at 2:42 PM, Jason J. W. Williams wrote:
I do see this note in the 3511 documentation: Note - Do not use a
Sun StorEdge 3511 SATA array to store single
Not sure. I don't see an advantage to moving off UFS for boot pools. :-)
-J
On 12/20/06, James C. McPherson [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic
Hi Naveen,
I believe the newer LSI cards work pretty well with Solaris.
Best Regards,
Jason
On 12/20/06, Naveen Nalam [EMAIL PROTECTED] wrote:
Hi,
This may not be the right place to post, but hoping someone here is running a
reliably working system with 12 drives using ZFS that can tell me
Just for what it's worth, when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), despite using
MPXIO, ZFS kernel panicked. Verified that all the LUNs were on the
correct controller when this occurred. It's not clear why ZFS thought
it lost a LUN, but it did.
for slow Fridays before x-mas, I have a bit of time to play in the
lab today.
--Tim
-Original Message-
From: Jason J. W. Williams [mailto:[EMAIL PROTECTED]
Sent: Friday, December 22, 2006 10:56 AM
To: Tim Cook
Cc: Shawn Joy; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re
A. In the case of the Windows server, Engenio's RDAC was
handling multipathing. Overall, not a big deal, I just wouldn't trust
the array to do a hitless commanded controller failover or firmware
upgrade.
-J
On 12/22/06, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Friday, December 22, 2006
Hi Brad,
I believe benr experienced the same/similar issue here:
http://www.opensolaris.org/jive/message.jspa?messageID=77347
If it is the same, I believe its a known ZFS/NFS interaction bug, and
has to do with small file creation.
Best Regards,
Jason
On 1/2/07, Brad Plecs [EMAIL PROTECTED
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what others had seen. Thank you in advance.
Best Regards,
Jason
provide a nice safety net.
Best Regards,
Jason
On 1/3/07, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Hello All,
I was curious if anyone had run a benchmark on the IOPS performance of
RAIDZ2 vs RAID-10? I'm getting ready to run one on a Thumper and was
curious what
from the RAID-10 to the RAID-Z2 took 258 seconds. Would have
expected the RAID-10 to write data more quickly.
It's interesting to me that the RAID-10 pool registered the 38.4GB of
data as 38.4GB, whereas the RAID-Z2 registered it as 56.4GB.
Best Regards,
Jason
On 1/3/07, Jason J. W. Williams [EMAIL
experience, RAID-5, due to the two reads, an XOR calc and a
write op per write instruction, is usually much slower than RAID-10
(two write ops). Any advice is greatly appreciated.
Best Regards,
Jason
On 1/3/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 3, 2007, 11:11:31
Hi Robert,
That makes sense. Thank you. :-) Also, it was zpool I was looking at.
zfs always showed the correct size.
-J
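The difference shows up directly in the two commands (a sketch; pool name
hypothetical): zpool list counts raw space including RAID-Z parity, while
zfs list reports usable space.

    zpool list tank   # SIZE/USED include parity overhead on raidz vdevs
    zfs list tank     # USED/AVAIL reflect space after parity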
On 1/3/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 3, 2007, 11:40:38 PM, you wrote:
JJWW Just got an interesting benchmark. I made two
Hi Anton,
Thank you for the information. That is exactly our scenario. We're 70%
write heavy, and given the nature of the workload, our typical writes
are 10-20K. Again the information is much appreciated.
Best Regards,
Jason
On 1/3/07, Anton B. Rang [EMAIL PROTECTED] wrote:
In our recent
Could this ability (separate ZIL device) coupled with an SSD give
something like a Thumper the write latency benefit of battery-backed
write cache?
Best Regards,
Jason
On 1/5/07, Neil Perrin [EMAIL PROTECTED] wrote:
Robert Milkowski wrote On 01/05/07 11:45,:
Hello Neil,
Friday, January 5
Regards,
Jason
it will illuminate whether memory truly is the issue.
Thank you very much in advance!
-felide-constructors
-fno-exceptions -fno-rtti
Best Regards,
Jason
On 1/7/07, Sanjeev Bagewadi [EMAIL PROTECTED] wrote:
Jason,
There is no documented way of limiting the memory consumption.
The ARC section of ZFS tries to adapt
We're not using the Enterprise release, but we are working with them.
It looks like MySQL is crashing due to lack of memory.
-J
On 1/8/07, Toby Thain [EMAIL PROTECTED] wrote:
On 8-Jan-07, at 11:54 AM, Jason J. W. Williams wrote:
...We're trying to recompile MySQL to give a
stacktrace
in the discussion thread on this seemed to be
related to a lot of DNLC entries due to the workload of a file server.
How would this affect a database server with operations in only a
couple very large files? Thank you in advance.
Best Regards,
Jason
On 1/10/07, Jason J. W. Williams [EMAIL PROTECTED] wrote
reads are much faster, as all the disks are activated in the
read process.
The default config on the X4500 we received recently was RAIDZ-groups
of 6 disks (across the 6 controllers) striped together into one large
zpool.
Best Regards,
Jason
On 1/10/07, Kyle McDonald [EMAIL PROTECTED] wrote:
Robert
device in terms of delivered random input
IOPS. Thus a 10-disk group of devices, each capable of 200 IOPS, will
globally act as a 200-IOPS capable RAID-Z group.
Best Regards,
Jason
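Working that claim through with hypothetical numbers: ten 200-IOPS drives
arranged as five mirror pairs can serve roughly 10 x 200 = 2,000 random
reads/s, while the same ten drives in one RAID-Z group serve roughly 200
reads/s, since every read touches the whole stripe.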
On 1/10/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 10, 2007, 10:54
being used for that with the default arc_max.
After changing it to 4GB, we haven't seen anything much over 5-10%.
Best Regards,
Jason
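The 4GB cap described above was set in /etc/system (a sketch; value in bytes):

    set zfs:zfs_arc_max = 0x100000000    # 4 GB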
On 1/10/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Thursday, January 11, 2007, 12:36:46 AM, you wrote:
JJWW Hi Robert,
JJWW Thank you! Holy
Hi Mark,
That does help tremendously. How does ZFS decide which zio cache to
use? I apologize if this has already been addressed somewhere.
Best Regards,
Jason
On 1/11/07, Mark Maybee [EMAIL PROTECTED] wrote:
Al Hopper wrote:
On Wed, 10 Jan 2007, Mark Maybee wrote:
Jason J. W. Williams
Hello all,
Just my two cents on the issue. The Thumper is proving to be a
terrific database server in all aspects except latency. While the
latency is acceptable, being able to add some degree of battery-backed
write cache that ZFS could use would be phenomenal.
Best Regards,
Jason
On 1/11/07
the FMA
integration would detect read errors.
Best Regards,
Jason
On 1/12/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello zfs-discuss,
One of our drives in x4500 is failing - it periodically
disconnects/connects. ZFS only reports READ errors and no hot-spare
automatically took
Hi Robert,
Will build 54 offline the drive?
Best Regards,
Jason
On 1/13/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Saturday, January 13, 2007, 12:06:57 AM, you wrote:
JJWW Hi Robert,
JJWW We've experienced luck with flaky SATA drives in our STK array by
JJWW unseating
Hi Torrey,
I think it does if you buy Xsan. It's still a separate product, isn't
it? Though it's more like QFS + MPXIO.
Best Regards,
Jason
On 1/15/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
2. I believe it's definitely possible to just correct your config under
Mac
that wait until U4? Thank you in advance!
Best Regards,
Jason
On 1/15/07, Roch - PAE [EMAIL PROTECTED] wrote:
Jonathan Edwards writes:
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem
Hi Torrey,
Looks like it's got a half-way decent multipath design:
http://docs.info.apple.com/article.html?path=Xsan/1.1/en/c3xs12.html
Whether or not it works is another story I suppose. ;-)
Best Regards,
Jason
On 1/15/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Got me. However, transport
Hi Philip,
I'm not an expert, so I'm afraid I don't know what to tell you. I'd
call Apple Support and see what they say. As horrid as they are at
Enterprise support they may be the best ones to clarify if
multipathing is available without Xsan.
Best Regards,
Jason
On 1/16/07, Philip Mötteli
Hi Anantha,
I was curious why segregating at the FS level would provide adequate
I/O isolation? Since all FS are on the same pool, I assumed flogging a
FS would flog the pool and negatively affect all the other FS on that
pool?
Best Regards,
Jason
On 1/17/07, Anantha N. Srirama [EMAIL
Hi Robert,
I see. So it really doesn't get around the idea of putting DB files
and logs on separate spindles?
Best Regards,
Jason
On 1/17/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Jason,
Wednesday, January 17, 2007, 11:24:50 PM, you wrote:
JJWW Hi Anantha,
JJWW I was curious why
Hi Frank,
Sun doesn't support the X2100 SATA controller on Solaris 10? That's
just bizarre.
-J
On 1/18/07, Frank Cusack [EMAIL PROTECTED] wrote:
THANK YOU Naveen, Al Hopper, others, for sinking yourselves into the
shit world of PC hardware and [in]compatibility and coming up with
well
Hi David,
I don't know if your company qualifies as a startup under Sun's regs
but you can get an X4500/Thumper for $24,000 under this program:
http://www.sun.com/emrkt/startupessentials/
Best Regards,
Jason
On 1/19/07, David J. Orman [EMAIL PROTECTED] wrote:
Hi,
I'm looking at Sun's 1U x64
that caused them to pull it?
Best Regards,
Jason
On 1/20/07, Shannon Roddy [EMAIL PROTECTED] wrote:
Frank Cusack wrote:
thumper (x4500) seems pretty reasonable ($/GB).
-frank
I am always amazed that people consider thumper to be reasonable in
price. 450% or more markup per drive from
Hi Frank,
I'm sure Richard will check it out. He's a very good guy and not
trying to jerk you around. I'm sure the hostility isn't warranted. :-)
Best Regards,
Jason
On 1/22/07, Frank Cusack [EMAIL PROTECTED] wrote:
On January 22, 2007 10:03:14 AM -0800 Richard Elling
[EMAIL PROTECTED] wrote
improvement over the X2100 in terms of reliability and
features, is still an OEM'd whitebox. We use the X2100 M2s for
application servers, but for anything that needs solid reliability or
I/O we go Galaxy.
Best Regards,
Jason
On 1/22/07, David J. Orman [EMAIL PROTECTED] wrote:
Not to be picky
designed that case with hot-plug bays, if Solaris isn't going
to support it, then those shouldn't be there in my opinion.
Best Regards,
Jason
On 1/22/07, Frank Cusack [EMAIL PROTECTED] wrote:
In short, the release note is confusing, so ignore it. Use x2100
disks as hot pluggable like you've
modifications. That all being said, the problem is that Nvidia
chipset. The MCP55 in the X2100 M2 is an alright chipset, the nForce 4
Pro just had bugs.
Best Regards,
Jason
On 1/22/07, David J. Orman [EMAIL PROTECTED] wrote:
Hi David,
Depending on the I/O you're doing the X4100/X4200 are
much better
creeping the kernel memory over the
4GB limit I set for the ARC (zfs_arc_max). What I was also curious
about is whether ZFS affects the cachelist line, or if that is just for
UFS. Thank you in advance!
Best Regards,
Jason
01/17/2007 02:28:50 GMT 2007
Page Summary                Pages                MB
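That page summary comes from the kernel debugger (a sketch):

    echo ::memstat | mdb -k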
, but I'll defer to others with more knowledge.
Best Regards,
Jason
On 1/23/07, Neal Pollack [EMAIL PROTECTED] wrote:
Hi: (Warning, new zfs user question)
I am setting up an X4500 for our small engineering site file server.
It's mostly for builds, images, doc archives, certain workspace
archives
I believe the SmartArray is an LSI, like the Dell PERC, isn't it?
Best Regards,
Jason
On 1/23/07, Robert Suh [EMAIL PROTECTED] wrote:
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475-f79-3232017
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes consisting of Y disks?
Best Regards,
Jason
On 1/23/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 1/23/07, Neal Pollack [EMAIL PROTECTED] wrote:
Hi: (Warning, new zfs user question
Hi Peter,
Ah! That clears it up for me. Thank you.
Best Regards,
Jason
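For the record, reading x+y as x data disks plus y parity disks, a 5+2 raidz2
group would be built from seven drives (a sketch; device names hypothetical):

    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0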
On 1/23/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 1/23/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes
indulgence.
Best Regards,
Jason
snapshot...unless
you clone it. Perhaps someone who knows more than I can clarify.
Best Regards,
Jason
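Cloning a snapshot into a writable filesystem is a one-liner (a sketch; names
hypothetical):

    zfs clone tank/fs@snap tank/fs-clone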
On 1/23/07, Prashanth Radhakrishnan [EMAIL PROTECTED] wrote:
Is there some way to synchronously mount a ZFS filesystem?
'-o sync' does not appear to be honoured.
No there isn't. Why do you
Wow. That's an incredibly cool story. Thank you for sharing it! Does
the Thumper today pretty much resemble what you saw then?
Best Regards,
Jason
On 1/23/07, Bryan Cantrill [EMAIL PROTECTED] wrote:
This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope its
on a single FS. We don't plan
to have more than about 40 snapshots on an FS right now.
Hope this is somewhat helpful. It's been a long time (2+ years) since
I've used Ext3 on a Linux system, so I couldn't give you a comparative
benchmark. Good luck! :-)
Best Regards,
Jason
On 1/23/07, Prashanth
Hi Wee,
Having snapshots in the filesystem that work so well is really nice.
How are y'all quiescing the DB?
Best Regards,
J
On 1/24/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 1/25/07, Bryan Cantrill [EMAIL PROTECTED] wrote:
...
after all, what was ZFS going to do with that expensive but
.
Does anyone on the list have a recommendation for ARC sizing with NFS?
Best Regards,
Jason
On 1/26/07, Jeffery Malloch [EMAIL PROTECTED] wrote:
Hi Folks,
I am currently in the midst of setting up a completely new file server using a pretty
well loaded Sun T2000 (8x1GHz, 16GB RAM) connected
Correction: "ZFS has run VERY stably for us with data integrity
issues at all." should read "ZFS has run VERY stably for us with NO
data integrity issues at all."
On 1/26/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Jeff,
We're running a FLX210 which I believe is an Engenio 2884. In our
though. :-)
Best Regards,
Jason
the LUN masking on the array. Depending on your switch that
can be less disruptive, and depending on your storage array might be
able to be scripted.
Best Regards,
Jason
Could the replication engine eventually be integrated more tightly
with ZFS? That would be a slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Project Overview:
I propose the creation of a project on opensolaris.org, to bring to the community
traps fed into a network management app yet? I'm curious what
the experience with it is?
Best Regards,
Jason
On 1/29/07, Jeffery Malloch [EMAIL PROTECTED] wrote:
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about managing
writes, reads and failures. It wants to be bit
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Best Regards,
Jason
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Could the replication engine eventually be integrated more
,
Jason
On 1/29/07, Toby Thain [EMAIL PROTECTED] wrote:
On 29-Jan-07, at 9:04 PM, Al Hopper wrote:
On Mon, 29 Jan 2007, Toby Thain wrote:
Hi,
This is not exactly ZFS specific, but this still seems like a
fruitful place to ask.
It occurred to me today that hot spares could sit in standby (spun
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time
before you can use both to their full potential together?
Best Regards,
Jason
On 1
dramatically cut down on the power. What do y'all think?
Best Regards,
Jason
On 1/29/07, Toby Thain [EMAIL PROTECTED] wrote:
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
Hi Guys,
I seem to remember the Massive Array of Idle Disks (MAID) guys ran into
a problem I think they called
Hi Nicholas,
ZFS itself is very stable and very effective as a fast FS in our
experience. If you browse the archives of the list you'll see that NFS
performance is pretty acceptable, with some performance/RAM quirks
around small files:
http://www.opensolaris.org/jive/message.jspa?threadID=19858
Hi Nicholas,
Actually Virtual Iron; they have a nice system at the moment with live
migration of Windows guests.
Ah. We looked at them for some Windows DR. They do have a nice product.
3. Which leads to: coming from Debian, how easy is system updates? I
remember with OpenBSD system updates
. Leaving 11GB for the DB.
Best Regards,
Jason
On 2/22/07, Mark Maybee [EMAIL PROTECTED] wrote:
This issue has been discussed a number of times in this forum.
To summarize:
ZFS (specifically, the ARC) will try to use *most* of the systems
available memory to cache file system data. The default
migrations is a very valid concern.
Best Regards,
Jason
On 2/22/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Wed, Feb 21, 2007 at 04:43:34PM +0100, [EMAIL PROTECTED] wrote:
I cannot let you say that.
Here in my company we are very interested in ZFS, but we do not care
about the RAID/mirror
Hi Gino,
Was there more than one LUN in the RAID-Z using the port you disabled?
-J
On 2/26/07, Gino Ruopolo [EMAIL PROTECTED] wrote:
Hi Jason,
Saturday we made some tests and found that disabling a FC port under heavy load
(MPxIO enabled) often leads to a panic. (using a RAID-Z
/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
just because that's the certified