Johan Hartzenberg wrote:
On Fri, Sep 26, 2008 at 7:03 PM, Richard Elling
[EMAIL PROTECTED] wrote:
Mikael Kjerrman wrote:
define a lot :-)
We are doing about 7-8M per second which I don't think is a lot
but perhaps it is enough to screw up
Tomas Ögren wrote:
On 27 September, 2008 - Brandon High sent me these 1,0K bytes:
On Sat, Sep 27, 2008 at 4:02 PM, Marcus Sundman [EMAIL PROTECTED] wrote:
So, is it possible to create a 5 * 1 TB raidz with 4 disks (i.e., with
one disk offline)? In that case I could use one of the 1
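One commonly suggested approach (a sketch, not from this thread; device names are placeholders) is to stand in for the missing fifth disk with a sparse file, offline it immediately, and replace it with the real disk later:
  # stand-in for the missing disk: a sparse file at least as big as the real disks
  mkfile -n 932g /var/tmp/fakedisk
  zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 /var/tmp/fakedisk
  # take the placeholder offline so nothing is written to it (the pool runs degraded)
  zpool offline tank /var/tmp/fakedisk
  # later, swap in the real fifth disk and let it resilver
  zpool replace tank /var/tmp/fakedisk c0t5d0
zpool may ask for -f when mixing a file vdev with whole disks.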
Dedhi Sujatmiko wrote:
Dedhi Sujatmiko wrote:
When I do the replication :
[EMAIL PROTECTED]:/etc/ssh# zfs send data/work/[EMAIL PROTECTED] | ssh 192.168.3.13 zfs recv data/work/[EMAIL PROTECTED]
cannot receive: invalid backup stream
I just realized that the ZFS version being used
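A quick sketch of what to compare on the two hosts before retrying the send (a version mismatch between releases is the usual cause of "invalid backup stream"):
  # run on both the sending and the receiving host
  zpool upgrade       # current pool version, and any pools at older versions
  zfs upgrade         # current filesystem version
  zpool upgrade -v    # versions this release supports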
Volker A. Brandt wrote:
[most people don't seem to know Solaris has ramdisk devices]
That is because only a select few are able to unravel the enigma wrapped in
a clue that is solaris :)
Hmmm... very enigmatic, your remark. :-)
However, in this case I suspect it is because
Ahmed Kamal wrote:
Hi everyone,
We're a small Linux shop (20 users). I am currently using a Linux
server to host our 2TBs of data. I am considering better options for
our data storage needs. I mostly need instant snapshots and better
data protection. I have been considering EMC NS20
Ahmed Kamal wrote:
Thanks for all the answers .. Please find more questions below :)
- Good to know EMC filers do not have end2end checksums! What about
netapp ?
If they are not at the end, they can't do end-to-end data validation.
Ideally, application writers would do this, but it is a lot
Ahmed Kamal wrote:
I guess I am mostly interested in MTDL for a zfs system on whitebox
hardware (like pogo), vs dataonTap on netapp hardware. Any numbers ?
It depends to a large degree on the disks chosen. NetApp uses enterprise
class disks and you can expect better reliability from such
gm_sjo wrote:
2008/9/30 Jean Dion [EMAIL PROTECTED]:
If you want performance you do not put all your I/O across the same physical
wire. Once again you cannot go faster than the physical wire can support
(CAT5E, CAT6, fibre). No matter if it is layer 2 or not. Using VLAN on
single port
BJ Quinn wrote:
Is there more information that I need to post in order to help diagnose this
problem?
Segmentation faults should be correctly handled by the software.
Please file a bug and attach the core.
http://bugs.opensolaris.org
-- richard
BJ Quinn wrote:
Please forgive my ignorance. I'm fairly new to Solaris (Linux convert), and
although I recognize that Linux has the same concept of Segmentation faults /
core dumps, I believe my typical response to a Segmentation Fault was to
upgrade the kernel and that always fixed the
BJ Quinn wrote:
True, but a search for zfs segmentation fault returns 500 bugs. It's
possible one of those is related to my issue, but it would take all day to
find out. If it's not flaky or unstable, I'd like to try upgrading to
the newest kernel first, unless my Linux mindset is truly
Tim wrote:
On Tue, Sep 30, 2008 at 7:15 PM, David Magda [EMAIL PROTECTED] wrote:
On Sep 30, 2008, at 19:09, Tim wrote:
SAS has far greater performance, and if your workload is
extremely random,
will have a longer MTBF. SATA drives
Ahmed Kamal wrote:
I observe that there are no disk vendors supplying SATA disks
with speeds above 7,200 rpm. It is no wonder that a 10k rpm disk
outperforms a 7,200 rpm disk for random workloads. I'll attribute
this to intentional market segmentation by the industry rather than
Ahmed Kamal wrote:
So, performance aside, does SAS have other benefits ? Data
integrity ? How would a 8 raid1 sata compare vs another 8 smaller
SAS disks in raidz(2) ?
Like apples and pomegranates. Both should be able to saturate a
GbE link.
You're the expert, but
Josh Hardman wrote:
Hello, I'm looking for info on adding a disk to my current zfs pool. I am
running OpenSolaris snv_98. I have upgraded my pool since my image-update.
When I installed OpenSolaris it was a machine with 2 hard disks (regular
IDE). Is it possible to add the second hard
Ahmed Kamal wrote:
Thanks for all the opinions everyone, my current impression is:
- I do need as much RAM as I can afford (16GB look good enough for me)
- SAS disks offers better iops better MTBF than SATA. But Sata
offers enough performance for me (to saturate a gig link), and its
MTBF
Blake Irvin wrote:
I'm using Neelakanth's arcstat tool to troubleshoot performance problems with
a ZFS filer we have, sharing home directories to a CentOS frontend Samba box.
Output shows an arc target size of 1G, which I find odd, since I haven't
tuned the arc, and the system has 4G of
Blake Irvin wrote:
I think I need to clarify a bit.
I'm wondering why arc size is staying so low, when i have 10 nfs
clients and about 75 smb clients accessing the store via resharing (on
one of the 10 linux nfs clients) of the zfs/nfs export. Or is it
normal for the arc target and arc
I hate to drag this thread on, but...
Erik Trimble wrote:
OK, we cut off this thread now.
Bottom line here is that when it comes to making statements about SATA
vs SAS, there are ONLY two statements which are currently absolute:
(1) a SATA drive has better GB/$ than a SAS drive
In
Anton B. Rang wrote:
Erik:
(2) a SAS drive has better throughput and IOPs than a SATA drive
Richard:
Disagree. We proved that the transport layer protocol has no bearing
on throughput or iops. Several vendors offer drives which are
identical in all respects except for
Do you have a lot of snapshots? If so, CR 6612830 could be contributing.
Alas, many such fixes are not yet available in S10.
-- richard
Luke Schwab wrote:
Hi,
I am having a problem running zpool imports when we import multiple storage
pools at one time. Below are the details of the setup:
Scott Williamson wrote:
Speaking of this, is there a list anywhere that details what we can
expect to see for (zfs) updates in S10U6?
The official release name is Solaris 10 10/08
http://www.sun.com/software/solaris/10
has links to what's new videos.
When the release is downloadable, full
Mario Goebbels wrote:
How can I diagnose why a resilver appears to be hanging at a certain
percentage, seemingly doing nothing for quite a while, even though the
HDD LED is lit up permanently (no apparent head seeking)?
The drives in the pool are WD Raid Editions, thus have TLER and should
comment below...
Janåke Rönnblom wrote:
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run
opensolaris 2008.11 snv_98 and my hardware is Sun Netra x4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks
Paul Pilcher wrote:
All;
I have a question about ZFS and how it protects data integrity in the
context of a replication scenario.
First, ZFS is designed such that all data on disk is in a consistent
state. Likewise, all data in a ZFS snapshot on disk is in a consistent
state. Further,
Timh Bergström wrote:
2008/10/10 Richard Elling [EMAIL PROTECTED]:
Timh Bergström wrote:
2008/10/9 Bob Friesenhahn [EMAIL PROTECTED]:
On Thu, 9 Oct 2008, Miles Nordin wrote:
catastrophically. If this is really the situation, then ZFS needs to
give the sysadmin
Blake Irvin wrote:
I'm also very interested in this. I'm having a lot of pain with status
requests killing my resilvers. In the example below I was trying to test to
see if timf's auto-snapshot service was killing my resilver, only to find
that calling zpool status seems to be the issue:
Nick Smith wrote:
Dear all,
Background:
I have a ZFS volume with the incorrect volume blocksize for the filesystem
(NTFS) that it is supporting.
This volume contains important data that is proving impossible to copy using
Windows XP Xen HVM that owns the data.
The disparity in volume
Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi targets and mirroring them for high availability
with T5220s.
In
Tommaso Boccali wrote:
Ciao, I have a thumper with Opensolaris (snv_91), and 48 disks.
I would like to try a new brand of HD, by replacing a spare disk with a new
one and build on it a zfs pool.
Unfortunately the official utility to map a disk to the physical position
inside the thumper
comments below...
Carsten Aulbert wrote:
Hi all,
Carsten Aulbert wrote:
More later.
OK, I'm completely puzzled right now (and sorry for this lengthy email).
My first (and currently only idea) was that the size of the files is
related to this effect, but that does not seem to be
Vincent Fox wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks
during JumpStart?
Seems like it could buy some space and performance.
Yes. There have been several people who do this regularly.
Glenn wrote a blog on how to do this when installing OpenSolaris
Bob Friesenhahn wrote:
On Wed, 15 Oct 2008, Tomas Ögren wrote:
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
This is load-share, not RAID0. Also, to answer the other fellow,
since ZFS does not support RAID0, it also does not support
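To illustrate the point (a sketch reusing the same placeholder names): each argument becomes a top-level vdev, and writes are dynamically spread across all of them; adding another vdev widens the stripe.
  zpool create mypool disk1 disk2 disk3   # three single-disk top-level vdevs, writes load-shared
  zpool add mypool disk4                  # new writes are now spread across four vdevs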
Tomas Ögren wrote:
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so. Is there any way to simulate it?
We describe how to limit the size of the ARC cache in the Evil Tuning Guide.
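For reference, the Evil Tuning Guide's tunable caps the whole ARC rather than just data; a sketch capping it at 512 MB:
  * /etc/system: cap the ARC at 512 MB (0x20000000 bytes); requires a reboot
  set zfs:zfs_arc_max = 0x20000000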
Karthik Krishnamoorthy wrote:
We did try with this
zpool set failmode=continue pool option
and the wait option before pulling running the cp command and pulling
out the mirrors and in both cases there was a hang and I have a core
dump of the hang as well.
You have to wait for the I/O
Tomas Ögren wrote:
On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes:
Tomas Ögren wrote:
On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
Tomas Ögren wrote:
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit
Francois Goudal wrote:
Hi,
I am trying a setup with a Linux Xen Dom0 on which runs an OpenSolaris
2008.05 DomU.
I have 8 hard disk partitions that I exported to the DomU (they are visible
as c4d[1-8]p0)
I have created a raidz2 pool on these virtual disks.
Now, if I shutdown the system and
Scott Williamson wrote:
Hi All,
I have opened a ticket with sun support #66104157 regarding zfs send /
receive and will let you know what I find out.
Thanks.
Keep in mind that this is for Solaris 10 not opensolaris.
Keep in mind that any changes required for Solaris 10 will first
be
Eugene Gladchenko wrote:
Hi,
I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've
encountered a FreeBSD problem (PR kern/128083) and decided about updating the
motherboard BIOS. It looked like the update went right but after that I was
shocked to see my ZFS
William Saadi wrote:
Hi all,
I have a little question.
With RAID-Z rules, what is the true usable disk space?
It depends on what data you write to it, how the writes are done, and
what compression or redundancy parameters are set.
Is there a calculation like for other RAID levels (e.g., RAID5 = number of
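As a rough rule of thumb (my sketch, not from the thread; device names are placeholders): raidz spends one disk's worth of space on parity and raidz2 two, so usable space is roughly (N-1) or (N-2) times the smallest disk, before metadata overhead. Note also that zpool list reports raw capacity while zfs list reports space after parity:
  # hypothetical 5 x 1 TB raidz: roughly (5-1) x 1 TB usable, (5-2) for raidz2
  zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
  zpool list tank    # reports raw capacity, parity included
  zfs list tank      # reports space actually available to datasets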
Robert Milkowski wrote:
Hello Richard,
Wednesday, October 15, 2008, 6:39:49 PM, you wrote:
RE Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
david lacerte wrote:
Oracle on ZFS best practice? docs? blogs? Any recent/new info related
to Running Oracle 10g and/or 11g on ZFS Solaris 10?
We try to keep the wikis up to date.
ZFS Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS for
As it happens, I'm currently involved with a project doing some performance
analysis for this... but it is currently a WIP. Comments below.
Robert Milkowski wrote:
Hello Adam,
Tuesday, October 21, 2008, 2:00:46 PM, you wrote:
ANC We're using a rather large (3.8TB) ZFS volume for our
Constantin Gonzalez wrote:
Hi,
On a busy NFS server, performance tends to be very modest for large amounts
of small files due to the well known effects of ZFS and ZIL honoring the
NFS COMMIT operation[1].
For the mature sysadmin who knows what (s)he does, there are three
possibilities:
Ricardo M. Correia wrote:
Hi Richard,
On Qua, 2008-10-22 at 14:04 -0700, Richard Elling wrote:
It is more important to use a separate disk, than to use a separate and fast
disk. Anecdotal evidence suggests that using a USB hard disk works
well.
While I don't necessarily disagree
Adam N. Copeland wrote:
Thanks for the replies.
It appears the problem is that we are I/O bound. We have our SAN guy
looking into possibly moving us to faster spindles. In the meantime, I
wanted to implement whatever was possible to give us breathing room.
Turning off atime certainly helped,
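For reference, the atime change mentioned above is a one-liner (dataset name is a placeholder):
  zfs set atime=off tank/home    # stop rewriting access times on every read
  zfs get atime tank/home        # verify the setting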
Marcus Sundman wrote:
How can I verify the checksums for a specific file?
ZFS doesn't checksum files. So a file does not have a checksum
to verify. Perhaps you want to keep a digest(1) of the files?
-- richard
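A minimal sketch of keeping digests of files, as suggested (path is a placeholder):
  digest -a sha256 /tank/data/somefile     # print a SHA-256 digest of the file
  digest -v -a md5 /tank/data/somefile     # -v includes the file name in the output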
dick hoogendijk wrote:
Is it or isn't it possible to boot off two mirrored ZFS disks and if
yes, can this be done in the upcoming solaris 10 10/08 too?
Yes. Yes. For details, please consult the ZFS Administration Guide.
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
--
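A minimal sketch of the commonly documented procedure for attaching a second boot disk (x86, example device names; the Admin Guide has the authoritative steps):
  # mirror the root pool onto a second, identically sliced disk
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # make the second disk bootable (x86; use installboot on SPARC)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0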
CR 6574286 removing a slog doesn't work
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
-- richard
Ethan Erchinger wrote:
Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
Hello,
I've looked quickly through the archives and haven't
Terry Heatlie wrote:
Folks,
I have a zpool with a raidz2 configuration which I've been switching
between two machines - an old one with a hardware problem and a new
one, which doesn't have hardware issues, but has a different
configuration . I've been trying to import the pool on the
I cannot recreate this on b101. There is no significant difference between
the two on my system.
-- richard
William Bauer wrote:
For clarity, here's how you can reproduce what I'm asking about:
This is for local file systems on build 86 and not about NFS or
any remote mounts. You can repeat
Simon Bonilla wrote:
Hi Team,
We have a customer who wants to implement the following architecture:
- Solaris 10
- Sun Cluster 3.2
- Oracle RAC
Oracle does not support RAC on ZFS, nor will ZFS work as a
shared, distributed file system. If you want a file system, then
QFS is supported
Paul B. Henson wrote:
I was playing with SXCE to get a feel for the soon to be released U6.
Performance wise, I'm hoping U6 will be better, hopefully some new code in
SXCE was introduced that hasn't quite been optimized yet... Last date I
heard was Nov 10, if I'm lucky I'll be able to start
I replied to Matt directly, but didn't hear back. It may be a driver issue
with checksum offloading. Certainly the symptoms are consistent.
To test with a workaround see
http://bugs.opensolaris.org/view_bug.do?bug_id=6686415
-- richard
Nigel Smith wrote:
Hi Matt.
Ok, got the capture and
Matt Harrison wrote:
On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
I replied to Matt directly, but didn't hear back. It may be a driver issue
with checksum offloading. Certainly the symptoms are consistent.
To test with a workaround see
http://bugs.opensolaris.org
Al Hopper wrote:
On Wed, Oct 29, 2008 at 8:43 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Wed, 29 Oct 2008, Martti Kuparinen wrote:
Bob Friesenhahn wrote:
AMD Athlon/Opteron dual core likely matches or exceeds
Intel quad core for ZFS use due to a less bottlenecked
Karl Rossing wrote:
$zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                  48.4G 10.6G    31K  /rpool
rpool/ROOT             36.4G 10.6G    18K  /rpool/ROOT
rpool/ROOT/snv_90_zfs  29.6G 10.6G  29.3G  /.alt.tmp.b-Ugf.mnt/
Philip Brown wrote:
I've recently started down the road of production use for zfs, and am hitting
my head on some paradigm shifts. I'd like to clarify whether my understanding
is correct, and/or whether there are better ways of doing things.
I have one question for replication, and one
Paul Kraus wrote:
On Thu, Oct 30, 2008 at 11:05 PM, Richard Elling [EMAIL PROTECTED] wrote:
Philip Brown wrote:
I've recently started down the road of production use for zfs, and am
hitting my head on some paradigm shifts. I'd like to clarify whether my
understanding is correct
Ah, there is a cognitive disconnect... more below.
Philip Brown wrote:
relling wrote:
This question makes no sense to me. Perhaps you can
rephrase?
To take a really obnoxious case:
lets say I have a 1 gigabyte filesystem. It has 1.5 gigabytes of physical
disk allocated to it (so
Cesare wrote:
Hi all,
I've recently started putting ZFS into production use and I'm
looking at how to back up the filesystems. I have more than one server
to migrate to ZFS, but not as many servers with a tape backup.
So I've put an L280 tape drive on one server and use it
Ross Smith wrote:
Hi Darren,
That's storing a dump of a snapshot on external media, but files
within it are not directly accessible. The work Tim et all are doing
is actually putting a live ZFS filesystem on external media and
sending snapshots to it.
Cognitive disconnect, again.
Ross Smith wrote:
Snapshots are not replacements for traditional backup/restore features.
If you need the latter, use what is currently available on the market.
-- richard
I'd actually say snapshots do a better job in some circumstances.
Certainly they're being used that way by the
Robert Milkowski wrote:
Hello zfs-discuss,
Looks like it is not supported there - what are the current plan to
bring L2ARC to Solaris 10?
L2ARC did not make Solaris 10 10/08 (aka update 6). I think the plans for
update 7 are still being formed.
-- richard
Our very own Roch (Bourbonnais) star is in a new video released
today as part of the MySQL releases today.
http://www.sun.com/servers/index.jsp?intcmp=hp2008nov05_mysql_find
In the video A Look Inside Sun's MySQL Optimization Lab
Roch gives a little bit of a tour and at around 3:00, you get a
Nathan Kroenert wrote:
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
For those disks that support it,
luxadm stop /dev/rdsk/...
has
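A sketch of the pair of commands, with a hypothetical device path:
  luxadm stop /dev/rdsk/c2t0d0s2     # ask the drive to spin down
  luxadm start /dev/rdsk/c2t0d0s2    # spin it back up before using it again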
Krzys wrote:
When the copies property is set to a value greater than 1, how does it work?
Will it store the second copy of the data on a different disk, or does it
store it on the same disk?
This is hard to describe in words, so I put together some pictures.
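For reference, a sketch of setting it (dataset name is a placeholder); my understanding is that ZFS places the extra copies on different disks when it can, and otherwise spreads them out on the same disk:
  zfs create -o copies=2 tank/important   # keep two copies of every block in this dataset
  zfs set copies=3 tank/important         # can be changed later; affects newly written data only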
Krzys wrote:
Currently I have the following:
# zpool status
pool: rootpool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
rootpool    ONLINE       0     0     0
  c1t1d0s0  ONLINE       0     0     0
errors: No
Thomas Kloeber wrote:
This is the 2nd attempt, so my apologies, if this mail got to you
already...
Folkses,
I'm in an absolute state of panic because I lost about 160GB of data
which were on an external USB disk.
Here is what happened:
1. I added a 500GB USB disk to my Ultra25/Solaris 10
Since you are using the rge driver, you might be getting bit by CR6686415.
http://bugs.opensolaris.org/view_bug.do?bug_id=6686415
The symptoms are that some packets work, more likely with small packets
like pings, but large packets might not work. I've also had trouble not
being
able to talk to
Ian Collins wrote:
I've been replicating a number of filesystems from a Solaris 10 update 6
system to an update 5 one. All of the filesystems receive fine except for
one, which fails with
cannot receive: invalid backup stream
What are the zfs versions? (zfs upgrade command output)
Ian Collins wrote:
Richard Elling wrote:
Ian Collins wrote:
I've been replicating a number of filesystems from a Solaris 10
update 6 system to an update 5 one. All of the filesystems receive
fine except for one, which fails with
cannot receive: invalid backup stream
Feng Tian wrote:
Hi,
I wonder if anyone can enlighten me on how ZFS handles posix_fadvise calls.
And if ZFS honors posix_fadvise, what is the rule of thumb for using it? I
cannot find much information on this topic.
Thanks,
UTSL
to disk?
Anyway, with the just released Solaris 10 10/08, zpool has been upgraded to
version 10 which includes option of using a separate storage device for the
ZIL.
It had been my impression that you would need to use a flash disk/SSD to
store the ZIL to improve performance, but Richard Elling
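For reference, a minimal sketch of adding a separate log device under the new pool version (device name is a placeholder):
  zpool add tank log c3t0d0    # dedicate a device to the intent log
  zpool status tank            # it appears under its own "logs" section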
Adam Leventhal wrote:
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
That is _not_ active-active, that is active-passive.
If you have a active-active system I can access the same data via both
controllers at the same time. I can't if it works like you just
described.
dick hoogendijk wrote:
On Sat, 15 Nov 2008 18:49:17 +1300
Ian Collins [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210
Ian Collins wrote:
Al Hopper wrote:
On Sat, Nov 15, 2008 at 9:26 AM, Richard Elling [EMAIL PROTECTED] wrote:
dick hoogendijk wrote:
On Sat, 15 Nov 2008 18:49:17 +1300
Ian Collins [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote
[EMAIL PROTECTED] wrote:
RTL8211C IP checksum offload is broken. You can disable it, but you
have to edit /etc/system. See CR 6686415 for details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6686415
-- richard
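The driver-specific fix is listed in the CR; as a hedged sketch, the commonly cited global workaround is to disable IP hardware checksum offload in /etc/system (reboot required):
  * /etc/system: disable hardware checksum offload globally (assumption:
  * the usual ip:dohwcksum tunable; see the CR for the driver-specific setting)
  set ip:dohwcksum = 0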
I think the proper way to state this is the driver doesn't
Chris Gerhard wrote:
My home server running snv_94 is tripping the same assertion when someone
lists a particular file:
Failed assertions indicate software bugs. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
-- richard
::status
Loading modules: [ unix genunix
Krenz von Leiberman wrote:
Does ZFS support pooled, mirrored, and raidz storage with
SATA-port-multipliers (http://www.serialata.org/portmultiplier.asp)?
Thanks.
ZFS supports block devices. You'll need port multiplier support in the
OS before ZFS can use it.
-- richard
Luke Lonergan wrote:
ZFS works marvelously well for data warehouse and analytic DBs. For lots of
small updates scattered across the breadth of the persistent working set,
it's not going to work well IMO.
Actually, it does seem to work quite well when you use a read optimized
SSD for the
Chris Greer wrote:
Right now we are not using Oracle...we are using iorate so we don't have
separate logs. When the testing was with Oracle the logs were separate.
This test represents the 13 data luns that we had during those test.
The reason it wasn't striped with vxvm is that the
Luke Lonergan wrote:
Actually, it does seem to work quite
well when you use a read optimized
SSD for the L2ARC. In that case,
random read workloads have very
fast access, once the cache is warm.
One would expect so, yes. But the usefulness of this is limited to the cases
where the
Toby Thain wrote:
On 24-Nov-08, at 3:49 PM, Miles Nordin wrote:
tt == Toby Thain [EMAIL PROTECTED] writes:
tt Why would it be assumed to be a bug in Solaris? Seems more
tt likely on balance to be a problem in the error reporting path
tt or a controller/
Scara Maccai wrote:
In the worst case, the device would be selectable,
but not responding
to data requests which would lead through the device
retry logic and can
take minutes.
that's what I didn't know: that a driver could take minutes (hours???) to
decide that a device is not
Scara Maccai wrote:
Oh, and regarding the original post -- as several
readers correctly
surmised, we weren't faking anything, we just didn't
want to wait
for all the device timeouts. Because the disks were
on USB, which
is a hotplug-capable bus, unplugging the dead disk
generated an
Paweł Tęcza wrote:
On Tue, 2008-11-25 at 23:16 +0100, Paweł Tęcza wrote:
Also I'm very curious whether I can configure Time Slider to take
a snapshot every 2, 4 or 8 hours, for example.
Or set the max number of snapshots?
UTSL
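If the Time Slider schedules are the zfs-auto-snapshot SMF instances, something along these lines should work; the property names here are from memory, so verify them against the service first:
  # hypothetical example -- check property names with:
  #   svccfg -s auto-snapshot:hourly listprop zfs
  svccfg -s svc:/system/filesystem/zfs/auto-snapshot:hourly setprop zfs/period = 4
  svccfg -s svc:/system/filesystem/zfs/auto-snapshot:hourly setprop zfs/keep = 12
  svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:hourly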
Ross wrote:
Well, you're not alone in wanting to use ZFS and iSCSI like that, and in fact
my change request suggested that this is exactly one of the things that could
be addressed:
The idea is really a two stage RFE, since just the first part would have
benefits. The key is to improve
Ross Smith wrote:
On Fri, Nov 28, 2008 at 5:05 AM, Richard Elling [EMAIL PROTECTED] wrote:
Ross wrote:
Well, you're not alone in wanting to use ZFS and iSCSI like that, and in
fact my change request suggested that this is exactly one of the things that
could be addressed:
The idea
Ray Clark wrote:
I am [trying to] perform a test prior to moving my data to solaris and zfs.
Things are going very poorly. Please suggest what I might do to understand
what is going on, report a meaningful bug report, fix it, whatever!
Both to learn what the compression could be, and to
Nicholas Lee wrote:
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling
[EMAIL PROTECTED] wrote:
In short, separate logs with rotating rust may reduce sync write
latency by
perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
Glaser, David wrote:
Hi all,
I have a Thumper (ok, actually 3) with each having one large pool,
multiple filesystems and many snapshots. They are holding rsync copies
of multiple clients, being synced every night (using snapshots to keep
‘incremental’ backups).
I’m wondering how often
Ethan Erchinger wrote:
Hi all,
First, I'll say my intent is not to spam a bunch of lists, but after
posting to opensolaris-discuss I had someone communicate with me offline
that these lists would possibly be a better place to start. So here we
are. For those on all three lists, sorry for
Ethan Erchinger wrote:
Richard Elling wrote:
I've seen these symptoms when a large number of errors were reported
in a short period of time and memory was low. What does fmdump -eV
show?
fmdump -eV shows lots of messages like this, and yea, I believe that
to be sd16 which is the SSD
Ethan Erchinger wrote:
Richard Elling wrote:
asc = 0x29
ascq = 0x0
ASC/ASCQ 29/00 is POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
http://www.t10.org/lists/asc-num.htm#ASC_29
[this should be more descriptive as the codes are, more-or-less,
standardized, I'll try to file
Mike Brancato wrote:
I've seen discussions as far back as 2006 that say development is underway to
allow the addition and remove of disks in a raidz vdev to grow/shrink the
group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do
'zpool remove tank c0t3d0' and data
Mike Brancato wrote:
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data
there are. With copies of 2 or more, in theory, an entire disk can have read
errors, and the zfs volume still works.
No, this is not a completely true statement.
The unfortunate part here is
Joseph Zhou wrote:
Yeah?
http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
Snapshot is a big deal?
Snapshot is a big deal, but you will find most hardware RAID
implementations
are somewhat limited, as the above adaptec only supports 4
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages
Vincent Fox wrote:
Whether tis nobler.
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as it's own filesystem for your typical
Solaris server. Web servers and the like.
IMHO, the *only* good reason to create a new file system