is that the combination of SATA drives
and SAS expanders is a large economy-sized bucket of pain.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
I have no knowledge of that subject.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
option 3 because the files are mostly largish graphics files and the like.)
Thanks for the help!
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
and then let the Mac copy from one zvol to the
other-- this is starting to feel like voodoo here.)
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
for the snapshot vs zfs list on the new system and looking at
space used.)
Is this a problem? Should I be panicking yet?
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
Hi. Is the problem with ZFS supporting 4k sectors, or is the problem mixing 512-byte and 4k sector disks in one pool, or something else? I have seen a lot of discussion on the 4k issue, but I haven't understood what the actual problem ZFS has with 4k sectors is. It's getting harder and harder to find...
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11,
will zpool let me create a new pool with ashift=12 out of the box or will
I need to play around with a patched zpool binary (or the iSCSI loopback)?
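Once a pool exists, zdb will show what it got; a minimal check, assuming a pool named tank:
zdb -C tank | grep ashift   # ashift: 9 = 512-byte alignment, ashift: 12 = 4K alignment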
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
the individual disks and ZFS can handle redundancy and
recovery.
--
Dave Pooser
Manager of Information Services
Alford Media http://www.alfordmedia.com
it, suggestions gratefully accepted. My current ZFS storage servers are all built around sustained reads/sustained writes, so tuning the ZIL and L2ARC is still outside my experience.
--
Dave Pooser
Manager of Information Services
Alford Media Services, Inc
Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com
wrote:
Well ...
Slice all 4 drives into 13G and 60G.
Use a mirror of 13G for the rpool.
Use 4x 60G in some way (raidz, or stripe of mirrors) for tank
Use a mirror of 13G appended to tank
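Roughly, in commands (disk and slice names below are made up, and the root pool itself is normally created by the installer):
zpool create rpool mirror c0t0d0s0 c0t1d0s0                    # the two 13G slices
zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1    # the four 60G slices
zpool add tank mirror c0t2d0s0 c0t3d0s0    # remaining 13G slices; zpool add wants -f since the redundancy differs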
Hi Edward! Thanks for your post. I
Hello!
I don't see the problem. Install the OS onto a mirrored partition, and
configure all the remaining storage however you like - raid or mirror or
whatever.
I didn't understand your point of view until I read the next paragraph.
My personal preference, assuming 4 disks, since the OS is
Hello Jim! I understood ZFS doesn't like slices but from your reply maybe I
should reconsider. I have a few older servers with 4 bays x 73G. If I make a
root mirror pool and swap on the other 2 as you suggest, then I would have
about 63G x 4 left over. If so then I am back to wondering what to do
Many thanks to all who responded. I learned a lot from this thread! For now
I have decided to make a 3 way mirror because of the read performance. I
don't want to take a risk on an unmirrored drive.
Instead of replying to everyone separately I am following the Sun Managers
system since I read
product for the home hobbyist, though. :^)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
, unequivocally retract my
failed bit crack, but I just ordered two more of these cards for my next
project! :^)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
if other
controllers are active?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote:
On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On your system, c12 is the mpxio virtual controller; any disk which is
potentially multipath-able (and that includes
Especially since the SAS3081 cards work as expected. I guess I'll start
looking for some more of the 3Gb SAS controllers and chalk the 9211s up as
a failed bit.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
towers that have only one HBA,
put the 9211s in them and grab the LSISAS3081 cards out of those towers
for this beast. So those cards will still get productive use -- not a
failed bit, at worst just not serving the purpose I had in mind.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media
that with a
suboptimal choice for this project.
Not knowing your other requirements for the project, I'll settle
for this version :)
Actually at this point I think I have to re-revise it to "just fine for this project" had I brains enough to comprehend the output of 'format'.
:^)
--
Dave Pooser, ACSA
Manager
/scsi_vhci/disk@g5000cca222e0533f
Specify disk (enter its number): ^C
Any suggestions?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
will be
gratefully accepted...
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
are on a zfs server the same files fail to
play.
Is it a local phenomenon or a common problem?
We don't have that problem, and we have roughly 25TB of QuickTime files on
an OpenSolaris box shared over CIFS to mostly Mac clients.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
c11t7d0 ONLINE 0 0 0
c11t8d0 ONLINE 0 0 0
errors: 1 data errors, use '-v' for a list
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
: Make sure the affected devices are connected, then run
'zpool clear'.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Predictive Failure Analysis: 0
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
-- well, that seems
much less likely than the idea that I just have a bad controller that needs
replacing.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
I have a 14-drive pool, configured as 2x 7-drive raidz2, with L2ARC and slog devices attached.
I had a port go bad on one of my controllers (both are SAT2-MV8 cards), so I need to replace it (I have no spare ports on either card). My spare controller is an LSI 1068-based 8-port card.
My plan is to remove
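For what it's worth, the swap itself is usually just an export/import cycle; a sketch, assuming the pool is named tank:
zpool export tank    # quiesce and release the pool
# power down, swap the bad SAT2-MV8 for the LSI 1068 card, recable
zpool import tank    # ZFS finds the disks by their on-disk labels, not by controller/target numbers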
I just query for the percentage in use via snmp (net-snmp)
In my snmpd.conf I have:
extend .1.3.6.1.4.1.2021.60 drive15 /usr/gnu/bin/sh /opt/utils/zpools.ksh rpool space
and the zpools.ksh is:
#!/bin/ksh
export PATH=/usr/bin:/usr/sbin:/sbin
export LD_LIBRARY_PATH=/usr/lib
zpool list -H -o capacity "$1"   # the 'capacity' property is the percentage in use, e.g. 45%
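To test it from the monitoring side (hostname and community string below are placeholders):
snmpwalk -v2c -c public nas01 .1.3.6.1.4.1.2021.60    # the script's output appears under the OID registered above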
Can you provide some specifics to see how bad the writes are?
project,
much of it from the same folks who have experience producing
enterprise-grade ZFS. Speaking for myself, if Solaris 11 doesn't include
COMSTAR I'm going to have to take a serious look at another alternative for
our show storage towers
--
Dave Pooser, ACSA
Manager of Information Services
ideas?
Is it possible that snapshots were renamed on the sending pool during
the send operation?
-- Dave
--
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
David Dyer-Bennet wrote:
On Tue, August 10, 2010 13:23, Dave Pacheco wrote:
David Dyer-Bennet wrote:
My full backup still doesn't complete. However, instead of hanging the
entire disk subsystem as it did on 111b, it now issues error messages.
Errors at the end.
[...]
cannot receive
I've been looking at using consumer 2.5" drives also; I think the ones I've settled on are the Hitachi 7K500 500 GB. These are 7200 RPM; I'm concerned the 5400 RPM models might be a little too low performance-wise. The main reasons for Hitachi were that performance seems to be among the top 2 or 3 in the
Ok guys, can we please kill this thread about commodity versus enterprise
hardware?
+1
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Looks like the bug affects builds through snv_137. Patches are available from the
usual location-- https://pkg.sun.com/opensolaris/support for OpenSolaris.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
10.
*You can't really compare ZFS to conventional RAID implementations, but if
you look at it from 50,000 feet and squint you get the similarities.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
emails to multiple addresses-- may yet push me back
to my default CentOS platform, but to the extent that Oracle is even in the
running it's because of ZFS.)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
I trimmed, and then a mailing list user complained that the context of what I was replying to was missing. Can't win :P
, described in the
administration guide:
http://wikis.sun.com/display/FishWorks/Documentation
-- Dave
--
David Pacheco, Sun Microsystems Fishworks. http://blogs.sun.com/dap/
keep backups. It's the
invisible data loss that makes life suboptimal.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
boxes. Is there a good resource on doing
something like that with an OpenSolaris storage server? I could see that as
a project I might want to attempt.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
for me. It does require X11
on your machine.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
On 4/25/10 6:11 PM, Rich Teer rich.t...@rite-group.com wrote:
I tried going to that URL, but got a 404 error... :-( What's the correct
one, please?
http://code.google.com/p/maczfs/
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
writes/reads), how much benefit will I see from the SAS
interface?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
being enough to make the hardware ID it as bad.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
On 18 apr 2010, at 00.52, Dave Vrona wrote:
Ok, so originally I presented the X-25E as a
reasonable approach. After reading the follow-ups,
I'm second guessing my statement.
Any decent alternatives at a reasonable price?
How much is reasonable? :-)
How about $1000 per device
The Acard device mentioned in this thread looks interesting:
http://opensolaris.org/jive/thread.jspa?messageID=401719#401719
Or a DDRdrive X1? Would the X1 need to be mirrored?
IMHO, whether a dedicated log device needs redundancy (mirroring) should be determined by the dynamics of each end-user environment (zpool version, goals/priorities, and budget).
Well, I populate a chassis with dual HBAs because my _perception_ is they tend
to fail more than other cards.
Hi all,
I'm planning a new build based on a SuperMicro chassis with 16 bays. I am
looking to use up to 4 of the bays for SSD devices.
After reading many posts about SSDs I believe I have a _basic_ understanding of
a reasonable approach to utilizing SSDs for ZIL and L2ARC.
Namely:
ZIL:
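For reference, attaching the devices once the hardware is in place is brief; a sketch, assuming a pool named tank and made-up device names:
zpool add tank log mirror c3t0d0 c3t1d0    # dedicated ZIL (slog), mirrored
zpool add tank cache c3t2d0 c3t3d0         # L2ARC cache devices (loss of a cache device is non-fatal)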
Ok, so originally I presented the X-25E as a reasonable approach. After
reading the follow-ups, I'm second guessing my statement.
Any decent alternatives at a reasonable price?
drive densities, but a quick Google search shows that they've overpromised
and underdelivered on Solaris support in the past. Is anybody currently
using those cards on OpenSolaris?
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
From pages 29, 83, 86, 90, and 284 of the 10/09 Solaris ZFS Administration Guide, it sounds like a disk designated as a hot spare will:
1. Automatically take the place of a bad drive when needed
2. Automatically be detached back to the spare pool when a new device is inserted and
Hi Dave,
I'm unclear about the autoreplace behavior with one spare that is connected to two pools. I don't see how it could work if the autoreplace property is enabled on both pools, which formats and replaces a spare
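For reference, the spare and autoreplace mechanics boil down to a few commands (pool and device names made up):
zpool add tank spare c2t5d0      # designate a hot spare
zpool set autoreplace=on tank    # a new disk in the same slot is automatically formatted and resilvered
zpool status tank                # spares are listed in their own section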
Because I already partitioned the disk into slices. Then
I indicated
Try:
zfs list -r -t snapshot zp1
--
Dave
On 2/21/10 5:23 PM, David Dyer-Bennet wrote:
I thought this was simple. Turns out not to be.
bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type
Fails equally on all the variants of pool name
IT firmware)
8x 2TB Hitachi 7200RPM SATA drives (2 connected to each LSI and 2 to
motherboard SATA ports)
2x 60GB Imation M-class SSD (boot mirror)
Qlogic 2440 PCIe Fibre Channel HBA
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
is sequential write speed.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
0.0 1816.2 0.0 0.1 0.0 0.5 0 6 c9t1d0
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
dinner while I'm about it. I'll report back to the
list with any progress or lack thereof.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
On my motherboard, I can make the onboard SATA ports show up as IDE or SATA; you may look into that. It would probably be something like AHCI mode.
Yeah, I changed the motherboard setting from Enhanced to AHCI and now those ports show up as SATA.
--
Dave Pooser, ACSA
Manager of Information
writes to be
synchronous -- thanks to Richard for that pointer.
Five hours from tearing my hair out to toasting a success-- this list is a
great resource!
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media http://www.alfordmedia.com
Use create-lu to give the clone a different GUID:
sbdadm create-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
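Putting the steps together, sketched with generic names:
zfs clone tank/vol@snap tank/vol-clone            # clone the zvol snapshot
sbdadm create-lu /dev/zvol/rdsk/tank/vol-clone    # register the clone as a new LU with its own GUID
sbdadm list-lu                                    # confirm the new GUID differs from the original's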
--
Dave
On 2/8/10 10:34 AM, Scott Meilicke wrote:
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00
for mounting cloned volumes under Linux with b130. I don't have any Windows clients to test with.
--
Dave
On 2/8/10 11:23 AM, Scott Meilicke wrote:
Sure, but that will put me back into the original situation.
-Scott
Thanks for taking the time to write this - very useful info :)
--
Dave Abrahams
BoostPro Computing
http://boostpro.com
The case has been identified and I've just received an IDR, which I will test next week. I've been told the issue is fixed in update 8, but I'm not sure if there is an nv fix target.
Anyone know if there is an OpenSolaris fix for this issue, and when?
These seem to be related.
Thanks for the reply but this seems to be a bit different.
A couple of things I failed to mention:
1) This is a secondary pool and not the root pool.
2) The snapshots are trimmed to only keep 80 or so.
The system boots and runs fine. It's just an issue for this secondary pool and
Hello all,
I have a situation where zpool status shows no known data errors but all processes on a specific filesystem are hung. This has happened two times before since we installed OpenSolaris 2009.06 snv_111b. For instance, there are two file systems in this pool; 'zfs get all' on one
send stream. It does not verify the ZFS format/integrity of the
stream - the only way to do that is to zfs recv the stream into ZFS.
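In practice that means something like the following, with placeholder names:
zfs receive -v tank/verify < /backup/mypool-full.zfs    # receiving into a scratch dataset is the real integrity check
zfs destroy -r tank/verify                              # discard the scratch copy afterwards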
--
Dave
that it has been fixed in OpenSolaris as well. I can't tell by the info
on the bugs DB - it seems like it hasn't been fixed in OpenSolaris. If
it has, then the status should reflect it as Fixed/Closed in the bug
database...
--
Dave
Trevor Pretty wrote:
Dave
Yep that's an RFE. (Request
Richard Elling wrote:
On Aug 28, 2009, at 12:15 AM, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all Solaris versions.
In a former life, I worked at Sun to identify things like this that
affect
Can anyone from Sun comment on the status/priority of bug ID 6761786?
Seems like this would be a very high priority bug, but it hasn't been
updated since Oct 2008.
Has anyone else with thousands of volume snapshots experienced the hours-long import process?
--
Dave
. According to the link/bug
report above it will take roughly 5.5 hours to import my pool (even when
the pool is operating perfectly fine and is not degraded or faulted).
This is obviously unacceptable to anyone in an HA environment. Hopefully
someone close to the issue can clarify.
--
Dave
Blake
Maybe you can run a DTrace probe using Chime?
http://blogs.sun.com/observatory/entry/chime
Initial Traces - Device IO
I don't mean to be offensive, Russel, but if you do ever return to ZFS, please promise me that you will never, ever, EVER run it virtualized on top of NTFS (a.k.a. the worst file system ever) in a production environment. Microsoft Windows is a horribly unreliable operating system in situations
I don't think is at liberty to discuss ZFS Deduplication at this point in time:
http://www.itworld.com/storage/71307/sun-tussles-de-duplication-startup
Hopefully, the matter is resolved and discussions can proceed openly.
Send lawyers, guns and money. - Warren Zevon
Anyone (Ross?) creating ZFS pools over iSCSI connections will want to
pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk
problems:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=649
Yay!
Haudy Kazemi wrote:
I think a better question would be: what kind of tests would be most
promising for turning some subclass of these lost pools reported on
the mailing list into an actionable bug?
my first bet would be writing tools that test for ignored sync cache
commands leading to lost
Cindy, my question is about what system-specific info is maintained that would need to be changed. To take my example, my E450, homer, has disks that are failing and it's a big clunky server anyway, and management wants to decommission it. But we have an old 220R racked up doing nothing, and
I'll start:
- The commands are easy to remember -- all two of them. Which is easier, SVM
or ZFS, to mirror your disks? I've been using SVM for years and still have to
break out the manual to use metadb, metainit, metastat, metattach, metadetach,
etc. I hardly ever have to break out the ZFS
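For comparison, the ZFS side of mirroring a root disk (hypothetical slice names):
zpool attach rpool c0t0d0s0 c0t1d0s0    # attach a second disk; the pool resilvers into a mirror
zpool status rpool                      # watch the resilver; remember installboot/installgrub on the new disk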
So I had an E450 running Solaris 8 with VxVM encapsulated root disk. I
upgraded it to Solaris 10 ZFS root using this method:
- Unencapsulate the root disk
- Remove VxVM components from the second disk
- Live Upgrade from 8 to 10 on the now-unused second disk
- Boot to the new Solaris 10 install
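From Solaris 10 10/08 onward, the hop from a UFS boot environment to a ZFS root is also a Live Upgrade job; a sketch, with made-up pool, BE, and disk names:
zpool create rpool c0t1d0s0      # root pools want an SMI-labeled slice
lucreate -n s10zfsBE -p rpool    # copy the running UFS BE into the ZFS pool
luactivate s10zfsBE
init 6                           # boot into the new ZFS-root boot environment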
can't test this myself at the moment, but the reporter of Bug ID
6733267 says even one failed slog from a pair of mirrored slogs will
prevent an exported zpool from being imported. Has anyone tested this
recently?
--
Dave
to be committed to the main pool
vdevs - you will never know if you lost any data or not.
I think this thread is the latest discussion about slogs and their behavior:
https://opensolaris.org/jive/thread.jspa?threadID=102392&tstart=0
--
Dave
Eric Schrock wrote:
On May 19, 2009, at 12:57 PM, Dave wrote:
If you don't have mirrored slogs and the slog fails, you may lose any
data that was in a txg group waiting to be committed to the main pool
vdevs - you will never know if you lost any data or not.
None of the above is correct
in a 2U:
http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
They both work in normal PCI-E slots on my Tyan 2927 mobos.
Finding good non-Sun hardware that works very well under OpenSolaris is
frustrating to say the least. Good luck.
--
Dave
at
the ZFS level instead of the application level it would be very cool.
--
Dave
of ZFS. If you want to make
sure your data is not corrupted over the wire, use IPSec. If you want to
prevent corruption in RAM, use ECC sticks, etc.
--
Dave
Gary Mills wrote:
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
It's simply a consequence
this ZFS
behaviour and force it to wait until my cluster software decides about the
active server.
Use the cachefile=none option whenever you import the pool on either server:
zpool import -o cachefile=none xpool
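You can confirm the setting afterwards:
zpool get cachefile xpool    # should report none, so the pool is never auto-imported from /etc/zfs/zpool.cache at boot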
--
Dave
Frank Cusack wrote:
When you try to back up the '/' part of the root pool, it will get
mounted on the altroot itself, which is of course already occupied.
At that point, the receive will fail.
So far as I can tell, mounting the received filesystem is the last
step in the process. So I guess
Will Murnane wrote:
On Thu, Feb 12, 2009 at 20:05, Tim t...@tcsac.net wrote:
Are you selectively ignoring responses to this thread or something? Dave
has already stated he *HAS IT WORKING TODAY*.
No, I saw that post. However, I saw one unequivocal "it doesn't work" earlier (even if I can't
near the ability to do so.
--
Dave
Henrik Johansson wrote:
I tried to export the zpool also, and I got this; the strange part is that it sometimes still thinks that the ubuntu-01-dsk01 dataset exists:
# zpool export zpool01
cannot open 'zpool01/xvm/dsk/ubuntu-01-dsk01': dataset does not exist
cannot unmount '/zpool01/dump':
You can also import pools by their unique ID instead of by name. If the
pool is not imported, 'zpool import' with no arguments should list the
pool IDs. If the pool is imported, 'zpool get guid poolname' will list
the pool ID.
Beware that if the zpools have the same mountpoints set within any
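A quick sketch (the numeric ID below is invented):
zpool import                              # with no arguments, lists exportable pools and their numeric IDs
zpool import 1234567890123456789 tank2    # import by ID, optionally under a new name
zpool get guid tank2                      # show the GUID of an already-imported pool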
Brent wrote:
Does anyone know if this card will work in a standard PCI Express slot?
Yes. I have an AOC-USAS-L8i working in a regular PCI-E slot in my Tyan
2927 motherboard.
The AOC-SAT2-MV8 also works in a regular PCI slot (although it is a PCI-X card).
D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount'; that only unmounts a single ZFS filesystem within a pool, but the pool is still active. 'zpool export' releases the pool from the OS, then 'zpool import' on the other machine.
(...)
with all respect: I never read such a non
Upgrading to b105 seems to improve zfs send/recv quite a bit. See this
thread:
http://www.opensolaris.org/jive/message.jspa?messageID=330988
--
Dave
Kok Fong Lau wrote:
I have been using ZFS send and receive for a while and I noticed that when I
try to do a send on a zfs file system
up
then you can probably import it, but beware that there may be incompatibilities and bugs in either the Solaris or Mac ZFS code that may cause you to lose your data.
--
Dave
LEES, Cooper wrote:
M,
Just taking a stab at it.
Yes. This should work - well, mounting it locally through iSCSI