I'm at a loss; I've managed to get myself into a fix. I'm not sure where the
problem is, but essentially I have a zpool I cannot import. This particular
pool used to have two drives (not shown below), one for cache and another for
log. I'm unsure why they are no longer detected on zpool
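(As a first step it may help to look at what a bare import attempt reports for the pool and its devices; the pool name 'tank' below is only a placeholder, and the -m form only exists on builds that already carry the missing-log-device import support:)
# zpool import           # list importable pools and the status of their devices
# zpool import tank      # attempt the import by name
# zpool import -m tank   # newer builds only: import even if a log device is missing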
Cindy,
Thanks for the quick response. Consulting the ZFS history, I note the following
actions:
imported my three-disk raid-z pool, originally created on the most recent
version of OpenSolaris but now running NexentaStor 3.0.3
upgraded my pool
destroyed two file systems I was no longer using (neither
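(For reference, the pool's command history can be pulled straight from the pool itself; 'tank' is a placeholder name:)
# zpool history tank       # user-initiated commands, in order
# zpool history -il tank   # -l adds user/host/timestamp, -i adds internally logged events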
On 27 May 2010, at 07:03, Brent Jones wrote:
On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
matt.connolly...@gmail.com wrote:
I've set up an iSCSI volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs
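(As an aside, shareiscsi=on is the legacy iscsitgt integration and only takes effect on zvols, so setting it on a filesystem is merely inherited by child zvols; on snv_134 the COMSTAR stack is the more common route. A rough sketch, with sizes, names and the LU GUID purely illustrative:)
sh-4.0# zfs create -V 20g rpool/iscsi/vol1            # back the LUN with a zvol
sh-4.0# zfs set shareiscsi=on rpool/iscsi/vol1        # legacy iscsitgt path
... or via COMSTAR ...
sh-4.0# svcadm enable stmf
sh-4.0# sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/vol1
sh-4.0# stmfadm add-view <lu-guid>
sh-4.0# itadm create-target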
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my
home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a
SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC to
the single mirrored pair. I'm running B134, with ZFS pool
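(For context, attaching an SSD as L2ARC to an existing pool is a one-liner; the device name is a placeholder:)
# zpool add tank cache c7t2d0
# zpool iostat -v tank     # the SSD then shows up under a separate 'cache' section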
- Second question, how about this: partition the two X25E drives into two, and
then mirror each half of each drive as log devices for each pool. Am I missing
something with this scheme? On boot, will the GUID for each pool get found by
the system from the partitioned log drives? IIRC
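(The scheme being described would look roughly like this, assuming each X25E has already been split into two slices with format(1M); pool, controller and slice names are placeholders:)
# zpool add pool1 log mirror c8t0d0s0 c8t1d0s0
# zpool add pool2 log mirror c8t0d0s1 c8t1d0s1
# zpool status pool1       # the labels written to each slice carry the GUIDs used at import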
Has anyone looked into the new LTFS on LTO-5 for tape backups? Any idea how
this would work with ZFS? I'm presuming ZFS send / receive are not going to
work. But it seems rather appealing to have the metadata properly with the
data, and being able to browse files directly instead of having to
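(For comparison, a send stream written to tape is an opaque blob that can only be restored by replaying it through zfs receive, not browsed file by file, which is exactly the contrast with LTFS. A rough sketch, with the snapshot name and tape device purely illustrative:)
# zfs snapshot -r tank@backup
# zfs send -R tank@backup | dd of=/dev/rmt/0n obs=1048576   # stream to tape
# dd if=/dev/rmt/0n ibs=1048576 | zfs receive -d tank2      # restore means replaying the whole stream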
Darren,
It looks like you've lost your log device. The newly integrated missing
log support will help once it's available. In the meantime, you should
run 'zdb -l' on your log device to make sure the label is still intact.
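(Checking the labels might look like the following, with the device path a placeholder; a healthy device prints four copies of the label:)
# zdb -l /dev/dsk/c5t0d0s0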
Thanks,
George
Darren Taylor wrote:
I'm at a loss, I've managed to
Hi,
Is there a summary somewhere which describes exactly which ZFS-related
events/errors appear in FMA today, and also some sort of roadmap for the
events/errors that are planned to be reported via FMA in the future?
Regards,
sendai
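(What is reported today can at least be inspected directly from the fault manager; these are the standard FMA commands, and the grep is just a convenience:)
# fmdump -e | grep -i zfs   # error reports (the ereport.fs.zfs.* classes)
# fmdump -eV                # full detail for each ereport
# fmadm faulty              # faults actually diagnosed against pools or devices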
Hi,
I have a large pool (~50TB total, ~42TB usable), composed of 4 raidz1
volumes (of 7 x 2TB disks each):
# zpool iostat -v | grep -v c4
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jonathan Loran
Because you're at pool v15, it does not matter if the log device fails while
you're running, or you're offline and trying
zfs get all claims that I have 523G used by snapshots.
I want to get rid of it, but when I look at the space used by each snapshot
I can't find the one that could occupy so much space:
daten/backups  used  959G
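(The pool-wide snapshot charge and the per-snapshot numbers are easiest to compare with something like the commands below; note that 'used' on an individual snapshot only counts blocks unique to that snapshot, so space shared by several overlapping snapshots will not show up against any single one:)
# zfs get usedbysnapshots daten/backups
# zfs list -t snapshot -o name,used,referenced -s used -r daten/backups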
On 03 August, 2010 - valrh...@gmail.com sent me these 1.2K bytes:
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on
my home workstation, a Core i7-based machine with 10 GB of RAM. I recently
added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as
On Aug 3, 2010, at 9:29 AM, Roy Sigurd Karlsbakk wrote:
- Second question, how about this: partition the two X25E drives into two,
and then mirror each half of each drive as log devices for each pool. Am I
missing something with this scheme? On boot, will the GUID for each pool get
On Aug 3, 2010, at 5:56 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 03/08/2010 22:49, Ross Walker wrote:
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais <roch.bourbonn...@sun.com> wrote:
On 27 May 2010, at 07:03, Brent Jones wrote:
On Wed, May 26, 2010 at 5:08 AM, Matt
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of
Robert Milkowski
Sent: Tuesday, August 03, 2010 5:57 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] iScsi slow
On 03/08/2010 22:49, Ross Walker
On Aug 3, 2010, at 5:56 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 03/08/2010 22:49, Ross Walker wrote:
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais <roch.bourbonn...@sun.com> wrote:
On 27 May 2010, at 07:03, Brent Jones wrote:
On Wed, May 26, 2010 at 5:08 AM, Matt
Thanks for the info!
On 03/08/2010 23:20, Ross Walker wrote:
Nothing has been violated here.
Look for the WCE flag in COMSTAR, where you can control how a given zvol should
behave (synchronous or asynchronous). Additionally, in recent builds you have zfs
set sync={disabled|default|always}, which also works with zvols.
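(A minimal sketch of the dataset-side knob, assuming a zvol named tank/iscsivol; the COMSTAR write-cache setting lives on the logical unit itself and, if memory serves, is the wcd property:)
# zfs set sync=always tank/iscsivol     # force synchronous semantics for the zvol
# zfs set sync=disabled tank/iscsivol   # treat everything as async (at risk on power loss)
# zfs get sync tank/iscsivol
# stmfadm modify-lu -p wcd=false <lu-guid>   # COMSTAR: enable the LU's write cache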
On Aug 4, 2010, at 12:23 AM, Darren Taylor wrote:
Hi George,
I think you are right. The log device looks to have suffered a complete loss;
there is no data on the disk at all. The log device was an ACARD RAM drive
(with battery backup), but somehow it has faulted, clearing all data.
Short answer: No.
Long answer: Not without rewriting the previously written data. Data
is being striped over all of the top level VDEVs, or at least it should
be. But there is no way, at least not built into ZFS, to re-allocate the
storage to perform I/O balancing. You would basically have to
On Aug 3, 2010, at 10:08 PM, Khyron wrote:
Long answer: Not without rewriting the previously written data. Data
is being striped over all of the top level VDEVs, or at least it should
be. But there is no way, at least not built into ZFS, to re-allocate the
storage to perform I/O
Wow! Thanks for the information, James. After consulting with my manager we're
going to install the text-install version.
I'm going to try that as we're installing it on a new disk. Just curious, if I
do an export of about 3 zvols and reimport them, the mounts will be there but
will I have
On 4/08/10 12:55 PM, Emily Grettel wrote:
Wow! Thanks for the information James, after consulting with my manager
we're going to install the text-install version.
Better to stick with the supportable methods, imho :-)
I'm going to try that as we're installing it on a new disk. Just
curious,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Darren Taylor
I'm not sure
where the problem is, but essentially I have a zpool I cannot import.
This particular pool used to have two drives (not shown below), one
for cache and another
On Aug 3, 2010, at 10:57 PM, Richard Elling wrote:
Unfortunately, zpool iostat is completely useless at describing performance.
The only thing it can do is show device bandwidth, and everyone here knows
that bandwidth is not performance, right? Nod along, thank you.
I totally understand
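(For per-device latency rather than raw bandwidth, the usual suggestion is iostat's extended output; the interval below is arbitrary:)
# iostat -xnz 5            # asvc_t and %b per device, five-second samples
# zpool iostat -v tank 5   # still handy for spotting imbalance across vdevs, if not latency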
On Aug 3, 2010, at 8:39 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Darren Taylor
I'm not sure
where the problem is, but essentially I have a zpool I cannot import.
This particular pool used to have two
I notice you use the word "volume", which really isn't accurate or
appropriate here.
If all of these VDEVs are part of the same pool, which as I recall you
said they are, then writes are striped across all of them (with a bias
toward the emptier, i.e. less full, VDEVs).
You probably want to zfs send the
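(Rebalancing by rewriting is typically a send/receive cycle within the same pool, after which the old dataset is destroyed; the dataset names are placeholders and enough free space must exist to hold the copy while both versions coexist:)
# zfs snapshot -r tank/data@rebal
# zfs send -R tank/data@rebal | zfs receive tank/data-new
# zfs destroy -r tank/data
# zfs rename tank/data-new tank/data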
On Aug 3, 2010, at 8:55 PM, Eduardo Bragatto wrote:
On Aug 3, 2010, at 10:57 PM, Richard Elling wrote:
Unfortunately, zpool iostat is completely useless at describing performance.
The only thing it can do is show device bandwidth, and everyone here knows
that bandwidth is not performance,