additional guests.
I'm now waiting for new SSD disks (STEC Zeus 18 GB and STEC Mach 100 GB), since those are the disks used in the Sun 7000 product. I hope they perform better.
Kristof
I have been testing the 32 GB Intel X25-E over the last week.
When I connect it to one of the onboard (Tyan 2925) SATA ports, it's not detected by OpenSolaris 2008.11.
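In case it helps with debugging: you can check whether the controller sees the disk at all with cfgadm (the port and device names below are just examples):
# cfgadm -al | grep sata
sata0/0::dsk/c1t0d0   disk   connected   configured   ok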
When I connect it to a PCIe LSI 3081, the disk is found, but I'm running into trouble when I run performance tests via FileBench.
Filebench
I've seen this error often, but in most cases the volume is shared anyway.
I think it happens as soon as the volume has snapshots.
To check if the volume is exposed or not, you can run:
iscsitadm list target -v
If the volume shows up, it's OK and you should ignore the message.
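For reference, on a system where the volume is exposed, the output looks roughly like this (target name and IQN are made up):
# iscsitadm list target -v
Target: mypool/myvol
    iSCSI Name: iqn.1986-03.com.sun:02:0000aaaa-bbbb-cccc-dddd-eeeeffff0000
    Connections: 1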
K
If you have snapshots of the root filesystem, you can recover the file.
To check for snapshots run:
zfs list -t all
If you see something like
rpool/ROOT/opensolaris@<snapshot>
then you are lucky: you will find the original vfstab file in:
/b/.zfs/snapshot/<snapshotname>/etc/vfstab
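A minimal recovery sketch, assuming the snapshot happens to be called install (the name is made up):
# zfs list -t snapshot
# cp /b/.zfs/snapshot/install/etc/vfstab /b/etc/vfstab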
K
mypool/normal
# cp UTF8-Köln.txt /mypool/mixed/
# cp ISO8859-K?ln.txt /mypool/mixed/
cp: cannot stat `/mypool/mixed/ISO8859-K\366ln.txt': Operation not supported
# cp UTF8-Köln.txt /mypool/normal/
# cp ISO8859-K?ln.txt /mypool/normal/
#
Kristof/
koeln.tar
with the same result.
A truss of the failing cp can be found in truss-ENOTSUP.txt.
Notice the \xf6 in the filename passed to stat64, which returns ENOTSUP.
When we use the UTF-8 name, the stat64 succeeds just fine; a truss of this can be found in truss-OK.txt.
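You can check whether the dataset enforces UTF-8 names by looking at the relevant properties (using mypool/mixed from my earlier post):
# zfs get utf8only,normalization mypool/mixed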
Cheers,
Kristof/
I don't think this is possible.
I already tried to add extra vdevs after the install, but I got an error message telling me that multiple vdevs are not allowed for rpool.
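From memory, the failed attempt looked something like this (the device name is made up):
# zpool add rpool c1t1d0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs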
K
Hi,
Today I tried one more time from scratch.
I re-installed server B with the latest available OpenSolaris 2008.11 build (b99); by the way, server A runs OpenSolaris 2008.11 b98.
I also re-labeled all my disks.
This time the pool shows up when I run zpool import on server B:
# zpool import
  pool:
 state: ONLINE
        c8t600144F048FFCCD8E081B33B9800d0  ONLINE
But the pool can still be imported on server1, so the pool itself is still OK.
How come I cannot import the pool on the other node?
Thanks in advance.
Kristof
post.
If something is still unclear, please let me know.
Kristof
I'm also seeing a slow import on the 2008.11 build 98 prerelease, but my situation is a little different:
I have the following setup:
a zpool striped over 2 mirrors; each mirror has 1 local disk and 1 iSCSI disk.
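For reference, the pool was created roughly like this (pool and device names are made up; the long names stand for the iSCSI LUNs):
# zpool create tank \
      mirror c1t0d0 c8t600144F000000001d0 \
      mirror c1t1d0 c8t600144F000000002d0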
I was testing iSCSI boot (Windows Vista) with gPXE; every client was booted from an iSCSI-exposed volume.
I don't know if this is already available in S10 10/08, but since OpenSolaris build 71 you can set the zpool failmode property.
see:
http://opensolaris.org/os/community/arc/caselog/2007/567/
The property can be set to one of three options: wait, continue, or panic.
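For example, on a hypothetical pool named tank:
# zpool set failmode=continue tank
# zpool get failmode tank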
A colleague told me IET is no longer an active project, so it's effectively obsolete.
There's another, newer Linux iSCSI target: iSCSI-SCST is a fork (with all due respect) of IET, updated to work on top of SCST, with many improvements and bug fixes.
see http://scst.sourceforge.net/
Thanks for pointing me to that article.
I see there is a solution (patch) for ietd.
Is there any solution for ZFS zvols?
Is Sun planning any action to solve this issue?
K
Some time ago I experienced the same issue.
Only 1 target could be connected from an ESX host; the others were shown as alternative paths to that target.
If I remember correctly, I read on a forum that it has something to do with the disk's serial number. Normally every single (i)SCSI disk
If you have a mirrored iSCSI zpool, it will NOT panic when 1 of the submirrors is unavailable.
zpool status will hang for some time, but after (I think) 300 seconds it will mark the device as unavailable.
Panicking was the default in the past, and it only occurs if all devices are unavailable.
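While one side of the mirror is down, the zpool status output looks something like this (the device names are made up):
        mirror       DEGRADED
          c1t0d0     ONLINE
          c2t0d0     UNAVAIL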
I would be very happy to have a filesystem-based zfs scrub.
We have an 18 TB zpool, and it takes more than 2 days to scrub it.
Since we cannot take snapshots during the scrub, this is unacceptable.
Kristof
The best option is to stripe over pairs of mirrors. So in your case, create a pool that stripes over 3 mirrors; it will look like:
pool
  mirror:
    thumper1
    thumper2
  mirror:
    thumper3
    thumper4
  mirror:
    thumper5
    thumper6
So writes will stripe over those 3 mirrors; a rough version of the command is below.
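A minimal sketch, assuming each thumperN is the device name of the LUN exported by that Thumper (the names are placeholders):
# zpool create pool \
      mirror thumper1 thumper2 \
      mirror thumper3 thumper4 \
      mirror thumper5 thumper6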
I don't have an exact copy of the error, but zpool status reported something like:
Pool degraded. Metadata corrupted. Please restore pool from backup.
All devices were online, but the pool could not be imported. During the import we got an I/O error.
Kristof
In the last 2 weeks we had 2 zpools corrupted.
The pools were visible via zpool import, but could not be imported anymore. During the import attempt we got an I/O error.
After the first power cut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored.
We are currently experiencing a huge performance drop on our ZFS storage server.
We have 2 pools: stor is a raidz out of 7 iSCSI nodes, and home is a local mirror pool. Recently we had some issues with one of the storage nodes; because of that, the pool was degraded. Since we did not
We never see the server sending the challenge back in the response.
What could be going on?
Thanks for all your help!
Kristof