Hi
I have a SAN disk visible on two nodes (global or zone).
On the first node, I can create a pool using zpool create x1 sandisk.
If I try to reuse this disk on the first node, I get a 'vdev in use' warning.
If I try to create a pool on the second node using the same disk, zpool create
x2 sandisk,
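One quick way to see whether a disk already carries another host's pool, as a rough
sketch (the device path c2t0d0s0 is made up here; zdb -l only dumps whatever ZFS
labels are already on the device, and zpool import with no arguments lists pools
that are visible to this host but not currently imported):
# zdb -l /dev/dsk/c2t0d0s0
# zpool import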
Hi!
I just want to check with the community to see if this is normal.
I have used an X4500 with 500 GB disks and I'm not impressed by the copy
performance.
I can run several jobs in parallel and get close to 400 MB/s, but I need better
performance
from a single copy. I have tried to be EVIL as
Currently it is easy to share a ZFS volume as an iSCSI target. Has
there been any thought toward adding the ability to share a ZFS volume
via USB-2 or Firewire to a directly attached client?
There is a substantial market for storage products which act like a
USB-2 or Firewire drive. Some of
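For reference, the iSCSI sharing mentioned above looks roughly like this (the pool
and volume names are placeholders, and this assumes the shareiscsi property present
in OpenSolaris builds of this era):
# zfs create -V 10g tank/iscsivol
# zfs set shareiscsi=on tank/iscsivol
# iscsitadm list target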
# lucreate -n B85
Analyzing system configuration.
Hi,
after typing
# lucreate -n B85
I get the following error:
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name BE1.
Current boot environment is named BE1.
Creating initial
| Is it really true that as the guy on the above link states (Please
| read the link, sorry) when one iSCSI mirror goes off line, the
| initiator system will panic? Or even worse, not boot itself cleanly
| after such a panic? How could this be? Anyone else with experience
| with iSCSI based
Roman
I didn't think that we had live upgrade support for a zfs root filesystem yet.
T
Roman Morokutti wrote:
# lucreate -n B85
Analyzing system configuration.
Hi,
after typing
# lucreate -n B85
I get the following error:
No name for current boot environment.
INFORMATION: The
On Mon, 2008-04-07 at 20:21 -0600, Keith Bierman wrote:
On Apr 7, 2008, at 1:46 PM, David Loose wrote:
my Solaris samba shares never really played well with iTunes.
Another approach might be to stick with Solaris on the server, and
run netatalk (netatalk.sourceforge.net) instead of
On my drive array (capable of 260MB/second single-process writes and
450MB/second single-process reads) 'zpool iostat' reports a read rate of
about 59MB/second and a write rate of about 59MB/second when executing
'cp -r' on a directory containing thousands of 8MB files. This seems
very similar
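To watch the pool's actual throughput while a copy like this runs, something along
these lines can be left going in another window (tank is a placeholder pool name
and 5 is the sampling interval in seconds):
# zpool iostat tank 5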
I also found that a very similar problem is
described in Bug ID 6442921.
lubootdev reported:
# /etc/lib/lu/lubootdev -b
/dev/dsk/c0d0p0
Using this info for -C I got the following:
# lucreate -C /dev/dsk/c0d0p0 -n B85
Analyzing system configuration.
No name for current boot
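One more thing that might be worth a try, as a sketch: name the current boot
environment explicitly rather than letting lucreate assign BE1 (the BE names here
are only examples):
# lucreate -c BE_current -n B85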
I didn't think that we had live upgrade support for
a zfs root filesystem yet.
Original quote from Lori Alt:
ZFS is ideally suited to making “clone and
modify” fast, easy, and space-efficient. Both
“clone and modify” tools will work much better
if your root file system is ZFS. (The new install
Hi,
Where was this taken from? From Live Upgrade??? As far as I know, Live Upgrade
works only with UFS. When I first installed, I chose UFS precisely so that I
could do Live Upgrade.
What you have there is something I agree with, but NOT for Live Upgrade, but
rather to work with
It's true that liveupgrade doesn't support zfs yet. That
support will become available in the build 89 or 90
time frame, at the same time that zfs as a root file system
is supported.
Lori
Ether.pt wrote:
Hi,
Where was this taken from? From Live Upgrade??? As far as I know, Live Upgrade
Bob Friesenhahn schrieb:
On my drive array (capable of 260MB/second single-process writes and
450MB/second single-process reads) 'zpool iostat' reports a read rate of
about 59MB/second and a write rate of about 59MB/second when executing
'cp -r' on a directory containing thousands of 8MB
In our environment, the politically and administratively simplest
approach to managing our storage is to give each separate group at
least one ZFS pool of their own (into which they will put their various
filesystems). This could lead to a proliferation of ZFS pools on our
fileservers (my current
Oh, one more thing:
- a tool to schedule the deletion of snapshots (keep the past 14 daily, 4
weekly, 6 monthly, etc.)
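Nothing like that ships as a tool today as far as I know, but the daily part of
such a policy is a short shell sketch (this assumes snapshots named like
tank/home@daily-YYYYMMDD and keeps the newest 14):
# zfs list -H -t snapshot -o name -s creation | grep '^tank/home@daily-' | \
    awk -v keep=14 '{ s[NR] = $0 } END { for (i = 1; i <= NR - keep; i++) print s[i] }' | \
    xargs -n 1 zfs destroy
Weekly and monthly snapshots would get the same treatment with their own prefix
and retention count.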
After hearing many vehement requests for expanding RAID-Z vdevs, Matt Ahrens
and I sat down a few weeks ago to figure out a mechanism that would work.
While Sun isn't committing resources to implementing a solution, I've written
up our ideas here:
Hi...
System Config:
2 Intel 3 GHz 5160 dual-core CPUs
10 SATA 750 GB disks running as a ZFS RAIDZ2 pool
8 GB Memory
SunOS 5.11 snv_79a on a separate UFS mirror
~150 read I/Os/second, ~300 write I/Os/second
On Tue, 8 Apr 2008, [EMAIL PROTECTED] wrote:
a few seconds and the links list in, perhaps, 60 seconds. Is there a
difference in what ls has to do when listing links versus listing regular
files
in ZFS that would cause a slowdown?
Since you specified '-t', the links have to be dereferenced
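A quick way to check whether '-t' is actually the expensive part would be to time
both forms in the directory in question (ptime is only used as the timer here):
# ptime ls -t > /dev/null
# ptime ls > /dev/null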
Hello all. I am looking to verify my zfs backups in the
most minimal way, i.e. without having to md5 the whole volume.
Is there a way to get a checksum for a snapshot and compare it against
another zfs volume containing all the same blocks, to verify that they
contain the same information?
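One rough way to get at this without hashing every file is to checksum the send
stream as it is written out, then re-checksum the stored copy later (the dataset
and file names below are only placeholders):
# zfs send tank/data@monday | tee /backup/tank-data-monday.zfs | digest -a sha256
# digest -a sha256 /backup/tank-data-monday.zfs
This verifies that the archived stream still matches what was sent; it does not by
itself prove that a second, live ZFS volume holds the same data.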
Another approach might be to stick with Solaris on the server, and
run netatalk instead of SAMBA (or, you
know, your Macs can speak NFS ;).
I also built mt-daapd on Solaris (just for fun) and iTunes can see that
shared library - however this wasn't much use to me as I still want to
use
*Platform:*
* OpenSolaris snv79 on an older beige-box Intel x86
* Apple XRaid disk box, with 7 JBOD disks
* LSI FC controller -
http://www.lsi.com/storage_home/products_home/host_bus_adapters/fibre_channel_hbas/lsi7404eplc/index.html?remote=1locale=EN