On 26/09/12 00:52, Richard Elling wrote:
On Sep 25, 2012, at 1:32 PM, Jim Klimov <jimkli...@cos.ru> wrote:
Q: What is the complete list of services needed to
set up the COMSTAR server from scratch?
Dunno off the top of my head. Network isn't needed (COMSTAR can serve
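For reference, on illumos-derived systems the COMSTAR framework and an iSCSI target are typically enabled through SMF along these lines (a sketch; the exact service FMRIs can differ by release):

```shell
# Enable the core STMF (COMSTAR) framework
svcadm enable stmf

# Enable the iSCSI target service and its dependencies
svcadm enable -r svc:/network/iscsi/target:default

# Verify both services are online
svcs stmf svc:/network/iscsi/target:default
```
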
On 11/9/11 01:42 AM, Edward Ned Harvey wrote:
I know a lot of people will say don't do it, but that's only a partial
truth. The real truth is:
At all times, if there's a server crash, ZFS will come back along at next
boot or mount, and the filesystem will be in a consistent state, that was
On 11/9/11 03:11 PM, Edward Ned Harvey wrote:
From: Evaldas Auryla [mailto:evaldas.aur...@edqm.eu]
Sent: Wednesday, November 09, 2011 8:55 AM
I was thinking about STEC ZeusRAM, but unfortunately it's SAS only
device, and it won't make into X4540 (SATA ports only), so another
option could
Hi all,
I'm trying to evaluate the risks of running an NFS share of a zfs
dataset with the sync=disabled property. The clients are VMware hosts in our
environment and the server is a SunFire X4540 Thor system. The general
recommendation is not to do this, but after testing performance with
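For context, the property change under discussion is a one-liner (a sketch; `tank/vmware` is a hypothetical dataset name):

```shell
# WARNING: with sync=disabled, NFS COMMIT is acknowledged before data
# reaches stable storage, so a server crash can silently lose the last
# few seconds of writes that clients believe are committed.
zfs set sync=disabled tank/vmware

# Revert to the default synchronous behavior
zfs set sync=standard tank/vmware
```
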
Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via an LSI SAS9200-8e HBA. Disks are visible
with SAS addresses such as this in the zpool status output:
NAME STATE READ WRITE CKSUM
cuve
/pci1000,3080@0/iport@f/disk@w5000c50025d5af66,0
On 05/19/11 03:04 PM, Hung-Sheng Tsao (Lao Tsao) Ph.D. wrote:
What is the output of:
echo | format
On 5/19/2011 3:55 AM, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA
On 05/10/11 09:45 PM, Don wrote:
Is it possible to modify the GUID associated with a ZFS volume imported
into STMF?
To clarify: I have a ZFS volume imported into STMF and exported via
iSCSI. I have a number of snapshots of this volume. I need to temporarily
go back to an older snapshot
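One way to pin a specific GUID to a logical unit is to delete it and re-create it with an explicit `guid` property via `stmfadm` (a hedged sketch; the zvol path and GUID below are hypothetical, and any views must be re-added afterwards):

```shell
# Remove the existing logical unit (initiators lose access here)
stmfadm delete-lu 600144F0C8E5A60000004DC9A1B20001

# Re-create the LU over the same zvol, forcing the old GUID
stmfadm create-lu -p guid=600144F0C8E5A60000004DC9A1B20001 \
    /dev/zvol/rdsk/tank/iscsivol

# Re-add the view so initiators can see the LU again
stmfadm add-view 600144F0C8E5A60000004DC9A1B20001
```
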
On 04/8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDav
out of the box.
And you have fairly unlimited options for application servers,
once they are decoupled from the storage servers.
It doesn't seem like
On 01/28/11 02:37 PM, Edward Ned Harvey wrote:
Let's go into that a little bit. If you're piping zfs send directly into
zfs receive, then it is an ideal backup method. But not everybody can
afford the disk necessary to do that, so people are tempted to zfs send to
a file or tape. There are
Hi, this reminds me of the dedup bug: don't use the -D switch with zfs send, as it
produces a broken stream that you won't be able to receive.
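Until that bug is fixed, the safe pattern is to send without the dedup flag and pipe straight into `zfs receive` rather than staging the stream in a file (a sketch; the pool and dataset names are hypothetical):

```shell
# Replicate a full snapshot without deduplicating the stream
zfs send tank/data@snap1 | zfs receive backup/data

# Incremental follow-up, still without the dedup flag
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data
```
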
Hi,
Here is a small script to test deduped zfs send stream:
=
#!/bin/bash
ZFSPOOL=rpool
ZFSDATASET=zfs-send-dedup-test
dd if=/dev/random of=/var/tmp/testfile1 bs=512 count=10
zfs create $ZFSPOOL/$ZFSDATASET
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile1
zfs snapshot
Sorry, the script was cut off; the ending part is:
mp/ddtest-snap2.zfs
=
It works in OpenSolaris b134, but not in OpenIndiana b147 or Solaris Express
11, where zfs receive exits on the second incremental snapshot with the error message:
cannot receive incremental stream: invalid backup stream
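For completeness, here is a reconstruction of what the full test script likely looked like; the missing middle and the output paths are assumptions (the second-snapshot step and the `/var/tmp/ddtest-*.zfs` filenames are guesses from the truncated fragments above):

```shell
#!/bin/bash
ZFSPOOL=rpool
ZFSDATASET=zfs-send-dedup-test

# Create a small random test file and a fresh dataset
dd if=/dev/random of=/var/tmp/testfile1 bs=512 count=10
zfs create $ZFSPOOL/$ZFSDATASET
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile1
zfs snapshot $ZFSPOOL/$ZFSDATASET@snap1

# Duplicate the file so the deduped stream has blocks to deduplicate
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile2
zfs snapshot $ZFSPOOL/$ZFSDATASET@snap2

# Full deduped stream, then a deduped incremental (receiving the
# incremental is where the bug reportedly triggers)
zfs send -D $ZFSPOOL/$ZFSDATASET@snap1 > /var/tmp/ddtest-snap1.zfs
zfs send -D -i @snap1 $ZFSPOOL/$ZFSDATASET@snap2 > /var/tmp/ddtest-snap2.zfs
```
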