If you like, you can later add a fifth drive
relatively easily by
replacing one of the slices with a whole drive.
how does this affect my available storage if I were to replace both of those
sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have
expanded my storage? If
On 22 Apr 2010, at 20:50, Rich Teer rich.t...@rite-group.com wrote:
On Thu, 22 Apr 2010, Alex Blewitt wrote:
Hi Alex,
For your information, the ZFS project lives (well, limps really) on
at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
from there and we're working on
I'm trying to provide some disaster-proofing on Amazon EC2 by using a
ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots.
My aim is to ensure that, should the instance terminate, a new instance can
spin up, attach the EBS volume, and auto-/re-configure the zpool.
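A minimal sketch of that recovery path, assuming the pool is called "tank" and the volume/instance IDs and device path are placeholders (none of these names come from the thread):

```shell
# Hypothetical sketch: vol-xxxxxxxx, i-xxxxxxxx, /dev/sdf and "tank"
# are all placeholder names, not from the original setup.

# After the replacement instance boots, attach the surviving EBS volume:
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

# Import the pool from the newly visible device rather than relying on
# the zpool.cache baked into the image:
zpool import -f tank

# Periodic snapshots can then be streamed off-instance for the S3 backup:
zfs snapshot tank/data@backup-$(date +%Y%m%d)
```

The `-f` on import is needed because the pool was last in use by a host that no longer exists.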
I've
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
already defined. Starting an instance from this image, without attaching the
EBS volume, shows the pool structure exists and that the pool state is
UNAVAIL (as
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of thomas
Someone on this list threw out the idea a year or so ago to just set up
2 ramdisk servers, export a ramdisk from each, and create a mirror slog
from them.
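Sketched out, assuming the two ramdisk servers export their ramdisks over iSCSI and they show up under the placeholder device names below:

```shell
# Hypothetical sketch: c2t1d0 and c3t1d0 stand in for the two iSCSI LUNs
# backed by ramdisks on the two remote servers.

# Add the pair as a mirrored slog to an existing pool "tank":
zpool add tank log mirror c2t1d0 c3t1d0

# Confirm the mirrored log vdev appears:
zpool status tank
```

Losing one ramdisk server then degrades the log mirror instead of losing the slog outright.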
Isn't the whole point of a
I'm not actually issuing any when starting up the new instance. None are
needed; the instance is booted from an image which has the zpool configuration
stored within, so it simply starts and sees that the devices aren't available;
they only become available after I've attached the EBS device.
Before
From: Richard Elling [mailto:richard.ell...@gmail.com]
One last try. If you change the real directory structure, how are
those
changes reflected in the snapshot directory structure?
Consider:
echo whee > /a/b/c/d.txt
[snapshot]
mv /a/b /a/B
What does
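Spelling out the example above as a sketch, assuming a filesystem mounted at /a with a snapshot named "snap" (the dataset and snapshot names are assumed): the snapshot directory keeps the tree as it was at snapshot time, so the later rename is not reflected there.

```shell
# Assumed names: pool/a mounted at /a, snapshot called "snap".
mkdir -p /a/b/c
echo whee > /a/b/c/d.txt
zfs snapshot pool/a@snap
mv /a/b /a/B

# The live filesystem shows the renamed directory B...
ls /a
# ...while the snapshot still shows the pre-rename directory b:
ls /a/.zfs/snapshot/snap
</imports>
```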
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:
I'm not actually issuing any when starting up the new instance. None are
needed; the instance is booted from an image which has the zpool
configuration stored within, so it simply starts and sees that the devices
aren't available, which become
The instances are ephemeral; once terminated they cease to exist, as do all
their settings. Rebooting an image keeps any EBS volumes attached, but this
isn't the case I'm dealing with - it's when the instance terminates
unexpectedly. For instance, if a reboot operation doesn't succeed or if
One thing I've just noticed is that after a reboot of the new instance, which
showed no data on the EBS volume, the files return. So:
1. Start new instance
2. Attach EBS vols
3. `ls /foo` shows no data
4. Reboot instance
5. Wait a few minutes
6. `ls /foo` shows data as expected
Not sure if this
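One hedged workaround worth trying, assuming the pool is named "tank" (a placeholder): force a re-import after attaching the volume instead of rebooting, so the pool picks up the now-present devices.

```shell
# Hypothetical sketch; "tank" and /foo are placeholder names.
# After step 2 (EBS volume attached), instead of rebooting:
zpool export tank 2>/dev/null
zpool import -f tank

# Check the pool and the data:
zpool status tank
ls /foo
```

If this works, it suggests the reboot only helps because it forces device re-enumeration and a fresh import.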
On 23/04/2010 12:24, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of thomas
Someone on this list threw out the idea a year or so ago to just set up
2 ramdisk servers, export a ramdisk from each and create a mirror slog
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:
The instances are ephemeral; once terminated they cease to exist, as do all
their settings. Rebooting an image keeps any EBS volumes attached, but this
isn't the case I'm dealing with - it's when the instance terminates
unexpectedly. For
On 23/04/2010 13:38, Phillip Oldham wrote:
The instances are ephemeral; once terminated they cease to exist, as do all
their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the
case I'm dealing with - its when the instance terminates unexpectedly. For instance, if a
I can replicate this case: start new instance, attach EBS volumes, reboot
instance, data finally available.
Guessing that it's something to do with the way the volumes/devices are seen
and then made available.
I've tried running various operations (offline/online, scrub) to see whether it
will
On Apr 22, 2010, at 11:03 AM, Geoff Nordli geo...@grokworx.com wrote:
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM
On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com
wrote:
If you combine the hypervisor and storage server and have
I would have thought that file movement from one FS to another within the
same pool would be almost instantaneous. Why does it need to go to the platter
for such a move?
# time cp /tmp/blockfile /pcshare/1gb-tempfile
real    0m5.758s
# time mv /pcshare/1gb-tempfile .
real    0m4.501s
Both FSs
Sunil wrote:
If you like, you can later add a fifth drive
relatively easily by
replacing one of the slices with a whole drive.
how does this affect my available storage if I were to replace both of those
sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have
Hi,
I have been playing with OpenSolaris for a while now. Today I tried to
deduplicate the backup VHD files Windows Server 2008 generates. I made a
backup before and after installing the AD role and copied the files to the
share on OpenSolaris (build 134). First I got a straight 1.00x, then I set recordsize
Bogdan,
Thanks for pointing this out and passing along the latest news from Oracle.
Stamp out FUD wherever possible. At this point, unless it is said officially
(and Oracle generally keeps pretty tight-lipped about products and directions),
people should regard most things as hearsay.
Cheers,
You might note that dedup only dedupes data that is written after the flag is set.
It does not retroactively dedupe already-written data.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
It was active all the time.
Made a new ZFS filesystem with -o dedup=on and copied with the default record
size: got no dedup. Deleted the files, set recordsize to 4k: dedup ratio 1.29x.
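That sequence can be sketched as follows, with assumed pool/dataset names; note that the dedup ratio is reported at the pool level, and recordsize only affects blocks written after it is set.

```shell
# Sketch with placeholder names ("tank/vhd"; source path is assumed).
zfs create -o dedup=on -o recordsize=4k tank/vhd
cp /backup/*.vhd /tank/vhd/

# Check the resulting dedup ratio on the pool:
zpool list -o name,size,alloc,dedup tank
```

Smaller records give dedup more chances to match identical blocks across the VHDs, at the cost of more metadata.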
A few things come to mind...
1. A lot better than...what? Setting the recordsize to 4K got you some
deduplication but maybe the pertinent question is what were you
expecting?
2. Dedup is fairly new. I haven't seen any reports of experiments like
yours so...CONGRATULATIONS!! You're probably
Dedup is a key element for my purpose, because I am planning a central
repository for around 150 Windows Server 2008 (R2) servers, which would take a
lot less storage if they dedup well.
I was having this same problem with snv_134. I executed all the same commands
as you did. The cloned disk booted up to the Hostname: line and then died.
Booting with the -kv kernel option in GRUB, it died at a different point each
time, most commonly after:
srn0 is /pseudo/s...@0
What's
At the time we had it set up as 3 x 5-disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7-disk raidz2 plus a hot spare and slog, all inside the
cabinet. Since the disks are 1.5T, I was concerned about resilver times
My use case for opensolaris is as a storage server for a VM environment (we
also use EqualLogic, and soon an EMC CX4-120). To that end, I use iometer
within a VM, simulating my VM IO activity, with some balance given to easy
benchmarking. We have about 110 VMs across eight ESX hosts. Here is
on 23/04/2010 04:22 BM said the following:
On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson kgund...@teamcool.net wrote:
Greetings All:
Granted there has been much fear, uncertainty, and doubt following
Oracle's take over of Sun, but I ran across this on a FreeBSD mailing
list post dated
-Original Message-
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Friday, April 23, 2010 7:08 AM
We are currently porting over our existing Learning Lab Infrastructure
platform from MS Virtual Server to VBox + ZFS. When students
connect into
their lab environment it dynamically
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Actually, I find this very surprising:
Question posted:
http://lopsa.org/pipermail/tech/2010-April/004356.html
As the thread unfolds, it appears, although netapp may
I'm really new to ZFS and also RAID.
I have 3 hard disks: 500GB, 1TB, 1.5TB.
On each HD I want to create a 150GB partition plus the remaining space.
I want to create a raidz from the 3 x 150GB partitions. This is for my documents + photos.
As for the remaining space, I want to create my video library. This one doesn't need any
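One way that layout could be sketched, with assumed device and slice names (s0 = the 150GB slice, s1 = the remainder, created beforehand with format(1M)):

```shell
# Assumed devices: c1t0d0 = 500GB, c1t1d0 = 1TB, c1t2d0 = 1.5TB.
# raidz across the three equal 150GB slices for documents + photos:
zpool create docs raidz c1t0d0s0 c1t1d0s0 c1t2d0s0

# Remaining space on each disk as a plain (non-redundant) pool for video:
zpool create video c1t0d0s1 c1t1d0s1 c1t2d0s1
```

Note the video pool's total capacity is the sum of the leftover slices, but a single disk failure loses it, and raidz capacity is limited by the smallest member (here 150GB per slice, so roughly 300GB usable).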