How is the quality of the ZFS Linux port today? Is it comparable to Illumos,
or at least FreeBSD? Can I trust production data to it?
On Wed, Feb 27, 2013 at 5:22 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 26 Feb 2013, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM,
Install Nexenta on a Dell PowerEdge?
or one of these http://www.pogolinux.com/products/storage_director
On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald kmcdon...@egenera.com wrote:
I've seen the Nexenta and EON webpages, but I'm not looking to build my
own.
Is there anything out there I can
However, if you need to decide whether to use Xen, test your setup
before going into production, and ask your boss whether he can live with
innovative ... solutions ;-)
Thanks a lot for the informative reply; it has definitely been helpful.
I am, however, interested in the reliability of
Is anyone even using ZFS under Xen in production in some form? If so, what's
your impression of its reliability?
Regards
On Sun, May 17, 2009 at 2:16 PM, Ahmed Kamal
email.ahmedka...@googlemail.com wrote:
Hi zfs gurus,
I am wondering whether the reliability of Solaris/ZFS is still guaranteed if
I run ZFS not directly on real hardware, but under Xen
virtualization? The plan is to give the Xen guest raw physical access to
the disks. I remember zfs having problems with hardware
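For reference, raw-disk passthrough in a Xen domU is usually expressed with `phy:` entries in the guest config. A minimal sketch, with all device and target names assumed:

```
# Hypothetical domU config fragment: pass two whole disks through
# to the guest so ZFS inside the domU sees the raw block devices.
disk = [
    'phy:/dev/sdb,xvdb,w',
    'phy:/dev/sdc,xvdc,w',
]
```

Even with raw access, ZFS still cannot see below the hypervisor, so cache-flush behavior depends on how dom0 handles those devices.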
ZFS replication basics at http://cuddletech.com/blog/pivot/entry.php?id=984
Regards
On Sat, Mar 28, 2009 at 1:57 AM, Harry Putnam rea...@newsguy.com wrote:
[...]
Harry wrote:
Now I'm wondering if the export/import sub commands might not be a
good bit faster.
Ian Collins
The good news is that ZFS is getting popular enough on consumer-grade
hardware. The bad news is that said hardware has a different set of
failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
So does this basically mean zfs rolls back
Unmount is not sufficient.
Well, umount is not the right way to do it, so he'd be simulating a
power loss / system crash. That still doesn't explain why massive data loss
would occur. I would understand losing the last txg, but 90%, as the OP
reports?!
Did anyone share a script to send/recv a tree of zfs filesystems in
parallel, especially one where a cap on concurrency can be specified?
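Not aware of a shared script, but the concurrency cap is easy to get from `xargs -P`. A sketch of the pattern, with `echo` standing in for the real `zfs send | ssh ... zfs recv` pipeline, since dataset names and the target host are site-specific assumptions:

```shell
# Run up to 3 replication jobs at a time; each {} is one dataset name.
# In real use the inner command would be something like:
#   zfs send "{}@snap" | ssh backuphost zfs recv -F "backup/{}"
# (hypothetical names); echo stands in so the sketch runs anywhere.
printf '%s\n' tank/a tank/b tank/c tank/d | \
  xargs -P3 -I{} sh -c 'echo "replicating {}"'
```

In practice, `zfs list -H -r -o name -t filesystem tank` would generate the dataset list fed into the pipe.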
Richard, how fast were you taking those snapshots, how fast were the
syncs over the network. For example, assuming a snapshot every 10mins,
is it reasonable to expect to
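For context, a 10-minute snapshot/sync cycle usually boils down to an incremental-send loop; a hedged sketch, where `tank/data`, `backup/data`, and `backuphost` are assumptions:

```shell
#!/bin/sh
# Take a timestamped snapshot and send only the delta since the most
# recent existing snapshot. Bookkeeping is simplified for illustration.
NOW="tank/data@$(date +%Y%m%d%H%M)"
PREV=$(zfs list -H -t snapshot -o name -s creation -r tank/data | tail -1)
zfs snapshot "$NOW"
zfs send -i "$PREV" "$NOW" | ssh backuphost zfs recv backup/data
```

Whether a 10-minute cadence is sustainable then comes down to whether each incremental send finishes inside its window.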
Hi Jim,
The setup is not there anymore, however, I will share as much details
as I have documented. Could you please post the commands you have used
and any differences you think might be important. Did you ever test
with 2008.11 instead of SXCE?
I will probably be testing again soon. Any
Hi Jim,
Thanks for your informative reply. I am involved with kristof
(original poster) in the setup, please allow me to reply below
Was the following 'test' run during resynchronization mode or replication
mode?
Neither, testing was done while in logging mode. This was chosen to
simply avoid
You might want to look at AVS for realtime replication
http://www.opensolaris.org/os/project/avs/
However, I have had huge performance hits after enabling that. The
replicated volume runs at almost 10% of the speed of a normal one.
On Thu, Jan 15, 2009 at 1:28 PM, Ian Mather ian.mat...@northtyneside.gov.uk wrote:
Hi,
I have setup AVS replication between two zvols on two opensolaris-2008.11
nodes. I have been seeing BIG performance issues, so I tried to setup the
system to be as fast as possible using a couple of tricks. The detailed
setup and performance data are below:
* A 100G zvol has been setup
Well, I checked and it is 8k:
volblocksize 8K
Any other suggestions on how to begin debugging this issue?
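One knob worth double-checking is that `volblocksize` matches the workload at creation time, since it cannot be changed afterwards; a sketch with assumed names:

```shell
# volblocksize is fixed at creation; recreate the zvol to change it.
zfs create -V 100G -o volblocksize=8k tank/testvol
zfs get volblocksize,compression tank/testvol
```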
On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 15 Dec 2008, Ahmed Kamal wrote:
RandomWrite-8k: 0.9M/s
SingleStreamWriteDirect1m
Hi,
I have been doing some basic performance tests, and I am getting a big hit
when I run UFS over a zvol instead of directly using ZFS. Any hints or
explanations are very welcome. Here's the scenario: the machine has 30G RAM
and two IDE disks attached. The disks have 2 fdisk partitions (c4d0p2,
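For anyone reproducing this, the UFS-over-zvol case is roughly the following (pool name, size, and mount point are assumptions):

```shell
# Baseline: write to a plain ZFS filesystem.
zfs create tank/plain
# Comparison: UFS layered on a zvol in the same pool.
zfs create -V 20G tank/vol
newfs /dev/zvol/rdsk/tank/vol
mount /dev/zvol/dsk/tank/vol /mnt/ufs
```

The double layer means UFS metadata updates are funneled through the zvol's copy-on-write path, which is one plausible source of the hit.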
Hi,
Not sure if this is the best place to ask, but do Sun's new Amber Road
storage boxes have any kind of integration with ESX? Most importantly,
quiescing the VMs before snapshotting the zvols, and/or some level of
management integration through either the web UI or ESX's console? If there's
zfs list -t snapshot?
On Sat, Nov 22, 2008 at 1:14 AM, Pawel Tecza [EMAIL PROTECTED] wrote:
Hello All,
This is my zfs list:
# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool       10,5G  3,85G    61K  /rpool
rpool/ROOT  9,04G  3,85G
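For example, restricting `zfs list` to snapshots (pool name taken from the listing, recursion assumed to be what's wanted):

```shell
# -t snapshot shows only snapshots; -r recurses through children.
zfs list -r -t snapshot rpool
```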
For *nix rsync
For windows rsyncshare
http://www.nexenta.com/corp/index.php?option=com_remository&Itemid=77&func=startdown&id=18
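As an illustration of the rsync route, a typical incremental-backup invocation (paths and host purely hypothetical):

```shell
# -a preserves permissions/times, -v is verbose, --delete mirrors removals.
rsync -av --delete /export/home/ backuphost:/backup/home/
```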
On Sat, Oct 18, 2008 at 1:56 PM, Ares Drake [EMAIL PROTECTED]wrote:
Greetings.
I am currently looking into setting up a better backup solution for our
family.
I
Hi,
Unfortunately, every now and then someone ends up with a corrupt zpool and no
tools to fix it! This is due either to zfs bugs or to hardware lying about
whether the bits really hit the platters. I am evaluating what I should be
using for storing VMware ESX VM images (ext3 or zfs on NFS). I really
Thanks for the info. I am not really after big performance, I am already on
SATA and it's good enough for me. What I really really can't afford is data
loss. The CAD designs our engineers are working on can sometimes be really
worth a lot. But still we're a small company and would rather save and
Thanks for all the answers... Please find more questions below :)
- Good to know EMC filers do not have end-to-end checksums! What about NetApp?
- Any other limitations of the big two NAS vendors as compared to ZFS?
- I still don't have my original question answered, I want to somehow assess
the
I guess I am mostly interested in MTTDL for a ZFS system on whitebox hardware
(like Pogo), vs Data ONTAP on NetApp hardware. Any numbers?
On Tue, Sep 30, 2008 at 4:36 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Tue, 30 Sep 2008, Ahmed Kamal wrote:
- I still don't have my original
, Miles Nordin [EMAIL PROTECTED] wrote:
ak == Ahmed Kamal [EMAIL PROTECTED] writes:
ak I need to answer and weigh against the cost.
I suggest translating the reliability problems into a cost for
mitigating them: price the ZFS alternative as two systems, and keep
the second system offline
Intel mainstream (and indeed many tech companies') stuff is purposely
stratified from the enterprise stuff by cutting out features like ECC and
higher memory capacity and using different interface form factors.
Well I guess I am getting a Xeon anyway
There is nothing magical about SAS
Well, if you can probably afford more SATA drives for the purchase
price, you can put them in a striped-mirror set up, and that may help
things. If your disks are cheap you can afford to buy more of them
(space, heat, and power notwithstanding).
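A striped-mirror (RAID10-style) pool built from, say, six cheap SATA disks would look like this (device names assumed):

```shell
# Three 2-way mirrors striped together; reads spread across all disks,
# and any single disk in each mirror pair can fail without data loss.
zpool create tank mirror c1t0d0 c1t1d0 \
                  mirror c1t2d0 c1t3d0 \
                  mirror c1t4d0 c1t5d0
```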
Hmm, that's actually cool!
If I configure
I observe that there are no disk vendors supplying SATA disks
with speeds faster than 7,200 rpm. It is no wonder that a 10k rpm disk
outperforms a 7,200 rpm disk for random workloads. I'll attribute
this to intentional market segmentation by the industry rather than
a deficiency in the transfer
So, performance aside, does SAS have other benefits? Data integrity? How
would 8 SATA disks in RAID1 compare vs another 8 smaller SAS disks in raidz(2)?
Like apples and pomegranates. Both should be able to saturate a GbE link.
You're the expert, but isn't the 100M/s for streaming (not random)
performance? If it offers better performance and
MTTDL than 8 SATA in raidz2, I guess I will go with 8-SATA-RAID1 then!
Hope I'm not horribly mistaken :)
On Wed, Oct 1, 2008 at 3:18 AM, Tim [EMAIL PROTECTED] wrote:
On Tue, Sep 30, 2008 at 8:13 PM, Ahmed Kamal
[EMAIL PROTECTED] wrote:
So
Hi everyone,
We're a small Linux shop (20 users). I am currently using a Linux server to
host our 2TB of data. I am considering better options for our data storage
needs. I mostly need instant snapshots and better data protection. I have
been considering EMC NS20 filers and ZFS-based solutions.