It may depend on the firmware you're running. We've got a SAS1068E-based
card in a Dell R710 at the moment, connected to an external SAS JBOD, and
we did have problems with the as-shipped firmware.
However, we've upgraded that, and _so far_ haven't had further issues. I
didn't do the upgrade
Hi Everyone,
Is it possible to use send/recv to change the recordsize, or does each
file need to be individually recreated/copied within a given dataset?
Is there a way to check the recordsize of a given file, assuming that
the filesystem's recordsize was changed at some point?
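One way I know of to check a single file - hedged, since zdb output isn't
a stable interface, and the path and object number here are made up:

  ls -i /tank/data/somefile     # prints the file's object (inode) number, e.g. 12345
  zdb -ddddd tank/data 12345    # the "dblk" field is the block size actually in use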
Also - Am I
That's very interesting tech you've got there... :-) I have a couple of
questions, with apologies in advance if I missed them on the website...
I see the PCI card has an external power connector - can you explain
how/why that's required, as opposed to using an on-card battery or
similar? What
Thanks for the detailed response - further questions inline...
Christopher George wrote:
Excellent questions!
I see the PCI card has an external power connector - can you explain
how/why that's required, as opposed to using an on-card battery or
similar?
DDRdrive X1 ZIL
, 2010 at 11:57 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Wed, Jan 6 at 14:56, Tristan Ball wrote:
For those searching list archives, the SNV125-S2/40GB given below is not
based on the Intel controller.
I queried Kingston directly about this because there appears to be so
On 6/01/2010 3:00 AM, Roch wrote:
Richard Elling writes:
On Jan 3, 2010, at 11:27 PM, matthew patton wrote:
I find it baffling that RaidZ(2,3) was designed to split a record-size
block into N (N = # of member devices) pieces and send the
uselessly tiny requests to
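To make the arithmetic concrete (my own illustration, assuming a 5-disk
raidz1, i.e. 4 data disks plus one parity, and the default 128K recordsize):

  128K record / 4 data disks = 32K per-disk request
    8K record / 4 data disks =  2K per-disk request   <- the "uselessly tiny" case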
On 6/01/2010 7:19 AM, Richard Elling wrote:
If you are doing small, random reads on dozens of TB of data, then you've
got a much bigger problem on your hands... kinda like counting grains of
sand on the beach during low tide :-). Hopefully, you do not have to
randomly
update that data
For those searching list archives, the SNV125-S2/40GB given below is not
based on the Intel controller.
I queried Kingston directly about this because there appears to be so
much confusion (and I'm considering using these drives!), and I got back
that:
The V series uses a JMicron controller
The
To some extent it already does.
If what you're talking about is filesystems/datasets, then all
filesystems within a pool share the same free space, which is
functionally very similar to each filesystem within the pool being
thin-provisioned. To get a thick filesystem, you'd need to set at
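A minimal sketch of the thick case - my assumption of where that sentence
was going, with a hypothetical dataset name:

  zfs set reservation=100G tank/thick   # guarantees the space to this dataset
  zfs get reservation tank/thick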
Ack..
I've just re-read your original post. :-) It's clear you are talking
about support for thin devices behind the pool, not features inside the
pool itself.
Mea culpa.
So I guess we wait for trim to be fully supported.. :-)
T.
On 31/12/2009 8:09 AM, Tristan Ball wrote:
To some
I've got an opensolaris snv_118 machine that does nothing except serve
up NFS and iSCSI.
The machine has 8G of ram, and I've got an 80G SSD as L2ARC.
The ARC on this machine is currently sitting at around 2G, the kernel is
using around 5G, and I've got about 1G free. I've pulled this from a
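One way to pull those numbers, for anyone wanting to compare (not
necessarily the only way):

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max   # ARC size/target/max
  echo "::memstat" | mdb -k                                           # kernel memory breakdown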
Bob Friesenhahn wrote:
On Sun, 20 Dec 2009, Richard Elling wrote:
Given that I don't believe there is any other memory pressure on the
system, why isn't the ARC using that last 1G of ram?
Simon says, don't do that? ;-)
Yes, primarily since if there is no more memory immediately
Oops, should have sent to the list...
Richard Elling wrote:
On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:
I've got an opensolaris snv_118 machine that does nothing except
serve up NFS and iSCSI.
The machine has 8G of ram, and I've got an 80G SSD as L2ARC.
The ARC on this machine
I think the exception may be when doing a recursive snapshot - ZFS appears to
halt IO so that it can take all the snapshots at the same instant.
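For clarity, the case I mean is the atomic recursive form (hypothetical names):

  zfs snapshot -r tank@consistent-point   # every child dataset snapped at one instant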
At least, that's what it looked like to me. I've got an Opensolaris ZFS box
providing NFS to VMware, and I was getting SCSI timeouts within the
This is truly awesome news!
What's the best way to dedup existing datasets? Will send/recv work, or
do we just cp things around?
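To make the question concrete, the send/recv route I have in mind looks
something like this - names hypothetical, and I don't know yet whether the
received copy actually comes out deduped:

  zfs set dedup=on tank
  zfs snapshot tank/data@prededup
  zfs send tank/data@prededup | zfs recv tank/data-deduped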
Regards,
Tristan
Jeff Bonwick wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
I'm curious as to how send/recv intersects with dedupe... if I send/recv
a deduped filesystem, is the data sent in its de-duped form, i.e. just
sent once, followed by the pointers for subsequent dupe data, or is the
data sent in expanded form, with the recv side system then having to
redo
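For illustration only - if the stream itself can be deduped (the -D flag to
zfs send, where supported), presumably each duplicate block crosses the wire
just once:

  zfs send -D tank/fs@snap | ssh backuphost zfs recv backup/fs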
What makes you say that the X25-E's cache can't be disabled or flushed?
The net seems to be full of references to people who are disabling the
cache, or flushing it frequently, and then complaining about the
performance!
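For reference, the knob people seem to be poking is interactive, so just a
sketch:

  format -e      # expert mode: select the disk, then cache -> write_cache ->
                 # disable (or display, to check the current setting)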
T
Frédéric VANNIERE wrote:
The ZIL is a write-only log that is only
limitation I've
had with using it as an opensolaris storage system.
Regards,
Tristan.
Matthew Ahrens wrote:
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which
sends hourly snapshots to the other. This has been working well,
however as of today
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well, however as
of today, the receiving zfs process has started running extremely
slowly, and is running at 100% CPU on one core, completely in kernel
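In case anyone else hits this, the first diagnostic I'm trying is a plain
kernel profile (nothing zfs-specific about it):

  lockstat -kIW -D 20 sleep 10   # sample kernel stacks for 10s, show top 20 sites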
How long have you had them in production?
Were you able to adjust the TLER settings from within solaris?
Thanks,
Tristan.
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus Kovero
Sent: Friday, 11
the relative cost. For those periods that it is effective,
it really makes a difference too!
T.
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, 26 August 2009 3:48 PM
To: Tristan Ball
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Using consumer
I guess it depends on whether or not you class the various RAID
Edition drives as consumer? :-)
My one concern with these RE drives is that because they return
errors early rather than retrying, they may fault when a normal
consumer drive would have returned the data eventually. If the
Tristan.
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, 26 August 2009 2:08 PM
To: Tristan Ball
Cc: thomas; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Using consumer drives in a zraid2
On Tue, Aug 25, 2009 at 10:56 PM
:-) )
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, 26 August 2009 3:01 PM
To: Tristan Ball
Cc: thomas; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Using consumer drives in a zraid2
On Tue, Aug 25, 2009 at 11:38 PM, Tristan Ball
tristan.b
Marcus wrote:
- how to best handle broken disks/controllers without ZFS hanging or
being unable to replace the disk
A definite +1 here. I realise it's something that Sun probably considers
fixed by the disk/controller drivers; however, many of us are using
opensolaris on non-Sun hardware,
In my testing, VMware doesn't see the vm1 and vm2 filesystems. VMware
doesn't have an automounter, and doesn't traverse NFS4 sub-mounts
(whatever the formal name for them is). Actually, it doesn't support
NFS4 at all!
Regards,
Tristan.
-----Original Message-----
From:
Can anyone tell me why successive runs of zdb would show very
different values for the cksum column? I had thought these counters were
"since last clear", but that doesn't appear to be the case?
If I run zdb poolname, right at the end of the output, it lists pool
statistics:
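For comparison, the counters I'd have expected to behave that way (pool name
hypothetical):

  zpool status -v tank   # READ/WRITE/CKSUM, cumulative since the last clear
  zpool clear tank       # resets them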
Were those tests you mentioned on Raid-5/6/Raid-Z/z2 or on Mirrored
volumes of some kind?
We've found here that VM loads on raid 10 sata volumes, with relatively
high numbers of disks, actually work pretty well - and depending on the
size of the drives, you quite often get more usable space too. ;-)
Because it means you can create zfs snapshots from a non solaris/non
local client...
Like a linux nfs client, or a windows cifs client.
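For example, assuming the proposed read-write .zfs support, from a Linux NFS
client it would be as simple as (paths hypothetical):

  mkdir /net/zfshost/tank/data/.zfs/snapshot/before-upgrade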
T
dick hoogendijk wrote:
On Wed, 29 Jul 2009 17:34:53 -0700
Roman V Shaposhnik r...@sun.com wrote:
On the read-write front: wouldn't it be cool to be
The 128G Supertalent Ultradrive ME. The larger version of the drive
mentioned in the original post. Sorry, should have made that a little
clearer. :-)
T
Kyle McDonald wrote:
Tristan Ball wrote:
It just so happens I have one of the 128G and two of the 32G versions in
my drawer, waiting to go
Bob Friesenhahn wrote:
On Fri, 24 Jul 2009, Tristan Ball wrote:
I've used 8K IO sizes for all the stage one tests - I know I might get
it to go faster with a larger size, but I like to know how well systems
will do when I treat them badly!
The Stage_1_Ops_thru_run is interesting. 2000+ ops
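For anyone wanting to run something comparable, an 8K-record iozone
invocation looks roughly like this - file size and path illustrative:

  iozone -e -r 8k -s 4g -i 0 -i 1 -i 2 -f /tank/test/iozone.tmp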
It just so happens I have one of the 128G and two of the 32G versions in
my drawer, waiting to go into our DR disk array when it arrives.
I dropped the 128G into a spare Dell 745 (2GB ram) and used an Ubuntu
liveCD to run some simple iozone tests on it. I had some stability
issues with Iozone
Is the system otherwise responsive during the zfs sync cycles?
I ask because I think I'm seeing a similar thing - except that it's not
only other writers that block, it seems like other interrupts are
blocked. Pinging my zfs server in 1s intervals results in large delays
while the system
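A cheap way to check whether the stalls line up with the sync bursts, with a
hypothetical pool name:

  zpool iostat -v tank 1   # 1-second samples; watch for the periodic write burst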
According to the link below, VMware will only use a single TCP session
for NFS data, which means you're unlikely to get it to travel down more
than one interface on the VMware side, even if you can find a way to do
it on the solaris side.
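The workaround I've seen suggested (my understanding only): give the Solaris
box several IPs and point each datastore at a different one, so each gets its
own session. On the ESX side, with made-up addresses:

  esxcfg-nas -a -o 10.0.0.11 -s /tank/vm1 vm1_ds
  esxcfg-nas -a -o 10.0.0.12 -s /tank/vm2 vm2_ds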