On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote:
> On 01/28/10 11:13 PM, Lutz Schumann wrote:
>> While thinking about ZFS as the next generation filesystem without
>> limits I am wondering if the real world is ready for this kind of
>> incredible technology ...
>>
>> I'm actually speaking of h
On Jan 29, 2010, at 4:10 AM, Tiernan OToole wrote:
> thanks.
>
> I have looked at nexentastor, but i have a lot more drives than 2Tb... i know
> their nexentacore could be better suited... I think it's also based on
> OpenSolaris too, correct?
The current NexentaStor developer edition has a 4 TB
On Jan 29, 2010, at 9:12 AM, Scott Meilicke wrote:
> Link aggregation can use different algorithms to load balance. Using L4 (IP
> plus originating port I think), using a single client computer and the same
> protocol (NFS), but different origination ports has allowed me to saturate
> both NICS
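A minimal sketch of an L4-policy aggregation as Scott describes, for OpenSolaris-era dladm; the NIC names, aggregation name, and address below are placeholders for your hardware:

```shell
# Create a link aggregation hashed on L4 (IP + port), so different
# origination ports from one client can spread across both NICs.
dladm create-aggr -P L4 -l e1000g0 -l e1000g1 aggr0
ifconfig aggr0 plumb 192.168.1.10/24 up   # placeholder address
dladm show-aggr aggr0                     # verify ports and policy
```

The switch ports must also be configured for aggregation (LACP or static trunking) for this to work end to end.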
On Jan 28, 2010, at 4:58 PM, Tiernan OToole wrote:
> Good morning. This is more than likely a stupid question on this alias
> but I will ask anyway. I am building a media server in the house and
> am trying to figure out what os to install. I know it must have zfs
> support but can't figure if I s
On Jan 28, 2010, at 2:23 PM, Michelle Knight wrote:
> Hi Folks,
>
> As usual, trust me to come up with the unusual. I'm planning ahead for
> future expansion and running tests.
>
> Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade
> to anything other than 130, which gives
On Jan 28, 2010, at 10:54 AM, Lutz Schumann wrote:
> Actually I tested this.
>
> If I add an l2arc device to the syspool it is not used when issuing I/O to
> the data pool (note: on a root pool it must not be a whole disk, but only a
> slice of it, otherwise ZFS complains that root disks may not co
On Jan 27, 2010, at 12:34 PM, David Dyer-Bennet wrote:
>
> Google is working heavily with the philosophy that things WILL fail, so they
> plan for it, and have enough redundancy to survive it -- and then save lots
> of money by not paying for premium components. I like that approach.
Yes, it d
On Jan 27, 2010, at 12:25 PM, RayLicon wrote:
> Ok ...
>
> Given that ... yes, we all know that swapping is bad (thanks for the
> enlightenment).
>
> To Swap or not to Swap isn't related to this question, and besides, even if
> you don't page swap, other mechanisms can still claim swap space,
On Jan 24, 2010, at 8:26 PM, Frank Middleton wrote:
> What an entertaining discussion! Hope the following adds to the
> entertainment value :).
>
> Any comments on this Dec. 2005 study on disk failure and error rates?
> http://research.microsoft.com/apps/pubs/default.aspx?id=64599
>
> Seagate sa
On Jan 24, 2010, at 8:26 AM, R.G. Keen wrote:
>
> “Disk drives cost $100”: yes, I fully agree, with minor exceptions. End of
> marketing, which is where the cost per drive drops significantly, is
> different from end of life – I hope!
http://en.wikipedia.org/wiki/End-of-life_(product)
Some vend
On Jan 23, 2010, at 5:06 AM, Simon Breden wrote:
> Thanks a lot.
>
> I'd looked at SO many different RAID boxes and never had a good feeling about
> them from the point of data safety, that when I read the 'A Conversation with
> Jeff Bonwick and Bill Moore – The future of file systems' article
AIUI, this works as designed.
I think the best practice will be to add the L2ARC to syspool (nee rpool).
However, for current NexentaStor releases, you cannot add cache devices
to syspool.
Earlier I mentioned that this made me nervous. I no longer hold any
reservation against it. It should wor
On Jan 23, 2010, at 3:47 PM, Frank Cusack wrote:
> On January 23, 2010 1:20:13 PM -0800 Richard Elling
>> My theory is that drives cost $100.
>
> Obviously you're not talking about Sun drives. :)
Don't confuse cost with price :-)
-- richard
__
On Jan 23, 2010, at 8:04 AM, R.G. Keen wrote:
> Interesting question.
>
> The answer I came to, perhaps through lack of information and experience, is
> that there isn't a best 1.5tb drive. I decided that 1.5tb is too big, and
> that it's better to use more and smaller devices so I could get to
On Jan 23, 2010, at 12:12 PM, Bob Friesenhahn wrote:
> On Sat, 23 Jan 2010, A. Krijgsman wrote:
>
>> Just to jump in.
>>
>> Did you guys ever consider to shortstroke a larger sata disk?
>> I'm not familiar with this, but read a lot about it;
>>
>> Since the drive cache gets larger on the bigger
Another approach is to make a new virtual disk and attach it as a mirror.
Once the resilver is complete, detach and destroy the old virtual disk.
Normal procedures for bootable disks still apply.
This works because ZFS only resilvers data.
-- richard
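The attach/detach migration above can be sketched as follows; the pool and device names are hypothetical placeholders:

```shell
# Migrate pool "rpool" from an old virtual disk (c0t0d0s0) to a new
# one (c0t1d0s0) via a temporary mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0   # new disk resilvers from the old
zpool status rpool                     # wait for "resilver completed"
zpool detach rpool c0t0d0s0            # then drop the old virtual disk
# Bootable pools still need a boot block on the new disk, e.g. on x86:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```

Because only allocated data is resilvered, this is usually much faster than a block-for-block copy of the whole virtual disk.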
On Jan 22, 2010, at 12:42 PM, Cindy Swearingen w
On Jan 21, 2010, at 4:32 PM, Daniel Carosone wrote:
>> I propose a best practice of adding the cache device to rpool and be
>> happy.
>
> It is *still* not that simple. Forget my slow disks caching an even
> slower pool (which is still fast enough for my needs, thanks to the
> cache and zil).
>
[Richard makes a hobby of confusing Dan :-)]
more below..
On Jan 21, 2010, at 1:13 PM, Daniel Carosone wrote:
> On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote:
>> On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
>>
>>> On Wed, Jan 20, 2010 at 03:20:20
On Jan 21, 2010, at 1:55 PM, Michelle Knight wrote:
> The error messages are in the original post. They are...
> /mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2
> Hard Disk.vhd: File too large
> /mirror2/applications/virtualboximages/xp/xp.tar.bz2: File too large
>
CC'ed to ext3-disc...@opensolaris.org because this is an ext3 on Solaris
issue. ZFS has no problem with large files, but the older ext3 did.
See also the ext3 project page and documentation, especially
http://hub.opensolaris.org/bin/view/Project+ext3/Project_status
-- richard
On Jan 21, 2010,
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
> Hi all,
>
> I'm going to be trying out some tests using b130 for dedup on a server with
> about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
> I'm trying to get a handle on is how to estimate the memory overhead requir
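A back-of-envelope sketch of the memory estimate being asked about. Both inputs are assumptions: the average block size (files smaller than the recordsize allocate smaller blocks, which inflates the entry count) and the in-core bytes per dedup-table entry, for which figures of roughly 150-380 bytes were quoted at the time:

```python
# Rough DDT memory estimate for ~1.7 TB of deduplicated data.
data_bytes = 1.7 * 1000**4      # usable storage from the post
avg_block = 128 * 1024          # default recordsize; best case
bytes_per_entry = 320           # assumed in-core DDT entry size

n_blocks = data_bytes / avg_block
ddt_ram = n_blocks * bytes_per_entry
print(f"~{n_blocks/1e6:.0f}M unique blocks, ~{ddt_ram/1024**3:.1f} GiB of DDT")
```

If the DDT does not fit in ARC it spills to disk (or L2ARC), and dedup performance drops sharply, so this estimate is worth doing before enabling dedup.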
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
> On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
>> Though the ARC case, PSARC/2007/618 is "unpublished," I gather from
>> googling and the source that L2ARC devices are considered auxiliary,
>>
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote:
> >> Until you try to pick one up and put it in a fire safe!
>
> >Then you backup to tape from x4540 whatever data you need.
> >In case of enterprise products you save on licensing here as you need a one
> >client license per x4540 but in fact can
On Jan 20, 2010, at 8:14 PM, Brad wrote:
> I was reading your old posts about load-shares
> http://opensolaris.org/jive/thread.jspa?messageID=294580 .
>
> So between raidz and load-share "striping", raidz stripes a file system block
> evenly across each vdev but with load sharing the file syst
Though the ARC case, PSARC/2007/618 is "unpublished," I gather from
googling and the source that L2ARC devices are considered auxiliary,
in the same category as spares. If so, then it is perfectly reasonable to
expect that it gets picked up regardless of the GUID. This also implies
that it is share
Hi Lutz,
On Jan 20, 2010, at 3:17 AM, Lutz Schumann wrote:
> Hello,
>
> we tested clustering with ZFS and the setup looks like this:
>
> - 2 head nodes (nodea, nodeb)
> - head nodes contain l2arc devices (nodea_l2arc, nodeb_l2arc)
This makes me nervous. I suspect this is not in the typical Q
Comment below. Perhaps someone from Sun's ZFS team can fill in the
blanks, too.
On Jan 20, 2010, at 3:34 AM, Lutz Schumann wrote:
> Actually I found some time (and reason) to test this.
>
> Environment:
> - 1 osol server
> - one SLES10 iSCSI Target
> - two LUNs exported via iSCSI to the OSol
On Jan 20, 2010, at 3:15 AM, Joerg Schilling wrote:
> Richard Elling wrote:
>
>>>
>>> ufsdump/restore was perfect in that regard. The lack of equivalent
>>> functionality is a big problem for the situations where this functionality
>>> is a business
On Jan 19, 2010, at 4:26 PM, Allen Eastwood wrote:
>> Message: 3
>> Date: Tue, 19 Jan 2010 15:48:52 -0500
>> From: Miles Nordin
>> To: zfs-discuss@opensolaris.org
>> Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability?
>> Message-ID:
>> Content-Type: text/plain; charset="us-asci
On Jan 19, 2010, at 1:53 AM, Julian Regel wrote:
> > When we brought it up last time, I think we found no one knows of a
> > userland tool similar to 'ufsdump' that's capable of serializing a ZFS
> > along with holes, large files, ``attribute'' forks, windows ACL's, and
> > checksums of its own, an
On Jan 19, 2010, at 4:36 AM, Jesus Cea wrote:
> On 01/19/2010 01:14 AM, Richard Elling wrote:
>> For example, b129
>> includes a fix for CR6869229, zfs should switch to shiny new metaslabs more
>> frequently.
>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_i
On Jan 18, 2010, at 3:25 PM, Erik Trimble wrote:
> Given my (imperfect) understanding of the internals of ZFS, the non-ZIL
> portions of the reserved space are there mostly to insure that there is
> sufficient (reasonably) contiguous space for doing COW. Hopefully, once BP
> rewrite materialize
On Jan 18, 2010, at 7:55 AM, Jesus Cea wrote:
> zpool and zfs report different free space because zfs takes into account
> an internal reservation of 32MB or 1/64 of the capacity of the pool,
> whichever is bigger.
This space is also used for the ZIL.
> So in a 2TB Harddisk, the reservation would be 3
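The reservation rule quoted above (32 MB or 1/64 of pool capacity, the bigger of the two) is easy to sketch; the function name is just illustrative:

```python
def slop_reservation(pool_bytes):
    # Reservation = max(32 MiB, 1/64 of pool capacity), per the post above.
    return max(32 * 1024**2, pool_bytes // 64)

print(slop_reservation(2 * 1000**4) / 1e9)   # 2 TB pool -> 31.25 GB reserved
print(slop_reservation(1024**3) / 1024**2)   # 1 GiB pool -> the 32 MiB floor
```

This is why zfs reports less free space than zpool: the 2 TB case hides about 31 GB from the filesystem view.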
On Jan 18, 2010, at 11:04 AM, Miles Nordin wrote:
...
> Another problem is that the snv_112 man page says this:
>
> -8<-
> The format of the stream is evolving. No backwards compatibility
> is guaranteed. You may not be able to receive
> your streams on future ve
On Jan 18, 2010, at 10:22 AM, Mr. T Doodle wrote:
> I would like some opinions on what people are doing in regards to configuring
> ZFS for root/boot drives:
>
> 1) If you have onboard RAID controllers are you using them then creating the
> ZFS pool (mirrored from hardware)?
I let ZFS do the m
On Jan 17, 2010, at 11:59 AM, Tristan Ball wrote:
> Hi Everyone,
>
> Is it possible to use send/recv to change the recordsize, or does each file
> need to be individually recreated/copied within a given dataset?
Yes. The former does the latter.
> Is there a way to check the recordsize of a gi
On Jan 16, 2010, at 10:03 PM, Travis Tabbal wrote:
> Hmm... got it working after a reboot. Odd that it had problems before that. I
> was able to rename the pools and the system seems to be running well now.
> Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get
> copied ove
On Jan 17, 2010, at 2:38 AM, Edward Ned Harvey wrote:
>>> Personally, I use "zfs send | zfs receive" to an external disk.
>> Initially a
>>> full image, and later incrementals.
>>
>> Do these incrementals go into the same filesystem that received the
>> original zfs stream?
>
> Yes. In fact, I
On Jan 14, 2010, at 4:02 PM, Richard Elling wrote:
> That is a simple performance model for small, random reads. The ZIL
> is a write-only workload, so the model will not apply.
BTW, it is a Good Thing (tm) the small, random read model does not
apply to the ZIL.
-- r
On Jan 14, 2010, at 3:59 PM, Ray Van Dolson wrote:
> On Thu, Jan 14, 2010 at 03:55:20PM -0800, Ray Van Dolson wrote:
>> On Thu, Jan 14, 2010 at 03:41:17PM -0800, Richard Elling wrote:
>>>> Consider a pool of 3x 2TB SATA disks in RAIZ1, you would roughly
>>>>
On Jan 14, 2010, at 10:58 AM, Jeffry Molanus wrote:
> Hi all,
>
> Are there any recommendations regarding min IOPS the backing storage pool
> needs to have when flushing the SSD ZIL to the pool?
Pedantically, as many as you can afford :-) The DDRdrive folks sell IOPS at
200 IOPS/$.
Sometimes
On Jan 14, 2010, at 11:02 AM, Christopher George wrote:
>> That's kind of an overstatement. NVRAM backed by on-board Li-Ion
>> batteries has been used in storage industry for years;
>
> Respectfully, I stand by my three points of Li-Ion batteries as they relate
> to enterprise class NVRAM: igniti
On Jan 14, 2010, at 11:09 AM, Mr. T Doodle wrote:
> I am considering RAIDZ or a 2-way mirror with a spare.
>
> I have 6 disks and would like the best possible performance and reliability
> and not really concerned with disk space.
>
> My thought was a 2 disk 2-way mirror with a spare.
>
> Woul
additional clarification ...
On Jan 14, 2010, at 8:49 AM, Richard Elling wrote:
> On Jan 14, 2010, at 6:41 AM, Gary Mills wrote:
>
>> On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
>>>
>>> Gary Mills writes:
>>>>
>>>> Yes, I under
On Jan 14, 2010, at 6:41 AM, Gary Mills wrote:
> On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
>>
>> Gary Mills writes:
>>>
>>> Yes, I understand that, but do filesystems have separate queues of any
>>> sort within the ZIL? If not, would it help to put the database
>>> filesystems into
On Jan 12, 2010, at 7:46 PM, Brad wrote:
> Richard,
>
> "Yes, write cache is enabled by default, depending on the pool configuration."
> Is it enabled for a striped (mirrored configuration) zpool? I'm asking
> because of a concern I've read on this forum about a problem with SSDs (and
> disks)
On Jan 12, 2010, at 2:54 PM, Ed Spencer wrote:
> We have a zpool made of 4 512g iscsi luns located on a network appliance.
> We are seeing poor read performance from the zfs pool.
> The release of solaris we are using is:
> Solaris 10 10/09 s10s_u8wos_08a SPARC
>
> The server itself is a T2000
>
On Jan 12, 2010, at 12:37 PM, Gary Mills wrote:
> On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
>> On Tue, 12 Jan 2010, Gary Mills wrote:
>>>
>>> Is moving the databases (IMAP metadata) to a separate ZFS filesystem
>>> likely to improve performance? I've heard that this is imp
On Jan 12, 2010, at 2:53 AM, Brad wrote:
> Has anyone worked with a x4500/x4540 and know if the internal raid
> controllers have a bbu? I'm concerned that we won't be able to turn off the
> write-cache on the internal hds and SSDs to prevent data corruption in case
> of a power failure.
Yes, w
On Jan 11, 2010, at 4:42 PM, Daniel Carosone wrote:
> I have a netbook with a small internal ssd as rpool. I have an
> external usb HDD with much larger storage, as a separate pool, which
> is sometimes attached to the netbook.
>
> I created a zvol on the external pool, the same size as the inte
comment below...
On Jan 11, 2010, at 10:00 AM, Lutz Schumann wrote:
> Ok, tested this myself ...
>
> (same hardware used for both tests)
>
> OpenSolaris svn_104 (actually Nexenta Core 2):
>
> 100 Snaps
>
> r...@nexenta:/volumes# time for i in $(seq 1 100); do zfs snapshot
> ssd
Good question. Zmanda seems to be a popular open source solution with
commercial licenses and support available. We try to keep the Best Practices
Guide up to date on this topic:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Using_ZFS_With_Enterprise_Backup_Solutions
Ad
On Jan 8, 2010, at 7:49 PM, bank kus wrote:
> dd if=/dev/urandom of=largefile.txt bs=1G count=8
>
> cp largefile.txt ./test/1.txt &
> cp largefile.txt ./test/2.txt &
>
> Thats it now the system is totally unusable after launching the two 8G
> copies. Until these copies finish no other applicati
On Jan 9, 2010, at 1:32 AM, Lutz Schumann wrote:
Depends.
a) Pool design
5 x SSD as raidZ = 4 SSD space - read I/O performance of one drive
Adding 5 cheap 40 GB L2ARC device (which are pooled) increases the
read performance for your working window of 200 GB.
An interesting thing happens when
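The capacity/performance model implied above can be written down: in a single-parity raidz, each record is spread across all the data drives, so usable space scales with n-1 drives while small random-read IOPS stay at roughly one drive's worth. The drive figures below are hypothetical:

```python
def raidz1_model(n_drives, drive_gb, drive_read_iops):
    # Single-parity raidz: usable space of n-1 drives, but every drive
    # participates in each record read, so random-read IOPS ~ one drive.
    return (n_drives - 1) * drive_gb, drive_read_iops

space, iops = raidz1_model(5, 80, 3000)   # e.g. five 80 GB SSDs @ 3000 IOPS
print(space, iops)
```

This is why adding pooled L2ARC devices helps: cached reads bypass the one-drive IOPS ceiling of the raidz vdev.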
On Jan 8, 2010, at 6:20 AM, Frank Batschulat (Home) wrote:
On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and
thus there's a good chance
that these CHKSUM errors must have a common sou
On Jan 7, 2010, at 12:02 PM, Anil wrote:
I *am* talking about situations where physical RAM is used up. So
definitely the SSD could be touched quite a bit when used as a rpool
- for pages in/out.
In the cases where rpool does not serve user data (eg. home directories
and databases are not i
I have posted my ZFS Tutorial slides from USENIX LISA09 on
slideshare.net.
You will notice that there is no real material on dedup. The reason
is that
dedup was not yet released when the materials were created. Everything
in the slides is publicly known information and, perhaps by chance,
On Jan 6, 2010, at 11:09 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote:
On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
Rather, ZFS works very nicely with "hardware
On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
iSCSI, et.al. You can happily add the
I'm not sure how ZFS works very nicely with say for
On Jan 6, 2010, at 1:30 PM, Wes Felter wrote:
Michael Herf wrote:
I agree that RAID-DP is much more scalable for reads than RAIDZx, and
this basically turns into a cost concern at scale.
The raw cost/GB for ZFS is much lower, so even a 3-way mirror could
be
used instead of netapp. But this
Note to self: drink coffee before posting :-)
Thanks Glenn, et.al.
-- richard
On Jan 6, 2010, at 9:54 AM, Glenn Lagasse wrote:
* Richard Elling (richard.ell...@gmail.com) wrote:
Hi Pradeep,
This is the ZFS forum. You might have better luck on the caiman-discuss
forum which is where the
Hi Pradeep,
This is the ZFS forum. You might have better luck on the caiman-discuss
forum which is where the folks who work on the installers hang out.
-- richard
On Jan 6, 2010, at 5:26 AM, Pradeep wrote:
Hi ,
I am trying to install solaris10 update8 on a san array using
solaris
jumpst
On Jan 5, 2010, at 11:56 AM, Tristan Ball wrote:
On 6/01/2010 3:00 AM, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects ar
On Jan 5, 2010, at 11:30 AM, Robert Milkowski wrote:
On 05/01/2010 18:49, Richard Elling wrote:
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
The problem is that while RAID-Z is really good for some workloads
it is really bad for others.
Sometimes having L2ARC might effectively
On Jan 5, 2010, at 8:52 AM, Daniel Rock wrote:
On 05.01.2010 16:22, Mikko Lammi wrote:
However when we deleted some other files from the volume and
managed to
raise free disk space from 4 GB to 10 GB, the "rm -rf directory"
method
started to perform significantly faster. Now it's deleting
On Jan 4, 2010, at 7:08 PM, Brad wrote:
Hi Adam,
From your picture, it looks like the data is distributed evenly
(with the exception of parity) across each spindle then wrapping
around again (final 4K) - is this one single write operation or two?
| P | D00 | D01 | D02 | D03 | D04 | D
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
On Tue, January 5, 2010 10:01, Richard Elling wrote:
OTOH, if you can reboot you can also run the latest
b130 livecd which has faster stat().
How much faster is it? He estimated 250 days to rm -rf them; so 10x
faster would get that down
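The arithmetic behind that question is simple enough to write out, using the 60 million files and the 250-day estimate from the thread:

```python
files = 60_000_000                  # directory size from the thread
days_estimated = 250                # the rm -rf estimate quoted above
per_second = files / (days_estimated * 86400)
print(f"{per_second:.1f} files/s")  # ~2.8 files/s at the estimated rate

for speedup in (10, 100):
    print(f"{speedup}x faster stat(): ~{days_estimated / speedup:.1f} days")
```

Even a 10x faster stat() only brings 250 days down to 25, which is why approaches other than rm -rf (e.g. destroying the dataset) come up in the thread.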
On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote:
I didn't mean to destroy the pool. I used zpool destroy on a zvol,
when I should have used zfs destroy.
When I used zpool destroy -f mypool/myvolume the machine hard locked
after about 20 minutes.
This would be a bug. "zpool destroy" should on
On Jan 5, 2010, at 2:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately
now that
we need to get
On Jan 4, 2010, at 6:40 AM, Carl Rathman wrote:
I have a zpool raidz1 array (called storage) that I created under
snv_118.
I then created a zfs filesystem called storage/vmware which I shared
out via iscsi.
I then deleted the vmware filesystem, using 'zpool destroy -f
storage/vmware' -- w
On Jan 4, 2010, at 10:35 AM, Thomas Burgess wrote:
slightly outside of my price range.
I'll either do without or wait till they drop in priceis there a
"second best" option or is this pretty much it?
If you need the separate log, then you can figure the relative latency
gain for latency
On Jan 4, 2010, at 10:26 AM, David Dyer-Bennet wrote:
I initialized a new whole-disk pool on an external USB drive, and
then did
zfs send from my big data pool and zfs recv onto the new external
pool.
Sometimes this fails, but this time it completed. Zpool status
showed no
errors on the e
On Jan 3, 2010, at 11:27 PM, matthew patton wrote:
I find it baffling that RaidZ(2,3) was designed to split a record-size
block into N (N=# of member devices) pieces and send the
uselessly tiny requests to spinning rust when we know the massive
delays entailed in head seeks and rotational d
On Jan 4, 2010, at 10:00 AM, Thomas Burgess wrote:
I'm not 100% sure i'm going to need a separate SSD for my ZIL but if
i did want to look for one, i was wondering if anyone could suggest/
recommend a few budget options.
Start with zilstat, which will help you determine if your workload uses
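zilstat is a DTrace-based script distributed separately from the base OS; the invocation below is a sketch from memory, so check the script's own header for the exact options:

```shell
# Sample ZIL activity to see whether a separate log device would help.
./zilstat.ksh 10 6        # assumed usage: 10-second interval, six samples
# Nonzero byte/op counts mean synchronous writes are hitting the ZIL,
# so a fast slog could reduce latency; all zeros mean it would sit idle.
```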
On Jan 3, 2010, at 4:05 PM, Jack Kielsmeier wrote:
With L2arc, no such redundancy is needed. So, with a $100 SSD, if
you can get 8x the performance out of your dedup'd dataset, and you
don't have to worry about "what if the device fails", I'd call that
an awesome investment.
AFAIK, the L
On Jan 2, 2010, at 1:47 AM, Andras Spitzer wrote:
Mike,
As far as I know only Hitachi is using such a huge chunk size :
"So each vendor’s implementation of TP uses a different block size.
HDS use 42MB on the USP, EMC use 768KB on DMX, IBM allow a variable
size from 32KB to 256KB on the SVC
On Jan 1, 2010, at 6:33 PM, Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Erik Trimble wrote:
Maybe it's approaching time for vendors to just produce really
stupid SSDs: that is, ones that just do wear-leveling, and expose
their true page-size info (e.g. for MLC, how many blocks of X size
h
On Jan 1, 2010, at 2:23 PM, tom wagner wrote:
Yeah, still no joy. I moved the disks to another machine altogether
with 8gb and a quad core intel versus the dual core amd I was using
and it still just hangs the box on import. this time I did a nohup
zpool import -fFX vault after booting off
On Jan 1, 2010, at 8:11 AM, R.G. Keen wrote:
On Dec 31, 2009, at 6:14 PM, Richard Elling wrote:
Some nits:
disks aren't marked as semi-bad, but if ZFS has trouble with a
block, it will try to not use the block again. So there are two
levels
of recovery at work: whole device and block
On Jan 1, 2010, at 4:57 AM, LevT wrote:
Hi
(snv_130) created zfs pool storage (a mirror of two whole disks)
zfs created storage/iscsivol, made some tests, wrote some GBs
zfs created storage/mynas filesystem
(sharesmb
dedup=on
compression=on)
FILLED the storage/mynas
tried to ZFS DESTROY m
On Jan 1, 2010, at 11:28 AM, Bob Friesenhahn wrote:
On Fri, 1 Jan 2010, Al Hopper wrote:
Interesting article - rumor has it that this is the same controller
that Seagate will use in its upcoming enterprise level SSDs:
http://anandtech.com/storage/showdoc.aspx?i=3702
It reads like SandForce
On Dec 31, 2009, at 12:59 PM, Ragnar Sundblad wrote:
Flash SSDs actually always remap new writes into an
only-append-to-new-pages style, pretty much as ZFS does itself.
So for a SSD there is no big difference between ZFS and
filesystems as UFS, NTFS, HFS+ et al, on the flash level they
all work th
On Dec 31, 2009, at 6:14 PM, R.G. Keen wrote:
On Thu, 31 Dec 2009, Bob Friesenhahn wrote:
I like the nice and short answer from this "Bob
Friesen" fellow the
best. :-)
It was succinct, wasn't it? 8-)
Sorry - I pulled the attribution from the ID, not the
signature which was waiting below. DOH!
[I TRIMmed the thread a bit ;-)]
On Dec 31, 2009, at 1:43 AM, Ragnar Sundblad wrote:
On 31 dec 2009, at 06.01, Richard Elling wrote:
In a world with copy-on-write and without snapshots, it is obvious
that
there will be a lot of blocks running around that are no longer in
use.
Snapshots
On Dec 31, 2009, at 1:43 AM, Andras Spitzer wrote:
Let me sum up my thoughts in this topic.
To Richard [relling] : I agree with you this topic is even more
confusing if we are not careful enough to specify exactly what we
are talking about. Thin provision can be done on multiple layers,
a
On Dec 31, 2009, at 2:49 AM, Robert Milkowski wrote:
judging by a *very* quick glance it looks like you have an issue
with c3t0d0 device which is responding very slowly.
Yes, there is an I/O stuck on the device which is not getting serviced.
See below...
--
Robert Milkowski
http://milek.
On Dec 30, 2009, at 9:35 PM, Ross Walker wrote:
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow,
~35-44MB/s at 1MB blocksize writes. I then teste
On Dec 30, 2009, at 2:24 PM, Ragnar Sundblad wrote:
On 30 dec 2009, at 22.45, Richard Elling wrote:
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, if it's worth it or not. I guess
the question is always who are the targets for ZFS
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, if it's worth it or not. I guess the
question is always who are the targets for ZFS (I assume everyone,
though in reality priorities has to set up as the developer
resources are limited). For a ho
On Dec 30, 2009, at 12:41 PM, Tomas Ögren wrote:
On 30 December, 2009 - Dennis Yurichev sent me these 0,7K bytes:
Hi.
Why can't each file also have an "expiration date/time" field, e.g. the
date/time when the operating system will delete it automatically?
This could be usable for backups, camera raw fi
now this is getting interesting :-)...
On Dec 30, 2009, at 12:13 PM, Mike Gerdts wrote:
On Wed, Dec 30, 2009 at 1:40 PM, Richard Elling
wrote:
On Dec 30, 2009, at 10:53 AM, Andras Spitzer wrote:
Devzero,
Unfortunately that was my assumption as well. I don't have source
level
know
On Dec 30, 2009, at 10:53 AM, Andras Spitzer wrote:
Devzero,
Unfortunately that was my assumption as well. I don't have source
level knowledge of ZFS, though based on what I know it wouldn't be
an easy way to do it. I'm not even sure it's only a technical
question, but a design question,
On Dec 30, 2009, at 11:01 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
Just curious, but in your "ideal" situation, is it considered best
to use 1 controller for each vdev or use a different controller for
each device in the vdev (i'd guess the latter but ive been wr
On Dec 30, 2009, at 10:56 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Richard Elling wrote:
He's limited by GbE, which can only do 100 MB/s or so...
the PCI busses, bridges, memory, controllers, and disks will
be mostly loafing, from a bandwidth perspective. In other
words, don
On Dec 30, 2009, at 10:26 AM, tom wagner wrote:
Yeah, still no joy on getting my pool back. I think I might have to
try grabbing another server with a lot more memory and slapping the
HBA and the drives in that. Can ZFS deal with a controller change?
Yes.
-- richard
On Dec 30, 2009, at 10:17 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
and, onboard with 6 sata ports... so what would be the best
method of connecting the drives if i go with 4 raidz vdevs or 5
raidz vdevs?
Try to distribute the raidz vdevs as evenly as possib
On Dec 30, 2009, at 9:35 AM, Bob Friesenhahn wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
Some important points to consider are that every write to a raidz
vdev must be synchronous. In other words, the write needs to
complete on all the drives in the stripe before the write may
return
On Dec 30, 2009, at 7:50 AM, Thomas Burgess wrote:
ok, but how should i connect the drives across the controllers?
Don't worry about the controllers. They are at least an order of
magnitude more reliable than the disks and if you are using HDDs,
then you will have plenty of performance.
-- ri