Tim Cook wrote:
On Sun, Jan 3, 2010 at 6:58 PM, Jerome Warnier jwarn...@beeznest.net wrote:
Hi,
I'm sharing ZFS filesystems via SMB.
I know how to restrict access to it to some hosts (and users), but did
not find any way to forbid the smb protocol
On Mon, Jan 4 at 22:01, Thomas Burgess wrote:
I guess I got some bad advice then
I was told the Kingston SNV125-S2 used almost exactly the same hardware as
an X25-M and should be considered the poor man's X25-M
...
Right, I couldn't find any of the 40 GB drives in stock so I ordered the 64
On Mon, Jan 4 at 16:43, Wes Felter wrote:
Eric D. Mudama wrote:
I am not convinced that a general purpose CPU, running other software
in parallel, will be able to be timely and responsive enough to
maximize bandwidth in an SSD controller without specialized hardware
support.
Fusion-io would
The SNV125-S2/40GB is the "half an X25-M" drive, which can often be
found as a bare OEM drive for about $85 w/ rebate.
Kingston does sell rebranded Intel SLC drives as well, but under a
different model number: SNE-125S2/32 or SNE-125S2/64. I don't believe
the 64GB Kingston MLC (SNV-125S2/64)
600? I've heard 1.5GBps reported.
On 1/5/10, Eric D. Mudama edmud...@bounceswoosh.org wrote:
On Mon, Jan 4 at 16:43, Wes Felter wrote:
Eric D. Mudama wrote:
I am not convinced that a general purpose CPU, running other software
in parallel, will be able to be timely and responsive enough to
I replayed a bunch of filesystems in order to get dedupe benefits.
Only thing is, a couple of them are rolled back to November or so (and
I didn't notice before destroying the old copy).
I used something like:
zfs snapshot pool/f...@dd
zfs send -Rp pool/f...@dd | zfs recv -d pool/fs2
(after
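For anyone repeating this, a minimal sketch of that replay workflow with placeholder dataset names (pool/fs and pool/fs2 stand in for the poster's elided names; the receive target is assumed to exist):
# take a recursive snapshot so the replicated (-R) stream has the snapshot
# on every descendant dataset
zfs snapshot -r pool/fs@dd
# send the whole tree with its properties and receive it under the new parent
zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2
# compare space usage before destroying or renaming the old copy
zfs list -r pool/fs pool/fs2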
Chris Du dilid...@gmail.com wrote:
You can use the utility to erase all blocks and regain performance, but it's
a manual process and quite complex. Windows 7 supports TRIM; if the SSD firmware
also supports it, the process runs in the background so you will not notice
performance degradation. I
Michael Herf wrote:
I replayed a bunch of filesystems in order to get dedupe benefits.
Only thing is, a couple of them are rolled back to November or so (and
I didn't notice before destroying the old copy).
I used something like:
zfs snapshot pool/f...@dd
zfs send -Rp pool/f...@dd | zfs recv
I didn't use -v, so I don't know.
I just waited until the process exited, assuming it would succeed or
fail. The sizes looked equivalent, so I went ahead with the destroy,
rename.
For the jobs a couple weeks ago, I turned off the snapshot service.
For this one, I probably left it on. Anything
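If the snapshot service in question is the standard OpenSolaris auto-snapshot (Time Slider) service, a rough sketch of pausing it around a big send/recv job, assuming the stock SMF instance names:
# see which auto-snapshot instances are online
svcs | grep auto-snapshot
# disable them so no new snapshots land mid-replication
svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
# ... run the zfs send | zfs recv job here ...
# re-enable afterwards
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily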
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of them (because they eat 80% of disk space) it
Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of
Hi, while not providing complete solution, I'd suggest turning atime off so
find/rm does not change access time and possibly destroying unnecessary
snapshots before removing files, should be quicker.
Yours
Markus Kovero
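A minimal sketch of Markus's suggestion above; tank/data and the snapshot name are placeholders for the dataset holding the huge directory:
# stop access-time updates so the mass delete doesn't rewrite metadata
zfs set atime=off tank/data
# list snapshots that still reference the doomed files; deleting the files
# frees no space while a snapshot holds them
zfs list -t snapshot -r tank/data
zfs destroy tank/data@oldsnap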
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues.
Juergen Nickelsen n...@jnickelsen.de wrote:
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes:
The NetApp patents contain claims on ideas that I invented for my Diploma
thesis work between 1989 and 1991, so the NetApp patents only describe prior
art. The new ideas
On Tue, January 5, 2010 05:34, Mikko Lammi wrote:
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of
On Tue, January 5, 2010 05:34, Mikko Lammi wrote:
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of
On Tue, January 5, 2010 17:08, David Magda wrote:
On Tue, January 5, 2010 05:34, Mikko Lammi wrote:
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any
On Mon, 4 Jan 2010, Tony Russell wrote:
I am under the impression that dedupe is still only in OpenSolaris
and that support for dedupe is limited or nonexistent. Is this
true? I would like to use ZFS and the dedupe capability to store
multiple virtual machine images. The problem is that
On Tue, January 5, 2010 10:12, casper@sun.com wrote:
How about creating a new data set, moving the directory into it, and then
destroying it?
Assuming the directory in question is /opt/MYapp/data:
1. zfs create rpool/junk
2. mv /opt/MYapp/data /rpool/junk/
3. zfs destroy rpool/junk
On Tue, January 5, 2010 10:12, casper@sun.com wrote:
How about creating a new data set, moving the directory into it, and then
destroying it?
Assuming the directory in question is /opt/MYapp/data:
1. zfs create rpool/junk
2. mv /opt/MYapp/data /rpool/junk/
3. zfs destroy rpool/junk
On Tue, January 5, 2010 10:12, casper@sun.com wrote:
How about creating a new data set, moving the directory into it, and then
destroying it?
Assuming the directory in question is /opt/MYapp/data:
1. zfs create rpool/junk
2. mv /opt/MYapp/data /rpool/junk/
3. zfs destroy rpool/junk
On Mon, Jan 4, 2010 at 8:59 PM, Richard Elling richard.ell...@gmail.com wrote:
On Jan 4, 2010, at 6:40 AM, Carl Rathman wrote:
I have a zpool raidz1 array (called storage) that I created under snv_118.
I then created a ZFS filesystem called storage/vmware which I shared
out via iSCSI.
I
On Tue, January 5, 2010 10:50, Michael Schuster wrote:
David Magda wrote:
Normally when you do a move within a 'regular' file system all that
usually happens is that the directory pointer is shuffled around. This is not
the case with ZFS data sets, even though they're on the same pool?
no - mv
Richard Elling writes:
On Jan 3, 2010, at 11:27 PM, matthew patton wrote:
I find it baffling that RaidZ(2,3) was designed to split a record-size
block into N (N=# of member devices) pieces and send the
uselessly tiny requests to spinning rust when we know the massive
delays
On Jan 5, 2010, at 2:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get
Michael Schuster michael.schus...@sun.com wrote:
rm -rf would be at least as quick.
Normally when you do a move within a 'regular' file system all that
usually happens is that the directory pointer is shuffled around. This is not the
case with ZFS data sets, even though they're on the same
On Mon, January 4, 2010 13:51, Ross wrote:
I initialized a new whole-disk pool on an external
USB drive, and then did zfs send from my big data pool and zfs recv onto the
new external pool.
Sometimes this fails, but this time it completed.
That's the key bit for me - zfs send/receive
no - mv doesn't know about zpools, only about POSIX filesystems.
mv doesn't care about filesystems, only about the interface provided by
POSIX.
There is no ZFS-specific interface which allows you to move a file from
one ZFS dataset to the next.
Casper
On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote:
I didn't mean to destroy the pool. I used zpool destroy on a zvol,
when I should have used zfs destroy.
When I used zpool destroy -f mypool/myvolume the machine hard locked
after about 20 minutes.
This would be a bug. zpool destroy should
On Tue, January 5, 2010 10:01, Richard Elling wrote:
OTOH, if you can reboot you can also run the latest
b130 livecd which has faster stat().
How much faster is it? He estimated 250 days to rm -rf them; so 10x
faster would get that down to 25 days, 100x would get it down to 2.5 days
(assuming
On Jan 5, 2010, at 4:38 PM, Bob Friesenhahn wrote:
On Mon, 4 Jan 2010, Tony Russell wrote:
I am under the impression that dedupe is still only in OpenSolaris and that
support for dedupe is limited or nonexistent. Is this true? I would like
to use ZFS and the dedupe capability to store
Brad bene...@yahoo.com writes:
Hi Adam,
I'm not Adam, but I'll take a stab at it anyway.
BTW, your crossposting is a bit confusing to follow, at least when using
gmane.org. I think it is better to stick to one mailing list anyway?
From your picture, it looks like the data is distributed
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
On Tue, January 5, 2010 10:01, Richard Elling wrote:
OTOH, if you can reboot you can also run the latest
b130 livecd which has faster stat().
How much faster is it? He estimated 250 days to rm -rf them; so 10x
faster would get that down
On Tue, Jan 5, 2010 at 10:12 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote:
I didn't mean to destroy the pool. I used zpool destroy on a zvol,
when I should have used zfs destroy.
When I used zpool destroy -f mypool/myvolume the machine
Hi Paul,
I opened 6914208 to cover the sysevent/zfsdle problem.
If the system crashed due to a power failure and the disk labels for
this pool were corrupted, then I think you will need to follow the steps
to get the disks relabeled correctly. You might review some previous
postings by Victor
On Tue, Jan 5, 2010 at 11:25 AM, Richard Elling richard.ell...@gmail.comwrote:
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
On Tue, January 5, 2010 10:01, Richard Elling wrote:
OTOH, if you can reboot you can also run the latest
b130 livecd which has faster stat().
How much
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects are stored efficiently.
Have you got any
On 05.01.2010 16:22, Mikko Lammi wrote:
However when we deleted some other files from the volume and managed to
raise free disk space from 4 GB to 10 GB, the rm -rf directory method
started to perform significantly faster. Now it's deleting around 4,000
files/minute (240,000/h - quite an
On 01/ 5/10 10:01 AM, Richard Elling wrote:
How are the files named? If you know something about the filename
pattern, then you could create subdirs and mv large numbers of files
to reduce the overall size of a single directory. Something like:
mkdir .A
mv A* .A
mkdir .B
mv B*
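A sketch of that splitting approach with a hypothetical path and letter range; with tens of millions of entries a bare glob like A* can overflow the shell's argument limit, so this feeds mv through find instead (assuming a find that supports -maxdepth):
cd /tank/bigdir || exit 1
for c in A B C D E; do                  # extend the list to match the filename pattern
    mkdir -p ".$c"
    # move matching files one at a time rather than expanding one huge glob
    find . -maxdepth 1 -type f -name "${c}*" -exec mv {} ".$c"/ \;
done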
On 01/ 5/10 05:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of them
Paul Gress wrote:
On 01/ 5/10 05:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster
michael.schus...@sun.com wrote:
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
I've been following this thread. Would it be faster to do the reverse?
Copy the 20% of the disk, then format, then
Hi Dan,
Can you describe what you are trying to recover from with more details
because we can't quite follow what steps might have led to this scenario.
For example, your hdc pool had two disks, c1t0d0s0 and c8t1d0s0,
and your rpool has c8t0d0s0, so c8t0d0s0 cannot be wiped clean.
Maybe you
Hi Dan,
Can you describe what you are trying to recover from with more details
because we can't quite follow what steps might have led to this scenario.
Sorry.
I was running Nevada 103 with a root zpool called hdc with c1t0d0s0 and
c1t1d0s0.
I first uttered: zpool detach hdc
On Tue, Jan 05, 2010 at 04:49:00PM +, Robert Milkowski wrote:
A possible *workaround* is to use SVM to set up RAID-5 and create a
ZFS pool on top of it.
How does SVM handle the RAID-5 write hole? IIRC SVM doesn't offer RAID-6.
As far as I know, it does not address it. It's possible that adding a
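For reference, a rough sketch of that SVM workaround; device names and interlace size are examples only, and SVM needs state database replicas before any metadevice can be built:
# create state database replicas on a spare slice
metadb -a -f -c 2 c2t0d0s7
# build a three-column RAID-5 metadevice with a 32 KB interlace
metainit d100 -r c2t1d0s0 c2t2d0s0 c2t3d0s0 -i 32k
# layer a ZFS pool on top of the metadevice
zpool create tank /dev/md/dsk/d100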
On 5 Jan 10 at 17:49, Robert Milkowski wrote:
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large
On Jan 4, 2010, at 7:08 PM, Brad wrote:
Hi Adam,
From your picture, it looks like the data is distributed evenly
(with the exception of parity) across each spindle then wrapping
around again (final 4K) - is this one single write operation or two?
| P | D00 | D01 | D02 | D03 | D04 |
On Jan 5, 2010, at 8:52 AM, Daniel Rock wrote:
On 05.01.2010 16:22, Mikko Lammi wrote:
However when we deleted some other files from the volume and managed to
raise free disk space from 4 GB to 10 GB, the rm -rf directory method
started to perform significantly faster. Now it's deleting
On 05/01/2010 18:37, Roch Bourbonnais wrote:
Writes are not the problem and we have a log device to offload them.
It's really about maintaining the integrity of a RAID-5-type layout in the
presence of bit-rot even if such bit-rot occurs within free space.
How is it addressed in RAID-DP?
On 05/01/2010 18:49, Richard Elling wrote:
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
The problem is that while RAID-Z is really good for some workloads it
is really bad for others.
Sometimes having L2ARC might effectively mitigate the problem but for
some workloads it won't (due to
On Tue, January 5, 2010 10:25, Richard Elling wrote:
On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
It's interesting how our ability to build larger disks, and our software's
ability to do things like create really large numbers of files, comes back
to bite us on the ass every now
On 6/01/2010 3:00 AM, Roch wrote:
Richard Elling writes:
On Jan 3, 2010, at 11:27 PM, matthew patton wrote:
I find it baffling that RaidZ(2,3) was designed to split a record-size
block into N (N=# of member devices) pieces and send the
uselessly tiny requests to
On Jan 5, 2010, at 11:30 AM, Robert Milkowski wrote:
On 05/01/2010 18:49, Richard Elling wrote:
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
The problem is that while RAID-Z is really good for some workloads
it is really bad for others.
Sometimes having L2ARC might effectively
On Jan 5, 2010, at 11:56 AM, Tristan Ball wrote:
On 6/01/2010 3:00 AM, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
On Tue, 5 Jan 2010, Richard Elling wrote:
Since there are already 1 TB SSDs on the market, the only thing keeping the
HDD market alive is the low $/TB. Moore's Law predicts that cost advantage
will pass. SSDs are already the low $/IOPS winners.
SSD vendors are still working to stabilize
On 6/01/2010 7:19 AM, Richard Elling wrote:
If you are doing small, random reads on dozens of TB of data, then you've
got a much bigger problem on your hands... kinda like counting grains of
sand on the beach during low tide :-). Hopefully, you do not have to
randomly update that data
Is there any limit on the number of snapshots in a file system?
The documentation -- manual page, admin guide, troubleshooting guide
-- does not mention any. That seems to confirm my assumption that it
is probably not a fixed limit, but there may still be a practical
one, just like there is no
On 05/01/2010 20:19, Richard Elling wrote:
On Jan 5, 2010, at 11:30 AM, Robert Milkowski wrote:
On 05/01/2010 18:49, Richard Elling wrote:
On Jan 5, 2010, at 8:49 AM, Robert Milkowski wrote:
The problem is that while RAID-Z is really good for some workloads
it is really bad for others.
On 05/01/2010 20:19, Richard Elling wrote:
[...] Fortunately, most
workloads are not of that size and scope.
Forgot to mention it in my last email - yes, I agree. The environment
I'm talking about is rather unusual and in most other cases where
RAID-5/6 was considered the performance of
dm == David Magda dma...@ee.ryerson.ca writes:
dm> 4096-to-512 blocks
AIUI NAND flash has a minimum write size (determined by ECC OOB bits)
of 2-4 kB, and a minimum erase size that's much larger. Remapping
cannot abstract away the performance implication of the minimum write
size if you
Many large-scale photo hosts start with NetApp as the default "good
enough" way to handle multiple-TB storage. With a 1-5% cache on top,
the workload is truly random-read over many TBs. But these workloads
almost assume a frontend cache to take care of hot traffic, so L2ARC
is just a nice
Eric D. Mudama wrote:
On Mon, Jan 4 at 16:43, Wes Felter wrote:
Eric D. Mudama wrote:
I am not convinced that a general purpose CPU, running other software
in parallel, will be able to be timely and responsive enough to
maximize bandwidth in an SSD controller without specialized hardware
On 05/01/2010 23:31, Michael Herf wrote:
The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
used instead of NetApp. But this certainly reduces the cost advantage
significantly.
This is true to some extent. I didn't want to bring it up as I wanted to
focus only on
On Jan 5, 2010, at 16:06, Bob Friesenhahn wrote:
Perhaps innovative designers like Suncast will figure out how to
build reliable SSDs based on parts which are more likely to wear out
and forget.
At which point we'll probably start seeing the memristor start making
an appearance in various
Juergen Nickelsen wrote:
Is there any limit on the number of snapshots in a file system?
The documentation -- manual page, admin guide, troubleshooting guide
-- does not mention any. That seems to confirm my assumption that it
is probably not a fixed limit, but there may still be a practical
One reason I was so interested in this issue was the double price of
RAID-enabled disks.
However, I realized that I am doing the initial proving, not production - even
if personal - of the system I'm building. So for that purpose, an array of
smaller and cheaper disks might be good.
In the
For those searching list archives, the SNV125-S2/40GB given below is not
based on the Intel controller.
I queried Kingston directly about this because there appears to be so
much confusion (and I'm considering using these drives!), and I got back
that:
The V series uses a JMicron Controller
The
As a further update, I went back and re-read my SSD controller info, and
then did some more Googling.
Turns out, I'm about a year behind on state-of-the-SSD. Eric is
correct on the way current SSDs implement writes (both SLC and MLC), so
I'm issuing a mea culpa here. The change in
Wes Felter wrote:
Eric D. Mudama wrote:
On Mon, Jan 4 at 16:43, Wes Felter wrote:
Eric D. Mudama wrote:
I am not convinced that a general purpose CPU, running other software
in parallel, will be able to be timely and responsive enough to
maximize bandwidth in an SSD controller without
On Wed, Jan 6 at 14:56, Tristan Ball wrote:
For those searching list archives, the SNV125-S2/40GB given below is not
based on the Intel controller.
I queried Kingston directly about this because there appears to be so
much confusion (and I'm considering using these drives!), and I got back
I think the confusing part is that the 64 GB version seems to use a different
controller altogether.
I couldn't find any SNV125-S2/40's in stock so I got 3 SNV125-S2/64's
thinking it would be the same, only bigger. Looks like it was stupid on
my part.
Now I understand why I got such a good
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a
'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct
size.
The file system became full during Christmas and I increased the quota from 1
to 1.5 to 2TB and then decreased to 1.5TB. No
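When df/zfs list and du disagree like this, snapshots holding freed blocks are the usual suspect; a few commands for narrowing it down, with tank/fs as a placeholder for the affected filesystem:
# break used space into dataset, snapshot, children and refreservation parts
zfs list -o space tank/fs
# list snapshots that may still reference the deleted data
zfs list -t snapshot -r tank/fs
# check the quota settings that were changed and where the space is charged
zfs get quota,refquota,usedbysnapshots,usedbydataset tank/fs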