Claus Guttesen writes:
I have many small files - mostly jpg - where the original file is
approx. 1 MB and the generated thumbnail is approx. 4 KB. The files
are currently on vxfs. I have copied all files from one partition onto
a ZFS equivalent. The vxfs partition occupies 401 GB and
So the 1 MB files are stored as ~8 x 128K records.
Because of
5003563 use smaller tail block for last block of object
The last block of your file is partially used. It will depend
on your file-size distribution, but without that info we can
only guess that we're wasting an avg of
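To make that concrete, here is a minimal sketch (the dataset name tank/thumbs is hypothetical) of checking and lowering the recordsize for a dataset holding the ~4 KB thumbnails; note that only files written after the change use the new value:
# zfs get recordsize tank/thumbs
NAME         PROPERTY    VALUE  SOURCE
tank/thumbs  recordsize  128K   default
# zfs set recordsize=4K tank/thumbs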
Claus Guttesen writes:
So the 1 MB files are stored as ~8 x 128K records.
Because of
5003563 use smaller tail block for last block of object
The last block of your file is partially used. It will depend
on your file-size distribution, but without that info we can
I'm CCing zfs-discuss@opensolaris.org, as this doesn't look like a
FreeBSD-specific problem.
It looks like there is a problem with block allocation(?) when we are
near the quota limit. The tank/foo dataset has its quota set to 10m:
Without quota:
FreeBSD:
# dd if=/dev/zero of=/tank/test bs=512
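For reference, a minimal reproduction along the lines described above (the dataset name tank/foo is taken from the report) might look like:
# zfs create tank/foo
# zfs set quota=10m tank/foo
# dd if=/dev/zero of=/tank/foo/test bs=512
with the expectation that writes misbehave as the dataset fills toward its 10m quota.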
Ok, I found the problem with 0x06: one disk was missing. But now I have
all my disks and I get 0x05:
Sep 21 10:25:53 unknown panic[cpu0]/thread=ff0001e12c80:
Sep 21 10:25:53 unknown genunix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
http://people.freebsd.org/~pjd/misc/zfs/zfs-man.swf
BTW. Inspired by ZFS
On 9/20/07 7:31 PM, Paul B. Henson [EMAIL PROTECTED] wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
It's an IBM re-branded NetApp which we are using for NFS and
iSCSI.
Yeah, it's fun to see IBM compete with its OEM provider NetApp.
Ah, I see.
Is it comparable storage
Gino wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (i.e. the
complexity of clustering). Simply sending incremental
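(The incremental send that trails off above would look roughly like this sketch; the dataset and snapshot names are hypothetical:)
m1# zfs snapshot tank/zones@today
m1# zfs send -i @yesterday tank/zones@today | ssh m2 zfs recv tank/zones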
On 9/20/07, Paul B. Henson [EMAIL PROTECTED] wrote:
Again though, that would imply two different storage locations visible to
the clients? I'd really rather avoid that. For example, with our current
Samba implementation, a user can just connect to
'\\files.csupomona.edu\username' to access
Mike, Grant,
I reported the zoneadm.1m man page problem to the man page group.
I also added some stronger wording to the ZFS Admin Guide and the
ZFS FAQ about not using ZFS for zone root paths for the Solaris 10
release and that upgrading or patching is not supported for either
Solaris 10 or
On Sep 21, 2007, at 11:47 AM, Pawel Jakub Dawidek wrote:
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (i.e. the
complexity of clustering). Simply
On Sep 21, 2007, at 14:57, eric kustarz wrote:
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
Jonathan Edwards wrote:
On Sep 21, 2007, at 14:57, eric kustarz wrote:
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (i.e. the
complexity
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy
eric kustarz wrote:
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
m2# zpool create test mirror iscsi_lun1 iscsi_lun2
m2# zpool export test
m1# zpool import -f test
m1# reboot
m2# reboot
Since I haven't actually looked into what problem caused your pools to
become damaged/lost, I can
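One detail worth noting in the command sequence above: after a clean export, the -f on the import should be unnecessary, and forcing an import while another host may still claim the pool is a classic way to get two machines writing to the same devices. A sketch of the cleaner handoff:
m2# zpool export test
m1# zpool import test
m1# zpool status test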
grant beattie wrote:
I don't have any advice, unfortunately, but I do know that in my case
putting zones on UFS is simply not an option. There must be a way,
considering there is nothing in the documentation to suggest that zones
on ZFS are not supported.
There's a very explicit Do not
On 9/20/07, Matthew Flanagan [EMAIL PROTECTED] wrote:
Mike,
I followed your procedure for cloning zones and it worked
well up until yesterday, when I tried applying the S10U4
kernel patch 120011-14 and it wouldn't apply because I had
my zones on zfs :(
Thanks for sharing. That sucks.
I'm
On 9/21/07, Christine Tran [EMAIL PROTECTED] wrote:
patch and install tools can't figure out pools yet. If you have a 1GB
pool and 10 filesystems on it, df reports each as having 1GB; do you have
10GB capacity? The tools can't tell. Please check the archives; this
subject has been extensively
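(A quick illustration of the ambiguity, with a hypothetical pool name: every filesystem in a pool reports the same pool-wide free space, so a tool that sums per-filesystem capacity overcounts:)
# zfs list -o name,used,avail -r tank
Each child of tank shows the same AVAIL figure, because free space is shared by the whole pool.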
On Thu, 20 Sep 2007, eric kustarz wrote:
As far as quotas, I was less than impressed with their implementation.
Would you mind going into more details here?
The feature set was fairly extensive: they supported volume quotas for
users or groups, or qtree quotas, which are similar to the ZFS quota
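(For contrast, ZFS quotas apply per dataset, so the usual approximation of a per-user quota is one filesystem per user; the names below are hypothetical:)
# zfs create tank/home/jdoe
# zfs set quota=10G tank/home/jdoe
# zfs get quota tank/home/jdoe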
On Fri, 21 Sep 2007, Andy Lubel wrote:
Yeah, it's fun to see IBM compete with its OEM provider NetApp.
Yes, we had both IBM and Netapp out as well. I'm not sure what the point
was... We do have some IBM SAN equipment on site, I suppose if we had gone
with the IBM variant we could have
On Fri, 21 Sep 2007, James F. Hranicky wrote:
It just seems rather involved, and relatively inefficient, to be
continuously mounting/unmounting stuff all the time. One of the
applications to be deployed against the filesystem will be web service;
I can't really envision a web server with
On Fri, 21 Sep 2007, Mike Gerdts wrote:
MS-DFS could be helpful here. You could have a virtual samba instance
that generates MS-DFS redirects to the appropriate spot. At one point in
That's true, although I rather detest Microsoft DFS (they stole the acronym
from DCE/DFS, even though
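For anyone curious, the Samba side of such a redirect is roughly the following sketch (share, path, and server names are hypothetical; see the msdfs options in smb.conf(5)):
[global]
    host msdfs = yes
[dfsroot]
    msdfs root = yes
    path = /export/dfsroot
with per-user redirects created as symlinks inside the share:
# ln -s 'msdfs:fileserver1\jdoe' /export/dfsroot/jdoe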
msl wrote:
Hello all,
Is there a way to configure the zpool as legacy_mount and have all
filesystems in that pool mounted automatically?
I will try to explain better:
- Imagine that I have a zfs pool with 1000 filesystems.
- I want to control the mount/unmount of that pool, so I did
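(One way to get that kind of pool-wide control without marking every dataset legacy is to leave the mountpoints inherited and batch-mount instead; a sketch:)
# zfs mount -a
# zfs unmount -a
This mounts, respectively unmounts, every non-legacy ZFS filesystem in one step.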
On Fri, 21 Sep 2007, Tim Spriggs wrote:
Still, we are using ZFS but we are re-thinking on how to deploy/manage
it. Our original model had us exporting/importing pools in order to move
zone data between machines. We had done the same with UFS on iSCSI
[...]
When we don't move pools around, zfs
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
I was planning to provide CIFS services via Samba. I noticed a posting a
while back from a Sun engineer working on integrating NFSv4/ZFS ACL
support
into Samba, but I'm not sure if that was ever completed and shipped