Hi all,
What is the status of ZFS on Linux and what kernels are supported?
Regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Chris Siebenmann wrote:
| Still, I'm curious -- why lots of pools? Administration would be
| simpler with a single pool containing many filesystems.
The short answer is that it is politically and administratively easier
to use (at least) one pool per storage-buying group in our
What is the status of ZFS on Linux and what kernels are supported?
There's sort of an experimental port to FUSE. Last I heard about it, it
isn't exactly stable and the ARC's missing too, or at least gimped.
There won't be in-kernel ZFS due to license issues (CDDL vs. GPL).
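For anyone who wants to experiment with it anyway, zfs-fuse follows the usual
userspace-filesystem model: start the daemon first, then use the familiar
zpool/zfs commands against it. A rough sketch, assuming zfs-fuse is built and
installed and that /dev/sdb is a spare disk (both assumptions):
# zfs-fuse &
# zpool create tank /dev/sdb
# zfs create tank/home
# zfs set compression=on tank/home
The command syntax is the same as on Solaris; it's the stability and the
missing/gimped ARC that differ.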
-mg
Mario Goebbels wrote:
What is the status of ZFS on Linux and what kernels are supported?
There's sort of an experimental port to FUSE. Last I heard about it, it
isn't exactly stable and the ARC's missing too, or at least gimped.
There won't be in-kernel ZFS due to license issues (CDDL
Also, if ZFS can be implemented completely outside of the Linux kernel
source tree as a plugin module, then it falls into the same category of
modules as proprietary binary device drivers.
The Linux community has a strange attitude toward proprietary drivers.
Otherwise I wouldn't have to put up
Mario Goebbels [EMAIL PROTECTED] wrote:
What is the status of ZFS on Linux and what kernels are supported?
There's sort of an experimental port to FUSE. Last I heard about it, it
isn't exactly stable and the ARC's missing too, or at least gimped.
There won't be in-kernel ZFS due to
Darren J Moffat [EMAIL PROTECTED] wrote:
Chris Siebenmann wrote:
| Still, I'm curious -- why lots of pools? Administration would be
| simpler with a single pool containing many filesystems.
The short answer is that it is politically and administratively easier
to use (at least) one pool per
| I think the root cause of the issue is that multiple groups are buying
| physical rather than virtual storage yet it is all being attached to a
| single system.
They're actually buying constant-sized chunks of virtual storage, which
is provided through a pool of SAN-based disk space. This
Sorry for the delay. Here is the output for a couple of seconds:
# iostat -xce 1
                 extended device statistics       ---- errors ---     cpu
device    r/s   w/s   kr/s   kw/s wait actv svc_t  %w  %b s/w h/w trn tot  us sy wt id
cmdk0     1.5   0.7   20.8    4.2  0.0
David Collier-Brown wrote:
Darren J Moffat [EMAIL PROTECTED] wrote:
Chris Siebenmann wrote:
| Still, I'm curious -- why lots of pools? Administration would be
| simpler with a single pool containing many filesystems.
The short answer is that it is politically and administratively
| There are two issues here. One is the number of pools, but the other
| is the small amount of RAM in the server. To be honest, most laptops
| today come with 2 GBytes, and most servers are in the 8-16 GByte range
| (hmmm... I suppose I could look up the average size we sell...)
Speaking as a
Simon Breden wrote:
Sorry for the delay. Here is the output for a couple of seconds:
This is the smoking gun...
# iostat -xce 1
                 extended device statistics       ---- errors ---     cpu
device    r/s   w/s   kr/s   kw/s wait actv svc_t  %w  %b s/w h/w trn
Chris Siebenmann wrote:
| There are two issues here. One is the number of pools, but the other
| is the small amount of RAM in the server. To be honest, most laptops
| today come with 2 GBytes, and most servers are in the 8-16 GByte range
| (hmmm... I suppose I could look up the average size
Chris Siebenmann [EMAIL PROTECTED] wrote:
| Speaking as a sysadmin (and a Sun customer), why on earth would I have
| to provision 8 GB+ of RAM on my NFS fileservers? I would much rather
| have that memory in the NFS client machines, where it can actually be
| put to work by user programs.
|
| (If
Hmm, three drives with 35 I/O requests in the queue
and none active? Remind me not to buy a drive
with that FW...
Either:
1) upgrade the FW in the drives, or
2) turn off NCQ with:
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
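(Note /etc/system is only read at boot, so a reboot is needed for 2) to take
effect. To sanity-check the value on the running kernel afterwards, something
like this should work, assuming the sata module is loaded:
# echo "sata_max_queue_depth/D" | mdb -k
which prints the tunable as a decimal.)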
Rob
Bart Smaalders wrote:
Chris Siebenmann wrote:
| There are two issues here. One is the number of pools, but the other
| is the small amount of RAM in the server. To be honest, most laptops
| today come with 2 GBytes, and most servers are in the 8-16 GByte range
| (hmmm... I suppose I
Are there any updated guides/blogs on how to configure ZFS boot on a build 88
or later system?
If I already have an existing zpool, will I be able to just add a root/boot
dataset, or does the root/boot dataset have to have its own pool?
I have several working systems that have small UFS
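For what it's worth, the ZFS-boot setups I've seen from around that build use a
dedicated root pool rather than a root dataset in an existing data pool: the
root pool has to sit on an SMI-labelled slice and can only be a single disk or
a mirror (no raidz). A rough sketch, where the slice and BE names are just
placeholders:
# zpool create rpool c0t0d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6
i.e. create the root pool, then let Live Upgrade copy the current UFS boot
environment into it and activate it.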
Thanks a lot Richard. To give a bit more info, I've copied my /var/adm/messages
from booting up the machine:
And @picker: I guess the 35 requests are stacked up waiting for the hanging
request to be serviced?
The question I have is where do I go from now, to get some more info on what is
Hi Simon,
Simon Breden wrote:
Thanks a lot Richard. To give a bit more info, I've copied my
/var/adm/messages from booting up the machine:
And @picker: I guess the 35 requests are stacked up waiting for the hanging
request to be serviced?
The question I have is where do I go from now, to
This list seems out of sync (delayed) with the email messages I receive.
Why is that?
What are the best tools to use when reading/replying to these posts?
Anyway, from my email I can see that Max has sent me a question about truss --
here is my reply:
Hi Max,
I haven't used truss before, but
Hi Simon,
Simon Breden wrote:
Hi Max,
I haven't used truss before, but give me the command line + switches
and I'll be happy to run it.
Simon
# truss -p pid_from_cp
where pid_from_cp is... the pid of the cp process that is hung. You can
get the pid from ps.
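A shortcut for the same thing, assuming only one cp process is running:
# truss -p $(pgrep cp)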
I am curious if the cp is
Doug wrote:
When we installed the Marvell driver patch 125205-07 on our X4500 a few
months ago and it started crashing, Sun support just told us to back out that
patch. The system has been stable since then.
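(For reference, backing a patch out like that is a one-liner, assuming it was
installed with patchadd:)
# patchrm 125205-07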
We are still running Solaris 10 11/06 on that system. Is there an advantage
to
This mailing list seems broken and out of sync -- your post shows as 'Guest' and
appears as a new post in the main zfs-discuss list -- and the main thread is
out of sync with the replies, and I just got a Java exception trying to post to
the main thread -- what's going on here?
This message
Hi Max,
I re-ran the cp command and when it hung I ran 'ps -el', looked up the cp
command, got its PID, and then ran:
# truss -p PID_of_cp
and it output nothing at all -- i.e. it hung too -- just showing a flashing
cursor.
The system is still operational as I am typing into the browser.
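When truss itself hangs like this, the process is usually stuck in the kernel.
One way to see where, as a sketch -- substitute the real pid for PID (the 0t
prefix marks it as decimal):
# echo "0tPID::pid2proc | ::walk thread | ::findstack -v" | mdb -k
This walks the process's threads and dumps their kernel stacks.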
Today my production server crashed 4 times. THIS IS A NIGHTMARE!
Self-healing file system?! For me ZFS is a SELF-KILLING filesystem.
I cannot fsck it, there's no such tool.
I cannot scrub it, it crashes 30-40 minutes after scrub starts.
I cannot use it, it crashes a number of times every day! And
On Thu, 1 May 2008, Rustam wrote:
Today my production server crashed 4 times. THIS IS A NIGHTMARE!
Self-healing file system?! For me ZFS is a SELF-KILLING filesystem.
I cannot fsck it, there's no such tool.
I cannot scrub it, it crashes 30-40 minutes after scrub starts.
I cannot use it, it
Hi Simon,
Simon Breden wrote:
Hi Max,
I re-ran the cp command and when it hung I ran 'ps -el', looked up the cp
command, got its PID, and then ran:
# truss -p PID_of_cp
and it output nothing at all -- i.e. it hung too -- just showing a flashing
cursor.
The system is still
Keep getting Java exceptions posting to the proper thread for this -- just lost
an hour -- WTF???
Had to reply to my own post as Max's reply (which I saw in my email inbox) has
not appeared here. Again, what is wrong with this forum software -- it seems so
buggy, or am I missing something
Just to reduce my stress levels and to give the webmaster some useful info to
help fix this broken forum:
I tried posting a reply to the main thread for 'cp -r hanged copying a
directory' and got the following error -- seems like it can't find the parent
thread/message's id in the database at
Rustam wrote:
Today my production server crashed 4 times. THIS IS A NIGHTMARE!
Self-healing file system?! For me ZFS is a SELF-KILLING filesystem.
I cannot fsck it, there's no such tool. I cannot scrub it, it crashes
30-40 minutes after scrub starts. I cannot use it, it crashes a
number of
Hi Simon,
Simon Breden wrote:
Thanks for your advice Max, and here is my reply to your suggestion:
# mdb -k
Loading modules: [ unix genunix specfs dtrace cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs ip hook neti sctp arp usba
s1394 nca lofs zfs random md sppp smbsrv nfs
Simon Breden wrote:
Thanks a lot Richard. To give a bit more info, I've copied my
/var/adm/messages from
booting up the machine:
And @picker: I guess the 35 requests are stacked up waiting for the hanging
request to be serviced?
The question I have is where do I go from now, to get
[forget the BUI forum, e-mail works better, IMHO]
Simon Breden wrote:
Thanks a lot Richard. To give a bit more info, I've copied my
/var/adm/messages from booting up the machine:
I don't see any major issues related to this problem in the messages.
And @picker: I guess the 35 requests
Is your ZFS pool configured with redundancy (e.g. mirrors, raidz) or is
it non-redundant? If non-redundant, then there is not much that ZFS
can really do if a device begins to fail.
It's RAID 10 (more info here:
http://www.opensolaris.org/jive/thread.jspa?threadID=57425):
NAME STATE READ
On Thu, 1 May 2008, Rustam wrote:
operating system: 5.10 Generic_127112-07 (i86pc)
Seems kind of old. I am using Generic_127112-11 here.
Probably many hundreds of nasty bugs have been eliminated since the
version you are using.
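(You can check the running kernel patch level with:
# uname -v
which prints the version string, e.g. Generic_127112-11.)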
Bob
==
Bob Friesenhahn