On Dec 5, 2006, at 12:13 PM, Rutger Bevaart wrote:

There were two 4GB targets created, both online and in use. They were connected to using static configuration over a local network interface (nge1) and mounted as a local filesystem.

I created one additional 120GB target, which never came online (at least not within a few hours). The crash happened at roughly the time I attempted to delete it, though I cannot be sure that was what triggered it...


Given the stack trace, it was the delete that caused the core. Some quick background on how the target creates logical units: the administrator requests that the daemon create a LU of some size, in your case 120GB. The daemon does a quick check to see if there's room on the device for a LU of that size. If so, it returns success and starts a separate thread to initialize the LU. That thread has to write every block in the backing file to guarantee the space is actually available. If it skipped this step the file would be sparse, i.e. full of holes; that approach is called thin provisioning, and it has its own pros and cons.
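Purely to illustrate the mechanism, here's a minimal sketch of what such an initialization pass might look like. The function name, block size, and error handling are made up for this example; this is not the actual daemon code:

    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Hypothetical sketch: force the filesystem to allocate every
     * block of the LU backing file by writing zeros through it.
     * Skipping this pass would leave a sparse (thin-provisioned)
     * file, and a later write could fail once the filesystem runs
     * out of real blocks.
     */
    static int
    preallocate_lu(int fd, off_t size)
    {
        static char buf[128 * 1024];    /* zero-filled chunk */
        off_t       left = size;

        while (left > 0) {
            size_t  chunk = (left < (off_t)sizeof (buf)) ?
                (size_t)left : sizeof (buf);
            ssize_t n = write(fd, buf, chunk);

            /* Each successful write forces real block allocation. */
            if (n <= 0)
                return (-1);            /* e.g. out of space */
            left -= n;
        }
        return (0);
    }

At 120GB that's a lot of sequential writing, which is why the create returned right away while the LU sat offline for hours.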

Now, as you've seen, initializing the LU can take quite a long time. When you deleted the LU, the daemon needed to shut down that thread, and it does so by sending the thread a message to exit. The thread sends back an ack message, so that the main thread knows the request got through, and then exits. The main thread then does some cleanup, and that cleanup is where there's a problem which has escaped our test suites.
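The handshake is roughly the pattern sketched below. The type and function names are invented for illustration, and the real daemon uses its own message-queue machinery rather than these exact primitives:

    #include <pthread.h>

    /*
     * Hypothetical sketch of the shutdown handshake between the main
     * thread and the LU initialization thread. Assume the mutex and
     * condition variable have been initialized with
     * pthread_mutex_init()/pthread_cond_init().
     */
    typedef struct lu_ctl {
        pthread_mutex_t lock;
        pthread_cond_t  cv;
        int             shutdown;   /* main thread -> init thread */
        int             acked;      /* init thread -> main thread */
    } lu_ctl_t;

    /* Init thread: polled between block writes. */
    static int
    should_exit(lu_ctl_t *c)
    {
        int stop;

        (void) pthread_mutex_lock(&c->lock);
        if ((stop = c->shutdown) != 0) {
            /* Ack so the main thread knows the message got through. */
            c->acked = 1;
            (void) pthread_cond_broadcast(&c->cv);
        }
        (void) pthread_mutex_unlock(&c->lock);
        return (stop);      /* nonzero: caller exits its write loop */
    }

    /* Main thread: ask the init thread to exit, wait for the ack. */
    static void
    stop_init_thread(lu_ctl_t *c, pthread_t tid)
    {
        (void) pthread_mutex_lock(&c->lock);
        c->shutdown = 1;
        while (!c->acked)
            (void) pthread_cond_wait(&c->cv, &c->lock);
        (void) pthread_mutex_unlock(&c->lock);

        (void) pthread_join(tid, NULL);
        /*
         * The cleanup that runs after the join is where the crash
         * you hit appears to happen.
         */
    }

The ordering matters here: the main thread shouldn't tear down state the init thread might still be touching, and that kind of window is easy for a test suite to miss.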

@Rick,
Given some time I can do some ZFS work, but if I understand correctly I cannot use just a partition for ZFS, right? I might be able to break the mirror and use the second disk for ZFS, if that helps debugging...


I use ZFS on several of my machines that have only a single disk. I use the partition that the Solaris install program normally sets up as /export/home. I remove /export/home from /etc/vfstab, unmount it, and then run:
    zpool create -f playground c0t0d0s7

There's no redundancy, but for my testing that's fine.

Since I now know where in the code the problem is happening and what you were doing when it occurred, I should be able to reproduce the problem.

My suggestion to use ZFS was more from the standpoint that UFS will not give you the performance you might be seeking. I look at UFS/SVM the same way as my old 1970 Dodge D100 pickup truck: it was a good old truck that I loved, but it couldn't haul too much, nor could it get anywhere real fast. ZFS is like my new 2006 Dodge 2500 with the Cummins Diesel engine. That truck pulls my 8000 pound trailer like it's not even there.

If you have a reason to run UFS/SVM then do so.


----
Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate attempt on your part to deprive me of happiness, the pursuit of which is my unalienable right according to the Declaration of Independence. I therefore assert my patriotic prerogative not to know this material. I'll be out on the playground." -- Calvin

