Re: [OpenIndiana-discuss] ZFS; what the manuals don't say ...
2012-10-24 23:58, Timothy Coalson wrote:
> I doubt I would like the outcome of having some software make arbitrary
> decisions about which real filesystem to put each file on, and then having
> one filesystem fail. So if you really expect this, you may be happier
> keeping the two pools separate and deciding where to put stuff yourself
> (since if you are expecting a set of disks to fail, I expect you would have
> some idea as to which ones it would be, for instance an external enclosure).

This sounds similar to (and doable with) hierarchical storage management, such as Sun's SAMFS/QFS solution. Essentially, this is a (virtual) filesystem where you set up storage rules based on last access times and frequencies, data types, etc., and where you have many tiers of storage (ranging from fast, small and expensive to slow, bulky and cheap), such as SSD arrays - 15K SAS arrays - 7.2K SATA - tape. New incoming data ends up on the fast tier; old stale data lives on tape; data used sometimes migrates between tiers. The rules you define for the HSM system regulate how many copies you store on which tier, so the loss of some devices should not be fatal - as well as cleaning up space on the faster tier to receive new data or to cache old data requested by users and fetched from slower tiers.

I did propose adding some HSM-type capabilities to ZFS, mostly with the goal of power-saving on home-NAS machines, so that the box could live with a couple of active disks (i.e. rpool and the active-data part of the data pool) while most of the data pool's disks remain spun down. Whenever a user reads some data from the pool (watching a movie, listening to music or processing photos), the system would prefetch the data (perhaps a folder of MP3s) onto the cache disks and let the big ones spin down - with a home NAS and few users, it is likely that if you're watching a movie, your system is otherwise unused for a couple of hours.
Likewise - and this happens to be the trickier part - new writes to the data pool should go to the active disks and occasionally sync to and spread over the main pool disks. I hoped this could all be done transparently to users within ZFS, but overall the discussions led to the conclusion that this is better done not within ZFS, but with some daemons (perhaps a dtrace-abusing script) doing the data migration and abstraction (the transparency to users). Besides, with the introduction and advances of the generic L2ARC, and with the possibility of file-level prefetch, much of that discussion became moot ;)

Hope this small historical insight helps you :)

//Jim Klimov

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
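As a toy illustration of the HSM-style policy sketched above, a migration daemon's tier-selection rule might look like this. The tier names and age thresholds below are invented for this example; they are not part of ZFS or SAMFS/QFS.

```python
import time

# Hypothetical HSM tier-selection rule: map a file's last-access age
# onto a storage tier. Tiers and thresholds are illustrative only.
TIERS = [
    ("ssd",  7 * 86400),      # touched within a week -> fast tier
    ("sata", 90 * 86400),     # within ~3 months -> bulk spinning tier
    ("tape", float("inf")),   # anything older -> archive tier
]

def pick_tier(last_access, now=None):
    """Return the tier name a file should live on, by access age."""
    now = time.time() if now is None else now
    age = now - last_access
    for tier, limit in TIERS:
        if age <= limit:
            return tier
    return TIERS[-1][0]

now = 1_000_000_000
print(pick_tier(now - 3600, now))           # ssd
print(pick_tier(now - 30 * 86400, now))     # sata
print(pick_tier(now - 400 * 86400, now))    # tape
```

A real daemon would also have to watch reads (to prefetch whole folders onto the cache disks) and flush new writes back to the bulk tier - which is exactly the tricky part discussed above.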
Re: [OpenIndiana-discuss] ZFS; what the manuals don't say ...
2012-10-24 15:17, Robin Axelsson wrote:
> On 2012-10-23 20:06, Jim Klimov wrote:
> ...
> But if I do send/receive to the same pool I will need to have enough free
> space in it to fit at least two copies of the dataset I want to reallocate.

Likewise with reallocation of files - though the unit of required space would be smaller.

> It seems like what zfs is missing here is a good defrag tool.

This was discussed several times, with the outcome being that with ZFS's data allocation policies there is no single good defrag policy. The two most popular options are storing the current live copy of a file contiguously (as opposed to its history of released blocks referenced only in snapshots), versus storing pool blocks in ascending creation-TXG order (to arguably speed up scrubs and resilvers, which can consume a noticeable portion of performance doing random IO). Most users think the first goal is the good one - however, if you add clones and dedup into the equation, it might never be possible to retain their benefits AND store all files contiguously.

Also, as with other matters of moving blocks around in the allocation areas transparently to the other layers of the system (that is, on a live system that actively does I/O while you defrag data), there are some other problems that I'm not very qualified to speculate about, which are deemed solvable by the generic BPR (block-pointer rewrite). Still, I do think that many of the problems postponed until BPR arrives can be solved with different methods and limitations (such as off-line mangling of data on the pool) which might still be acceptable for some use-cases.

All in all, the main intended usage of ZFS is on relatively powerful enterprise-class machines, where much of the needed data is cached on SSD or in huge RAM, so random HDD IO lags become less relevant. This situation is most noticeable with deduplication, which in the ZFS implementation requires vast resources to basically function.
With market prices going down over time, it is more likely that tomorrow's home-NAS boxes will be spec'ed like today's enterprise servers than that the core software will be fundamentally revised and rearchitected for the boxes of yesterday. After all, even in the open-source world developers need to eat and feed their families, so commercial applicability does matter and does influence the engineering designs and trade-offs.

> It would be interesting to know how you convert a raidz2 stripe to, say, a
> raidz3 stripe. Let's say that I'm on a raidz2 pool and want to add an extra
> parity drive by converting it to a raidz3 pool. I'm imagining that would be
> like creating a raidz1 pool on top of the leaf vdevs that constitute the
> raidz2 pool plus the new leaf vdev, which results in an additional parity
> drive. It doesn't sound too difficult to do that. Actually, this way you
> could even get raidz4 or raidz5 pools. The question is, though, how things
> would pan out performance-wise; I would imagine that a 55-drive raidz25 pool
> is really taxing on the CPU. Going from raidz3 to raidz2, or from raidz2 to
> raidz1, sounds like a no-brainer; you just remove one drive from the pool
> and force zpool to accept the new state as normal. But expanding a raidz
> pool with additional storage while preserving the parity structure sounds a
> little bit trickier. I don't think I have the knowledge to write a bpr
> rewriter, although I'm reading Solaris Internals right now ;)

Read also the ZFS On-Disk Specification (the one I saw is somewhat outdated, being from 2006, but most concepts and data structures are the foundation - expected to remain in place and be expanded upon). In short, if I got it all right, the leaf components of a top-level VDEV are striped upon creation and declared as an allocation area with its own ID and monotonically increasing sector offsets (further subdivided into a couple hundred metaslabs to reduce seeking).
For example, on a 5-disk array the offsets of the pooled sectors might look like this:

  disk1  disk2  disk3  disk4  disk5
    0      1      2      3      4
    5      6      7      8      9
   ...

(For the purposes of offset numbering, the sector size is 512b - even on 4KB-sectored disks; I am not sure how that is processed in the address math - likely the ashift value helps pick the specific disk's number.)

Then, when a piece of data is saved by the OS (kernel metadata or userland userdata), it is logically combined into a block, processed for storage (compressed, etc.), and depending on the redundancy level, some sectors with parity data are prepended to each set of data sectors. For a raidz2 of 5 disks you'd have 2 parity sectors and up to 3 data sectors per row; in the example below, three separate blocks use parity sectors (P, p, b) and data sectors (D, d, k):

  P0  P1  D0  D1  D2
  P2  P3  D3  D4  p0
  p1  d0  b0  b1  k0
  k1  k2  ...

ZFS allocates only as many sectors as are needed to store the redundancy and data for the block, so the data (and holes after removal of data) are not very predictably intermixed - as would be the case with traditional full-stripe RAID5/6. Still, this does allow recovery from the loss of N
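The interleave above can be mimicked with a small model. This is an assumption-laden simplification - real raidz also rounds allocations and handles skip sectors, which this ignores - but it reproduces the parity/data pattern shown in the example:

```python
# Toy model of raidz sector layout (not actual ZFS code): each row of
# the top-level vdev gets `parity` parity sectors followed by data
# sectors, and the block uses only as many sectors as it needs.
def raidz_layout(n_disks, parity, data_sectors):
    rows, d, p = [], 0, 0
    while d < data_sectors:
        row = ["P%d" % (p + i) for i in range(parity)]
        p += parity
        take = min(n_disks - parity, data_sectors - d)
        row += ["D%d" % (d + i) for i in range(take)]
        d += take
        rows.append(row)
    return rows

# A raidz2 block with 5 data sectors on 5 disks, matching the example:
for row in raidz_layout(5, 2, 5):
    print(" ".join(row))
# P0 P1 D0 D1 D2
# P2 P3 D3 D4
```

Note how the second row stops after D4: the next block's parity (p0 in the example above) can start immediately on the next disk.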
Re: [OpenIndiana-discuss] About virt-manager
23.10.12 22:17, ?? ? ? wrote:
> Today I updated OI 151a4 to OI 151a7 on an HP ML110 G6. KVM works fine.
> But I want to use virt-manager to manage VMs. So let me tell you the
> status of virt-manager and libvirt.
> Best Regards, ryo

To exchange experience: I can tell you how to open the DVD disk on OI through an FTP service, in a home environment with a laptop running Windows.

--
Regards,
Arhipkin Ilya (Miass OpenSolaris Team Leader)
http://www.post.arhipkin.com
[OpenIndiana-discuss] Newbie server testing...
Hi, I am a brand new SunOS/OpenIndiana newbie coming from Linux... I am trying to build a test storage server for some ZFS fun. First, I would like to know if what I did looks OK, and then I ran into a little issue.

The server has 6 internal 3TB SATA drives (c5txdy), plus 2 small eSATA external drives (c3txdy). I installed the server/text version on the 1st external drive. Then I more or less did the following:

  pkg update
  reboot
  zpool upgrade rpool
  format -e c3t1d0   (fdisk, label)
  prtvtoc /dev/rdsk/c3t0d0p0 | fmthard -s - /dev/rdsk/c3t1d0p0
  zpool attach -f rpool c3t0d0s0 c3t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
  zpool create VOLUME raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

So I have a RAID1 on the 2 external drives, and a RAID6 on the 6 internal ones. Anything wrong or inefficient? I read about putting ZFS logs outside the RAID6, but I'm not sure how... I think the format part is a bit unclear to me, but it worked...

Then, for testing, I needed to put some data from an ext3 USB drive onto the raidz2, so I first had to install ext2 support:

  pfexec pkgadd -d SFEe2fsprogs.pkg SFElibiconv SFEe2fsprogs
  pfexec pkgadd -d SUNWext2fs.pkg SUNWext2fs

It worked, but the file transfer (rsync -avH) was VERY slow (only 179G in 15 hours)... From the logs:

Oct 24 12:05:49 clust-2 usba: [ID 912658 kern.info] USB 2.0 device (usb4fc,c15) operating at hi speed (USB 2.x) on USB 2.0 external hub: storage@1, scsa2usb0 at bus address 5
Oct 24 12:05:49 clust-2 usba: [ID 349649 kern.info] Sunplus Technology Inc. USB to Serial-ATA bridge ST2000DL00S2H7J90C602099
Oct 24 12:05:49 clust-2 genunix: [ID 936769 kern.info] scsa2usb0 is /pci@0,0/pci15d9,62f@1d/hub@1/storage@1
Oct 24 12:05:49 clust-2 genunix: [ID 408114 kern.info] /pci@0,0/pci15d9,62f@1d/hub@1/storage@1 (scsa2usb0) online
Oct 24 12:05:49 clust-2 scsi: [ID 583861 kern.info] sd0 at scsa2usb0: target 0 lun 0
Oct 24 12:05:49 clust-2 genunix: [ID 936769 kern.info] sd0 is /pci@0,0/pci15d9,62f@1d/hub@1/storage@1/disk@0,0
Oct 24 12:05:49 clust-2 genunix: [ID 408114 kern.info] /pci@0,0/pci15d9,62f@1d/hub@1/storage@1/disk@0,0 (sd0) online
Oct 24 12:09:58 clust-2 ufs: [ID 717476 kern.notice] NOTICE: mount: not a UFS magic number (0x0)
Oct 24 12:22:07 clust-2 ext2fs: [ID 800345 kern.notice] NOTICE: info: mount_count=0
Oct 24 12:22:07 clust-2 ext2fs: [ID 854744 kern.notice] NOTICE: Setting ops, name=ext2fs
Oct 24 12:22:07 clust-2 ext2fs: [ID 302322 kern.notice] NOTICE: ext2init end
Oct 24 12:22:07 clust-2 ext2fs: [ID 850434 kern.notice] NOTICE: DATAs_033: 13472722/244203520 files, 487818616/488378000 blocks
Oct 24 12:55:50 clust-2 ext2fs: [ID 118274 kern.notice] NOTICE: ext2_iupdat 2
Oct 24 12:57:00 clust-2 last message repeated 190 times
Oct 24 12:57:24 clust-2 ext2fs: [ID 118274 kern.notice] NOTICE: ext2_iupdat 2
Oct 24 13:03:30 clust-2 last message repeated 940 times
Oct 24 13:04:15 clust-2 ext2fs: [ID 118274 kern.notice] NOTICE: ext2_iupdat 2
Oct 24 13:10:23 clust-2 last message repeated 955 times
Oct 24 13:10:24 clust-2 ext2fs: [ID 118274 kern.notice] NOTICE: ext2_iupdat 2
Oct 24 13:16:25 clust-2 last message repeated 592 times
...
Oct 25 03:03:41 clust-2 ext2fs: [ID 118274 kern.notice] NOTICE: ext2_iupdat 2
Oct 25 03:10:01 clust-2 last message repeated 94 times

And it stops there at 3am, because this morning I had been disconnected, and when I went to the server there was some kind of kernel panic (I think related to ext2fs); it said something about dumping ZFS stuff and rebooting... but it got stuck there and never rebooted. I cannot find the panic message... is it written somewhere or only on the screen? Is the ext2fs module stable?

Thx, JD
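On "putting ZFS logs out of the RAID6": a separate intent-log (SLOG) device can be added to an existing pool with zpool add. A sketch, with placeholder device names - this server has no obvious spare disk for it:

```
# Add a dedicated log device to the raidz2 pool (device names are
# placeholders, not taken from this server):
zpool add VOLUME log c9t0d0

# Safer with a mirrored pair, since losing an unmirrored SLOG on
# older pool versions can be painful:
#   zpool add VOLUME log mirror c9t0d0 c9t1d0

zpool status VOLUME    # verify the "logs" section appears
```

Note that a SLOG only accelerates synchronous writes (NFS, databases, etc.); it would not speed up an rsync from a USB ext3 disk.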
[OpenIndiana-discuss] Power management in OI
Hello all, I'm trying to set up power management on my oi_151a5 laptop.

For one, I found a useful comment in this blog post: http://prefetch.net/blog/index.php/2009/07/12/using-the-cpu-power-management-features-in-solaris/ ...that cpupm (in /etc/power.conf) must include the poll-mode keyword:

  cpu-threshold 1s
  cpupm enable poll-mode

At least with this option set, I see reduced core speeds with "kstat -m cpu_info -i 0 -s current_clock_Hz" and powertop much more often than with just "cpupm enable". Bumping the threshold to 3s also helped the CPU stay in reduced states much longer, because running GNOME seems to bite off about 3% user time and about 3% kernel time on this laptop, which causes the CPU to wake up needlessly often (IMHO).

I am not sure HDD power-saving is viable, given that there's one disk in the system and ZFS has something to flush every cycle, but there were no reportable problems setting it up per http://constantin.glez.de/blog/2010/03/opensolaris-home-server-scripting-2-setting-power-management (however, I did not hear a spindown-spinup when I set a 1s timeout).

Now, I'm having a problem with suspend: when I trigger the action, the system falls out of X11 into text mode (empty screen with the cursor bar) and freezes. The box must be hard-rebooted afterwards. Logs include:

Sep 2 12:14:41 nbofh genunix: [ID 535284 kern.notice] System is being suspended
Sep 2 12:14:42 nbofh genunix: [ID 122848 kern.warning] WARNING: Unable to suspend device display@1.
Sep 2 12:14:43 nbofh genunix: [ID 537702 kern.warning] WARNING: Device is busy or does not support suspend/resume.
Sep 2 12:14:53 nbofh srn: [ID 980641 kern.warning] WARNING: srn_notify: clone 2 did not ack event a03
Sep 2 12:14:53 nbofh genunix: [ID 121466 kern.warning] WARNING: audiohd#0: Unable to restore value of control record-source
Sep 2 12:14:54 nbofh genunix: [ID 583038 kern.notice] System has been resumed.
Sep 2 12:22:52 nbofh genunix: [ID 576964 kern.notice] ^MOpenIndiana Build oi_151a5 64-bit (illumos 13740:836bfdf31fc4)
Sep 2 12:22:52 nbofh genunix: [ID 107366 kern.notice] SunOS Release 5.11 - Copyright 1983-2010 Oracle and/or its affiliates.
...

So, apparently, some device - the display (generic VESA VGA on the Radeon built into the AMD E2 CPU) or one of the audio devices (HDMI on the Radeon, or the separate Realtek chip) - fails to suspend/resume and may cause the freeze.

On side notes: DPMS works OK to turn off the lights on the LCD screen. Suspend-resume worked OK in Win7, though I couldn't get that to hibernate for no apparent reason.

Thanks for ideas,
//Jim Klimov
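Put together, the /etc/power.conf settings discussed above would look roughly like this. This is a sketch: the device-thresholds path is a placeholder for the laptop's single disk, and the syntax should be checked against power.conf(4) before use:

```
# CPU power management in polling mode, with a relaxed threshold
cpupm enable poll-mode
cpu-threshold 3s

# Device power management (disk spindown) per the linked blog post;
# the disk path below is a placeholder
autopm enable
device-thresholds /dev/dsk/c0t0d0 5m
```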
Re: [OpenIndiana-discuss] Newbie server testing...
Hi,

On čt, 2012-10-25 at 05:05 -0700, John Doe wrote:
> [...]
> Then, for testing, I needed to put some data from an ext3 usb drive on the
> raidz2... So I had to first install ext2 support:
>   pfexec pkgadd -d SFEe2fsprogs.pkg SFElibiconv SFEe2fsprogs
>   pfexec pkgadd -d SUNWext2fs.pkg SUNWext2fs

From where did you download these packages?

> It worked, but the file transfer (rsync -avH) was VERY slow (only 179G in
> 15 hours)...

It could be. Either the USB support or ext2(3)fs is slow in your case.

> [... kernel log lines quoted above snipped ...]
> And that stops there at 3am because this morning I had been disconnected
> and when I went to the server, there was some kind of kernel panic (I think
> related to ext2fs) and it said something about dumping zfs stuff and
> rebooting... but it got stuck there and never rebooted. I cannot find the
> panic message... is it written somewhere or only on the screen?

As root you should run savecore; if a crash dump was created and is still on the dump device, it will be extracted to /var/crash/.

> Is the ext2fs module stable?

I was using it several times a year ago and it was good enough for me, but mostly for ext2 data transfer. Please contact me off-list and we can look at it.

Best regards,
Milan
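To expand on the savecore hint: the usual sequence on an illumos/OI box, assuming a dump device was configured with dumpadm, is roughly as follows (commands sketched from memory; verify against the man pages):

```
dumpadm                      # check where dumps go and savecore's directory
savecore                     # extract the crash dump after the panic
ls /var/crash/`hostname`     # unix.N / vmcore.N pairs appear here
mdb unix.0 vmcore.0          # then ::status and ::msgbuf show the
                             # panic string and console messages
```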
Re: [OpenIndiana-discuss] Power management in OI
Hi Jim,

On čt, 2012-10-25 at 17:06 +0400, Jim Klimov wrote:
> Hello all, I'm trying to set up power management on my oi_151a5 laptop.
> [...] At least, with this option set, I see reduced core speeds with kstat
> and powertop much more often than with just cpupm enable. Bumping the
> threshold to 3s also helped the CPU stay in reduced states much longer [...]

Yes, poll-mode can be more effective on some systems. But what you see with kstat and powertop is not always what you have in real usage: powertop and kstat are somewhat heavy tools which themselves have an impact on reality (the system does not decrease the CPU speed while you are monitoring it).

> [...] So, apparently, some device - the display (generic VESA VGA on the
> Radeon built into the AMD E2 CPU) or one of the audios (hdmi on the radeon
> or separate realtek chip) fail to suspend/resume and may cause the freeze.

No surprise; the key problem is the lack of support for the Radeon. The system cannot sleep without an implementation for it, and until this is fixed it makes no sense to investigate further. You need to look at the system console, usually the serial port, to see what happened after resume. Also, you can try to suspend and resume with the X server disabled.

Best regards,
Milan
Re: [OpenIndiana-discuss] About virt-manager
Please tell us how to do that! It's useful ;-)

Kind regards,
The out-side

Op 25 okt. 2012 om 11:29 heeft Ilya Arhipkin i...@arhipkin.com het volgende geschreven:
> To exchange experience: I can tell you how to open the DVD disk on OI
> through an FTP service, in a home environment with a laptop running Windows.
> [... rest of quoted message snipped ...]
[OpenIndiana-discuss] Security problem
There's a security problem in OpenIndiana OI 151aX when using mount on smbfs, see: https://www.illumos.org/issues/3305

--
Dr. Udo Grabowski
Inst. f. Meteorology a. Climate Research IMK-ASF-SAT
www-imk.fzk.de/asf/sat/grabowski/  www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technology  http://www.kit.edu
Postfach 3640, 76021 Karlsruhe, Germany
T: (+49)721 608-26026  F: -926026
Re: [OpenIndiana-discuss] OI on Dell R810
A little off-topic, but what type of RAID controller and NICs do they have? I still have problems installing OI on an R320, so I'm curious.

Kind regards,
The out-side

Op 24 okt. 2012 om 16:26 heeft Rich rerc...@acm.jhu.edu het volgende geschreven:
> I assure you, the R810s I have running OI do not, and have never in the
> past, had this problem. Perhaps it's a misbehaving interaction with the
> watchdog timer? Do you have the OS watchdog turned on in the BIOS?
> - Rich
>
> On Wed, Oct 24, 2012 at 5:06 AM, Ram Chander ramqu...@gmail.com wrote:
>> Hi, I have installed OI on Dell R810 hardware and it frequently gets
>> auto-rebooted, at least twice a day. sar shows cpu/disk/mem are all
>> normal. This box acts as an NFS server and doesn't do any heavy duty
>> work. Any idea what the issue might be? Also, I got to know that OI
>> isn't tested on this hardware. No useful logs either when the reboot
>> happens. What might be the issue: NIC/motherboard/CPU?
>> http://wiki.openindiana.org/oi/Servers
>>
>> root@hosti:/fkdigital# last -10 reboot
>> reboot  system boot  Wed Oct 24 10:14
>> reboot  system down  Wed Oct 24 10:06
>> reboot  system boot  Tue Oct 23 16:26
>> reboot  system down  Tue Oct 23 16:15
>> reboot  system boot  Tue Oct 23 12:55
>> reboot  system down  Tue Oct 23 12:31
>> reboot  system boot  Tue Oct 23 11:43
>> reboot  system down  Tue Oct 23 11:30
>> reboot  system boot  Mon Oct 22 00:27
>> reboot  system down  Mon Oct 22 00:21
Re: [OpenIndiana-discuss] OI on Dell R810
Onboard we have Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet. The RAID card is a PERC H200.

- Rich

On Thu, Oct 25, 2012 at 1:32 PM, Roel_D openindi...@out-side.nl wrote:
> A little off-topic, but what type of RAID controller and NICs do they
> have? I still have problems installing OI on an R320, so I'm curious.
> [... earlier quoted messages snipped ...]
[OpenIndiana-discuss] working FTP service through which you can enter the directory DVD drive
25.10.12 23:27, Roel_D wrote:
> Please tell us how to do that! It's useful ;-)

Hi Roel_D! ;-)

There are two ways to open the disk: one from the Internet, the other from the LAN. To open the DVD drive (with any formatting that is supported), you need a working FTP service through which you can enter the DVD drive's directory. Go, via a browser or a command line, to ftp://login:password @ name.domain - the login is your OpenIndiana account in the system. After you authenticate, you land in your home directory /export/home/user. To reach the directory where the disc's information appears, navigate up from the home directory /export/home/user to the root directory, and from the root directory go into the /media directory, where the drive is mounted.

--
Regards,
Arhipkin Ilya (Miass OpenSolaris Team Leader)
http://www.post.arhipkin.com