Here's the topology of the Host and Guest system layout

[SSD][SSD]
==> [RAID0]
====> [Host]
======> [HDD0] --> \\.\PhysicalDrive0 --> raw vmdk --> PhysicalDrive0.vmdk
======> [HDD1] --> \\.\PhysicalDrive1 --> raw vmdk --> PhysicalDrive1.vmdk
======> [HDD2] --> \\.\PhysicalDrive2 --> raw vmdk --> PhysicalDrive2.vmdk
======> [HDD3] --> \\.\PhysicalDrive3 --> raw vmdk --> PhysicalDrive3.vmdk
======> [HDD4] --> \\.\PhysicalDrive4 --> raw vmdk --> PhysicalDrive4.vmdk
======> [HDD5] --> \\.\PhysicalDrive5 --> raw vmdk --> PhysicalDrive5.vmdk
========> [Guest]
==========> PhysicalDrive0.vmdk
==========> PhysicalDrive1.vmdk
==========> PhysicalDrive2.vmdk
==========> PhysicalDrive3.vmdk
==========> PhysicalDrive4.vmdk
==========> PhysicalDrive5.vmdk

HDD0 and HDD1 are unmounted, NTFS-partitioned drives, because they hold a
mirror copy of my data.
There are two SSDs, SSD0 and SSD1 (not listed), that are created the same
way as the HDDs, and attached as ZIL (log) and L2ARC (cache) devices.
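For reference, raw vmdks like the ones above are typically created with
VirtualBox's internal command, run from an elevated prompt on the Windows
host. A minimal sketch (the C:\VMs path is illustrative, not my actual
layout):

```shell
REM Must be run as Administrator so VirtualBox can open the
REM \\.\PhysicalDriveN device for raw access. Repeat per drive.
VBoxManage internalcommands createrawvmdk ^
    -filename C:\VMs\PhysicalDrive0.vmdk ^
    -rawdisk \\.\PhysicalDrive0
```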


On Tue, Apr 1, 2014 at 5:50 PM, Jason Belec <jasonbe...@belecmartin.com> wrote:

> Going through this bit by bit, but there are some things that I take issue
> with, though I may be interpreting them incorrectly.
>
> You created several vmdk's on your C: drive (9), you're running Windows on
> this drive, as well as VirtualBox, which has an OS making use of the
> vmdk's; is this correct? If yes, we may have stumbled across your issue:
> that's a lot of I/O for the underlying drive, some of it fighting with the
> other contenders. You list 6 physical drives; is there a reason they are
> not utilized? Perhaps just moving the vmdk's to another drive might at
> least help with the stress.
>
> As an example, I never host the VM on the OS drive, just like I never host
> ZFS on the OS drive (FreeBSD can, of course, but I believe attention must
> be paid to setup), even if I have room for a partition (tried that in the
> past).
>
>
>
> --
> Jason Belec
> Sent from my iPad
>
> On Apr 1, 2014, at 4:25 PM, Eric <naisa...@gmail.com> wrote:
>
> Attached is my vbox Guest settings, and added it to the forums post as
> well (https://forums.virtualbox.org/viewtopic.php?f=6&t=60975)
>
> The NAT issue is small. I switched my SSH server back to Bridged mode and
> everything worked again. Something about NAT mode was breaking the
> connection and not letting SSH work normally.
>
>
>
>
> On Tue, Apr 1, 2014 at 4:13 PM, Jason Belec <jasonbe...@belecmartin.com> wrote:
>
>> I looked through your thread, but I almost always tell people: "STOP
>> using Windows unless it's in a VM". ;)
>>
>> Not enough info in your thread to actually help you with the VM. What are
>> the Guest settings? What drives are actually assigned to what? Scripts
>> are only useful after you set up something functional.
>>
>> As for the NAT issue thread, I don't think it's an issue so much as a
>> misconception about how it works in relation to the parts in question,
>> specifically Windows, the VM and the Guest. I have never really had
>> issues like this, but I've never tried with the parts you're using in the
>> sequence described. As for why it might not work... the Guest settings
>> info might be relevant here as well.
>>
>>
>>
>> --
>> Jason Belec
>> Sent from my iPad
>>
>> On Apr 1, 2014, at 3:46 PM, Eric <naisa...@gmail.com> wrote:
>>
>> haha train away!
>>
>> This is what I'm trying to do for my own needs. Issues or no issues, I
>> haven't seen it done before. So, I'm reaching out to anyone. Mac or not,
>> I'm just asking from one IT professional to another, is this possible, and
>> if not, why not? (that's just how I feel)
>>
>> I'm assuming the complications you mean are the ways FreeBSD behaves when
>> running specifically in VBox under Windows, because that's what I'm trying
>> to figure out.
>>
>> Details are in the forum post, but yes, it's a clean setup with a
>> dedicated vdi for the os. Networking shouldn't be related, but it's working
>> as well.
>>
>>
>> On Tue, Apr 1, 2014 at 3:17 PM, Jason Belec
>> <jasonbe...@belecmartin.com> wrote:
>>
>>> OK. So you're running Windows, asking questions on the MacZFS list.
>>> That's going to cause problems right out of the gate. And you're asking
>>> about FreeBSD running under VirtualBox for issues with ZFS.
>>>
>>> I know it's not nice, but I'm laughing myself purple. This is going to
>>> make it into my training sessions.
>>>
>>> The only advice I can give you at this point is that you have made a
>>> very complicated situation for yourself. Back up and start with Windows;
>>> ensure networking is functioning. Then set up a clean VM of FreeBSD and
>>> make sure networking is functioning however you want it to. Now set up
>>> ZFS, where you may have to pre-create devices just for the VM to utilize
>>> so that the OSes are not fighting for the same drive(s)/space.
>>>
>>>
>>> Jason
>>> Sent from my iPhone 5S
>>>
>>> On Apr 1, 2014, at 12:03 PM, Eric <naisa...@gmail.com> wrote:
>>>
>>> I have the details on the setup posted to virtualbox's forums, here:
>>> https://forums.virtualbox.org/viewtopic.php?f=6&t=60975
>>>
>>> Essentially, I'm running ZFS on FreeBSD 10 in VBox running in Windows 7,
>>> rather than the other way around. I think I mentioned that earlier.
>>>
>>>
>>> I just created a short post about the NAT Network issue, here:
>>> https://forums.virtualbox.org/viewtopic.php?f=6&t=60992
>>>
>>>
>>> On Tue, Apr 1, 2014 at 11:58 AM, Jason Belec <jasonbe...@belecmartin.com
>>> > wrote:
>>>
>>>> I run over 30 instances of VirtualBox with various OSes without issue,
>>>> all running on top of ZFS environments. Most of my clients have at
>>>> least 3 VMs running a variant of Windows on top of ZFS without any
>>>> issues. Not sure what you mean with your NAT issue. Perhaps posting
>>>> your setup info might be of more help.
>>>>
>>>>
>>>>
>>>> --
>>>> Jason Belec
>>>> Sent from my iPad
>>>>
>>>> On Apr 1, 2014, at 11:34 AM, Eric Jaw <naisa...@gmail.com> wrote:
>>>>
>>>>
>>>>
>>>> On Tuesday, April 1, 2014 7:04:39 AM UTC-4, jasonbelec wrote:
>>>>>
>>>>> ZFS is lots of parts, in most cases lots of cheap unreliable parts,
>>>>> refurbished parts, yadda yadda. As posted on this thread and many,
>>>>> many others, any issues are probably not ZFS but the parts of the
>>>>> whole. Yes, it could be ZFS, after you confirm that all the parts are
>>>>> pristine, maybe.
>>>>>
>>>>
>>>> I don't think it's ZFS. ZFS is pretty solid. In my specific case, I'm
>>>> trying to figure out why VirtualBox is creating these issues. I'm pretty
>>>> sure that's the root cause, but I don't know why yet. So I'm just
>>>> speculating at this point. Of course, I want to get my ZFS up and
>>>> running so I can move on to what I really need to do, so it's easy to
>>>> jump to a conclusion about something that I haven't thought of in my
>>>> position. Hope you can understand.
>>>>
>>>>
>>>>>
>>>>> My oldest system running ZFS is a Mac Mini Intel Core Duo with 3GB
>>>>> RAM (not ECC); it is the home server for music, TV shows, movies, and
>>>>> some interim backups. The mini has been modded for eSATA and has 6
>>>>> drives connected. The pool is 2 RAIDZs of 3, mirrored, with copies set
>>>>> at 2. It has been running since ZFS was released from the Apple
>>>>> builds. I lost 3 drives, eventually traced to a new cable that cracked
>>>>> at the connector, which, when hot enough, expanded, lifting 2 pins
>>>>> free of their connector counterparts and resulting in errors. Visually
>>>>> almost impossible to see. I replaced port multipliers, eSATA cards,
>>>>> RAM, minis, the power supply, reinstalled the OS, reinstalled ZFS,
>>>>> restored ZFS data from backup, and finally found the bad connector end
>>>>> only because it was hot and felt 'funny'.
>>>>>
>>>>> Frustrating, yes, but educational also. The happy news is, all the
>>>>> data was fine; my wife would have torn me to shreds if photos were
>>>>> missing, music was corrupt, etc. And this was on the old, out-of-date
>>>>> but stable ZFS version we Mac users have been hugging onto for dear
>>>>> life. YMMV.
>>>>>
>>>>> I've never had RAM as the issue, here in the mad science lab across
>>>>> 10 rotating systems or in any client location - pick your decade.
>>>>> However, I don't use cheap RAM either, and I only have 2 systems
>>>>> requiring ECC currently, which don't even connect to ZFS as they are
>>>>> both Xserves with other lives.
>>>>>
>>>>>
>>>>> --
>>>>> Jason Belec
>>>>> Sent from my iPad
>>>>>
>>>>> On Apr 1, 2014, at 12:13 AM, Daniel Becker <razz...@gmail.com> wrote:
>>>>>
>>>>> On Mar 31, 2014, at 7:41 PM, Eric Jaw <nais...@gmail.com> wrote:
>>>>>
>>>>> I started using ZFS about a few weeks ago, so a lot of it is still
>>>>> new to me. I'm actually not completely certain about "proper
>>>>> procedure" for repairing a pool. I'm not sure if I'm supposed to clear
>>>>> the errors before the scrub or after (little things). I'm not sure if
>>>>> it even matters. When I restarted the VM, the checksum counts cleared
>>>>> on their own.
>>>>>
>>>>>
>>>>> The counts are not maintained across reboots.
>>>>>
>>>>>
>>>>> On the first scrub it repaired roughly 1.65MB. None on the second
>>>>> scrub. Even after the scrub there were still 43 data errors. I was
>>>>> expecting they were going to go away.
>>>>>
>>>>>
>>>>> errors: 43 data errors, use '-v' for a list
>>>>>
>>>>>
>>>>> What this means is that in these 43 cases, the system was not able to
>>>>> correct the error (i.e., both drives in a mirror returned bad data).
>>>>>
>>>>>
>>>>> This is an excellent question. They're in 'Normal' mode. I remember
>>>>> looking in to this before and decided normal mode should be fine. I might
>>>>> be wrong. So thanks for bringing this up. I'll have to check it out again.
>>>>>
>>>>>
>>>>> The reason I was asking is that these symptoms would also be
>>>>> consistent with something outside the VM writing to the disks behind the
>>>>> VM’s back; that’s unlikely to happen accidentally with disk images, but 
>>>>> raw
>>>>> disks are visible to the host OS as such, so it may be as simple as 
>>>>> Windows
>>>>> deciding that it should initialize the “unformatted” (really, formatted
>>>>> with an unknown filesystem) devices. Or it could be a raid controller that
>>>>> stores its array metadata in the last sector of the array’s disks.
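One precaution worth noting for raw-disk vmdks on a Windows host: taking
the passed-through disks offline keeps Windows from mounting or "helpfully"
initializing them behind the guest's back. A sketch as a diskpart script
(the disk numbers and filename are illustrative; run as Administrator with
`diskpart /s offline-disks.txt`):

```shell
REM offline-disks.txt -- take each passed-through disk offline
REM so the host OS leaves it alone. Repeat for every raw disk.
select disk 1
offline disk
select disk 2
offline disk
```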
>>>>>
>>>>>
>>>>> memtest86 and memtest86+ for 18 hours came out okay. I'm on my third
>>>>> scrub and the number of errors has remained at 43. Checksum errors
>>>>> continue to pile up as the pool is getting scrubbed.
>>>>>
>>>>> I'm just as flustered about this. Thanks again for the input.
>>>>>
>>>>>
>>>>> Given that you’re seeing a fairly large number of errors in your
>>>>> scrubs, the fact that memtest86 doesn’t find anything at all very strongly
>>>>> suggests that this is not actually a memory issue.
>>>>>
>>>>>  --
>>>>
>>>> ---
>>>> You received this message because you are subscribed to the Google
>>>> Groups "zfs-macos" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to zfs-macos+unsubscr...@googlegroups.com.
>>>>
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>>
>>>
>>>
>>
>>
>
>
> <ZFS.vbox>
>
>

