> clients have at least 3 VMs
> running a variant of Windows on top of ZFS without any issues. Not sure what
> you mean with your NAT issue. Perhaps posting your setup info might be of
> more help.
>
>
>
> --
> Jason Belec
> Sent from my iPad
>
> On Apr 1, 2014, at 11:34 AM, Eric
with Windows, ensure
> networking functions. Then create a clean VM of FreeBSD and make sure networking is
> functioning however you want it to. Now set up ZFS, where you may have to
> pre-set/create devices just for the VM to utilize so that the OSes are not
> fighting for the same drive(s)/space.
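The "pre-set/create devices just for the VM" step above can be done in VirtualBox with a raw-disk VMDK, so the guest gets the whole physical disk instead of a file sitting on the host's filesystem. A minimal sketch, assuming a VM named "freebsd-zfs" and a spare disk at /dev/disk2 (both placeholders; you also need read/write permission on the raw device):

```shell
# Sketch: hand a whole physical disk to the guest via a raw-disk VMDK.
# /dev/disk2 and the .vmdk path are placeholders -- substitute your own.
VBoxManage internalcommands createrawvmdk \
    -filename "$HOME/VirtualBox VMs/freebsd-zfs/rawdisk2.vmdk" \
    -rawdisk /dev/disk2

# Attach it to the VM's SATA controller (VM and controller names assumed).
VBoxManage storageattach "freebsd-zfs" --storagectl "SATA" \
    --port 1 --device 0 --type hdd \
    --medium "$HOME/VirtualBox VMs/freebsd-zfs/rawdisk2.vmdk"
```

This keeps the host OS and the guest's zpool on physically separate devices, which is the "not fighting for the same drive(s)/space" point above.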
>
in question,
> specifically Windows, the VM and the Guest. I have never really had issues
> like this, but I've never tried it with the parts you're using in the sequence
> described. As for why it might not work... The Guest settings info might be
> relevant here as well.
>
>
>
>
> As an example, I never host the VM on the OS drive, just like I never host
> ZFS on the OS drive (FreeBSD can, of course, but I believe attention must be
> paid to setup), even if I have room for a partition (tried that in the past).
>
>
> --
> Jason Belec
> Sent from my iPad
>
>
> On Apr 1, 2014, at 4:25 PM,
> being able to import your pool or having
> someone who knows how to operate zdb do some additional TXG rollback to get
> your data back after losing some updates.
>
> I don't know if you're running ZFS in a VM or running VMs on top of ZFS,
> but either way, you probably want to Google for "data loss" "VirtualBox"
> and whatever device you're emulating and see whether there are known
> issues.
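The import/TXG-rollback recovery mentioned above looks roughly like this; `tank` is a hypothetical pool name, `-n` is a dry run, and anything beyond a plain `-F` rewind is best left to someone comfortable with zdb, as the poster says:

```shell
# Dry run: report whether discarding the last few transaction groups
# would make the pool importable, without actually doing it.
zpool import -F -n tank

# If that looks sane, import for real, discarding the most recent TXGs
# (this loses the last few seconds of writes).
zpool import -F tank

# Inspect uberblock/TXG state of the still-exported pool (read-only).
zdb -e -u tank
```

The dry run first is the important habit: `-F` is destructive to the newest transaction groups by design.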
eh, I suspected that
On Wed, Apr 2, 2014 at 2:38 PM, Daniel Becker wrote:
> The only time this should make a difference is when your host experiences
> an unclean shutdown / reset / crash.
>
> On Apr 2, 2014, at 8:49 AM, Eric wrote:
>
> I believe we are referring to the s
I have both my hands up, throwing anything and hoping for something to
stick to the wall =\
On Wed, Apr 2, 2014 at 8:37 PM, Daniel Becker wrote:
> On Apr 2, 2014, at 3:08 PM, Matt Elliott
> wrote:
>
> > Not true. ZFS flushes also mark known states. If the zfs stack issues
> a flush and the s
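In VirtualBox specifically, whether a guest flush ever reaches the host disk is configurable, and per the VirtualBox manual's section on guest flush requests, flushes may be ignored by default for emulated IDE/SATA disks; that directly undermines the "flushes mark known states" guarantee discussed above. A sketch of forcing flush passthrough via the documented `IgnoreFlush` extradata key; the VM name "freebsd-zfs" and `LUN#0` are assumptions to adapt:

```shell
# 0 = do NOT ignore guest flushes, i.e. pass them through to the host.
# "freebsd-zfs" and LUN#0 are placeholders for your VM / disk slot.
VBoxManage setextradata "freebsd-zfs" \
    "VBoxInternal/Devices/ahci/0/LUN#0/Config/IgnoreFlush" 0

# Verify the setting took effect.
VBoxManage getextradata "freebsd-zfs" \
    "VBoxInternal/Devices/ahci/0/LUN#0/Config/IgnoreFlush"
```

For a disk on the emulated IDE controller the key uses `piix3ide` instead of `ahci`.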
On Fri, Apr 11, 2014 at 4:02 PM, Chris Ridd wrote:
>
> On 11 Apr 2014, at 20:42, Eric wrote:
>
> > I don't have a proper dump, but I did get a kernel panic on my ZFS box.
> This is just informational. I'm not sure what caused it, but I'm guessing
> it's
data was
>> fine, wife would have torn me to shreds if photos were missing, music was
>> corrupt, etc., etc. And this was on the old, out-of-date but stable ZFS
>> version we Mac users have been hugging onto for dear life. YMMV
>>
>> Never had RAM as the issue, here in the mad science lab across 10
>> rotating systems or in any client location - pick your decade.
@ChrisInacio
THAT'S THE COOLEST THING I LEARNED TODAY :D
On Sat, Apr 19, 2014 at 12:15 PM, Chris Inacio wrote:
>
> This has been quite the interesting thread. Way back long ago when I was
> doing graduate work in microarchitecture (aka processor design), there were
> folks who wanted to put an
I completely agree. I'm experiencing these issues currently. Largely.
Doing a scrub is just obliterating my pool.
> scan: scrub in progress since Mon Mar 31 10:14:52 2014
> 1.83T scanned out of 2.43T at 75.2M/s, 2h17m to go
> 0 repaired, 75.55% done
> config:
>
> NAME
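As a sanity check, the "2h17m to go" estimate follows from the reported numbers, assuming zpool's T and M suffixes mean TiB and MiB:

```shell
# ETA check: (2.43T - 1.83T) left to scan at 75.2M/s.
awk 'BEGIN {
    remaining_mib = (2.43 - 1.83) * 1024 * 1024    # ~0.6 TiB in MiB
    secs = remaining_mib / 75.2
    printf "~%dh%02dm remaining\n", secs / 3600, (secs % 3600) / 60
}'
```

This prints ~2h19m, close to the pool's own 2h17m; the small gap is rounding in the displayed scanned/total figures.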
On Monday, March 31, 2014 5:53:59 PM UTC-4, Bjoern Kahl wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 31.03.14 at 23:23, Eric Jaw wrote:
> > I completely agree. I'm experiencing these issues currently.
> > Largely.
> >
> >
On Monday, March 31, 2014 5:55:21 PM UTC-4, Daniel Becker wrote:
>
> On Mar 31, 2014, at 2:23 PM, Eric Jaw wrote:
>
> Doing a scrub is just obliterating my pool.
>
>
> Is it? I don’t think so:
>
Thanks for the response! Here's some more details on the setup.
these
four drives in my pool, which have barely been touched.
>
> On Mar 31, 2014, at 5:55 PM, Daniel Becker wrote:
>
> On Mar 31, 2014, at 2:23 PM, Eric Jaw wrote:
>
> Doing a scrub is just obliterating my pool.
>
>
> Is it? I don’t think so:
r. At some point I may just get a 4TB USB3.0
> drive to copy stuff to and ship off to Glacier.
>
> Gregg
>
> On 3/31/2014 9:41 PM, Eric Jaw wrote:
>
>
>
> On Monday, March 31, 2014 5:55:21 PM UTC-4, Daniel Becker wrote:
>>
>> On Mar 31, 2014, at 2:23 PM,
On Tuesday, April 1, 2014 12:13:30 AM UTC-4, Daniel Becker wrote:
>
> On Mar 31, 2014, at 7:41 PM, Eric Jaw wrote:
>
> I started using ZFS a few weeks ago, so a lot of it is still new to
> me. I'm actually not completely certain about "proper procedur
the mad science lab across 10 rotating
> systems or in any client location - pick your decade. However I don't use
> cheap RAM either, and I only have 2 systems requiring ECC currently that
> don't even connect to ZFS as they are both XServers with other lives.
>
>
>