That's precisely my scenario...
I have DRBD + OCFS and a gigabit Ethernet cable linking the two servers.
The server with DRBD + OCFS runs NFS on Ubuntu 14.
2015-10-29 9:43 GMT-02:00 Gerald Brandt :
> I use NFSv4, and I've never had any problems. You have to hand modify the
>
Disk is virtio, alright!
Because I need live migration and HA.
With IDE/SATA there is no way to do that, AFAIK!
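For what it's worth, the migration itself is just the qm CLI; a sketch,
where the VMID and the target node name are assumptions:

    # online-migrate VM 100 to node2 while it keeps running
    qm migrate 100 node2 --online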
2015-10-29 11:07 GMT-02:00 Luis G. Coralle :
> Is the disk virtio too?
>
> 2015-10-29 10:03 GMT-03:00 Gilberto Nunes :
>
>> I already
Hi, is the network virtio?
2015-10-25 20:33 GMT-03:00 Gilberto Nunes :
> Well friends...
>
> I really try hard to work with PVE, but it is a pain in the ass...
> Nothing seems to work...
> I deployed Ubuntu with NFS storage connected through a direct cable (1 Gb)
> and beside
I use NFSv4, and I've never had any problems. You have to hand modify
the storage config file. I've used Ubuntu NFS servers with DRBD, and
more recently, Synology boxes (again with DRBD). I also bond my network
links for 2 Gbps of bandwidth.
I'm not sure why Proxmox doesn't support NFSv4 out of the box.
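The edit is just an extra options line on the NFS entry in
/etc/pve/storage.cfg; a sketch, where the storage name, server and
export path are placeholders:

    nfs: mynfs
            export /export/vms
            path /mnt/pve/mynfs
            server 192.168.0.10
            options vers=4
            content images,iso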
I didn't use OCFS, just XFS or EXT4. No need for OCFS unless you're
running dual primary.
Maybe OCFS is doing something weird for you?
Gerald
On 2015-10-29 06:50 AM, Gilberto Nunes wrote:
That's precisely my scenario...
I have DRBD + OCFS and a gigabit Ethernet cable linking the two servers.
The server
>It works best with three nodes; other quorum configurations can get tricky on restarts.
Well... At the moment I have just one node... But I plan to add more
bricks...
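When the extra bricks arrive, a three-way replicated volume would be
created along these lines; a sketch, hostnames and brick paths are
assumptions:

    gluster volume create vmvol replica 3 \
        node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1
    gluster volume start vmvol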
2015-10-29 10:44 GMT-02:00 Lindsay Mathieson :
>
> On 29 October 2015 at 22:18, Gilberto Nunes
What cache settings do you have for the disks?
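The cache mode is the per-disk cache= option in the VM config; a sketch,
where the storage and disk names are placeholders:

    # e.g. in /etc/pve/qemu-server/100.conf
    virtio0: local:100/vm-100-disk-1.qcow2,cache=writethrough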
On October 29, 2015 2:09:33 PM CET, Gilberto Nunes
wrote:
>Disk is virtio, alright!
>Because I need live migration and HA.
>With IDE/SATA there is no way to do that, AFAIK!
>
>2015-10-29 11:07 GMT-02:00 Luis G. Coralle
I already tested with and without virtio, with E1000, with Realtek.
Same results...
2015-10-29 10:55 GMT-02:00 Luis G. Coralle :
> Hi, is the network virtio?
>
> 2015-10-25 20:33 GMT-03:00 Gilberto Nunes :
>
>> Well friends...
>>
>> I really try
I'm giving up DRBD + OCFS + NFS and going to GlusterFS
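The PVE side would be a storage entry along these lines; a sketch, the
server address and volume name are assumptions:

    glusterfs: gluster-vms
            server 10.0.0.1
            volume vmvol
            content images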
2015-10-29 10:01 GMT-02:00 Gerald Brandt :
> I didn't use OCFS, just XFS or EXT4. No need for OCFS unless you're
> running dual primary.
>
> Maybe OCFS is doing something weird for you?
>
> Gerald
>
>
>
> On 2015-10-29 06:50
It could be, but it could also be that with IDE or SATA the performance is
not the same as with virtio, or am I wrong?
2015-10-29 12:14 GMT-02:00 Lindsay Mathieson :
>
> On 29 October 2015 at 23:09, Gilberto Nunes
> wrote:
>
>>
Even with local storage? I thought that was one of the limitations of a node
with that kind of setup.
On Oct 29, 2015 10:14 AM, Lindsay Mathieson wrote:
>
>
> On 29 October 2015 at 23:09, Gilberto Nunes
> wrote:
>>
>> Disk is virtio,
At least as of ~v3.4, you can live migrate IDE but not SATA. I've run
into this a few times myself when I configured SATA CD-ROMs for some reason.
-Adam
On 15-10-29 09:14 AM, Lindsay Mathieson wrote:
On 29 October 2015 at 23:09, Gilberto Nunes
Oh God!
I re-ran the last test with dd and the VM just died!!! So sad :(
2015-10-29 14:32 GMT-02:00 Gilberto Nunes :
> Inside the guest, with GlusterFS I get this:
>
> dd if=/dev/zero of=writetest bs=8k count=131072
>
> 131072+0 records in
> 131072+0
Inside the guest, with GlusterFS I get this:
dd if=/dev/zero of=writetest bs=8k count=131072
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 0.858862 s, 1.3 GB/s
2015-10-29 13:57 GMT-02:00 Gilberto Nunes :
> This one
I was working on implementing OVS within my 4-node cluster. Things did
not go as planned (do they ever?) for reasons other than the issue below.
Anyway, my situation is that I'm going backwards and have 3 of the 4
nodes configured with a Linux bridge configuration:
vmbr0 and vmbr1
Node 3
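For reference, the Linux bridge side is just the stock
/etc/network/interfaces stanzas; a sketch, where the interface names and
the address are assumptions:

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.13
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

    auto vmbr1
    iface vmbr1 inet manual
            bridge_ports eth1
            bridge_stp off
            bridge_fd 0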
On Thu, 29 Oct 2015 15:51:29 -0400
David Lawley wrote:
>
> This behaviour is happening on all the nodes (1, 2 and 4) that have Linux
> bridges...
>
> I know there is a lot of reading between the lines here. I have kind of
> glossed over the issue.
>
> Just wondering if
OK, works for me. So with this logic I should be able to shut down said
machine, remove the NICs, and migrate to the other host, then restart and
replace the NICs/drivers.
I think I have tried this, but I'm just sounding off right now. A bit puzzled
why vmbr0 worked on both... it just affected vmbr1.
On 10/29/2015
On 30 October 2015 at 02:32, Gilberto Nunes
wrote:
> dd if=/dev/zero of=writetest bs=8k count=131072
>
> 131072+0 records in
> 131072+0 records out
> 1073741824 bytes (1.1 GB) copied, 0.858862 s, 1.3 GB/s
Wow! I thought that was MB/s at first, but that is GB/s, which can only be cache.
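To see the real write path, the same test has to bypass or flush the page
cache; a sketch using standard GNU dd flags:

    # bypass the page cache entirely with O_DIRECT
    dd if=/dev/zero of=writetest bs=8k count=131072 oflag=direct
    # or keep the cache but include the final fsync in the timing
    dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync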
Hi guys
Last night, I got IO errors again!
Perhaps I must give up NFS and consider GlusterFS or iSCSI???
Thanks for any help
Best regards
2015-10-27 13:25 GMT-02:00 Gilberto Nunes :
> What do you say if I drop soft and leave only proto=udp???
>
> 2015-10-27 12:36
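For VM images the usual advice runs the other way: hard,tcp rather than
soft,udp, because a soft mount turns NFS timeouts into exactly these guest
I/O errors. A sketch of a mount line, where the server and paths are
assumptions:

    10.0.0.1:/export/vms  /mnt/pve/mynfs  nfs  proto=tcp,hard,intr  0  0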
Hi
On 10/26/2015 02:25 PM, Dmitry Petuhov wrote:
> There's an issue with NFS: if you try to send more over it than the network
> can deal with (100-120 MB/s for 1-gigabit), it imposes several-second
> pauses, which are interpreted as hardware errors. These bursts may be just
> a few seconds
>
Hi
>Is that something that you can easily reproduce? (dd with conv=fsync
>or another tool)
Should I run such a command on the NFS server, on the NFS share mounted on
the Proxmox host, or inside the guest?
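For example, against the share mounted on the Proxmox host; a sketch, the
mount point is an assumption:

    cd /mnt/pve/mynfs
    dd if=/dev/zero of=writetest bs=1M count=1024 conv=fsync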
>I have heard about transient problems from NFS servers at some
>hosters, when running vzdump on an NFS target,
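One mitigation in that direction is capping backup bandwidth below the
link rate in /etc/vzdump.conf; the value here is an assumption:

    # limit vzdump I/O to ~80 MB/s so a 1-gigabit NFS link is not saturated
    bwlimit: 80000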