Re: [PVE-User] I try hard but...

2015-10-30 Thread Gilberto Nunes
Luck!!!

Not so far!

Last night I left the imapsync command running. The command ran OK and
finished at 22:22 (I only saw the log this morning!).
I went to sleep, and when I woke up this morning the VM was simply stopped!
Uptime from Proxmox shows me 18:28 hours.
My storage shows me an uptime of 3 days.
Looking for a clue, I found in the RRD graph that there is a gap of 2
hours, between 4 o'clock AM and 6 o'clock AM, exactly the period when the VM
stopped and didn't come back online.

6 o'clock was the hour when I brought the VM back online.

There is no HA enabled for this VM. There is just one VM. No cluster.
I had installed HA and a cluster on this server previously, but I tore that
down!

At least there are no IO errors this time. That issue seems to have been
solved.
But, really! I'm tired!!!

I do not know what to do now!

What should I be looking for???

I dropped the NFS server and installed GlusterFS.
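
For reference, pointing Proxmox at a GlusterFS volume is just a short entry in
/etc/pve/storage.cfg, roughly like the sketch below (the server address and
volume name are placeholders, not the ones actually used here):

glusterfs: gluster-vms
        server 192.168.100.2
        volume gv0
        content images

After that the VM disk can be placed on the "gluster-vms" storage as usual.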

My servers are not just any regular machines! They are Dell R430s... All those
machines are very good machines...

For God's sake!!!



2015-10-29 20:41 GMT-02:00 Lindsay Mathieson :

>
> On 30 October 2015 at 04:17, Gilberto Nunes 
> wrote:
>
>> Well friends
>> I am happy again... I don't know if this "happiness" will last forever..
>> Only time will say...
>> But now, with GlusterFS the VM hold still.
>> Previously, with NFS, the iowait always high, near 15-20 %.
>> Now, with Glusterfs, I am right now transferring more than 40GB from one
>> server to VM and the iowait inside the VM hold 4,50 %...
>>
>
> Excellent news.
>
>
>> We will see what happens next
>>
>
> Luck :)
>
>
>
>
> --
> Lindsay
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
That's precisely my scenario...
I have DRBD + OCFS and a Gigabit Ethernet cable linking the two servers.
The server with DRBD + OCFS serves NFS, on Ubuntu 14.
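
For context, the NFS side of such a setup is a single line in /etc/exports on
the storage server, something like the sketch below, with a placeholder subnet
for the direct-cable link:

/data  192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash)

applied with "exportfs -ra".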


2015-10-29 9:43 GMT-02:00 Gerald Brandt :

> I use NFSv4, and I've never had any problems.  You have to hand modify the
> storage config file.  I've used Ubuntu NFS servers with DRBD, and more
> recently, Synology boxes (again with DRBD).  I also bond my network links
> for 2GB bandwidth.
>
> I'm not sure why Proxmox doesn't support NFSv4 out of the box.
>
> Gerald
>
>
>
> On 2015-10-29 04:46 AM, Gilberto Nunes wrote:
>
> Hi guys
>
> Last night, i get IO error again!
> Perhaps i must give up NFS and considrer glusterFS or iSCSI???
>
> Thanks for any help
>
> Best regards
>
> 2015-10-27 13:25 GMT-02:00 Gilberto Nunes :
>
>> What you say if drop soft and leave only proto=udp???
>>
>> 2015-10-27 12:36 GMT-02:00 Michael Rasmussen < 
>> m...@miras.org>:
>>
>>> Remember softlocks is dangerous and can cause data loss.
>>>
>>>
>>> On October 27, 2015 2:58:25 PM CET, Gilberto Nunes <
>>> gilberto.nune...@gmail.com> wrote:

 Now the VM seems to doing well...

 I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with
 soft and proto=udp, and right now stress the VM with a lot of impasync
 jobs, in order to migrate huge mailbox to oldserver to the VM...
 I will performe others tests yet, but I thing there peace here again...
 Thanks for help and sorry to blame proxmox, guys!
 My apologies!


 2015-10-26 17:55 GMT-02:00 Hector Suarez Planas <
 hector.sua...@codesa.co.cu>:

> ...
>
> Answer my own question: yes! There is a bug related NFS on kernel 3.13
>> that is the default on Ubuntu 14.04...
>> I will try with kernel 3.19...
>> Sorry for sent this to the list...
>>
>
> Rectify is wise. Do not blame Proxmox for every bad thing that happens
> to you with it. You must have patience with things that come from the 
> world
> of Open Source.
>
> :-)
>
> --
> =
> Lic. Hector Suarez Planas
> Administrador Nodo CODESA
> Santiago de Cuba
> -
> Blog: http://nihilanthlnxc.cubava.cu/
> ICQ ID: 681729738
> Conferendo ID: hspcuba
> =
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



>>> --
>>> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>>>  This mail was virus scanned and spam checked before delivery. This
>>> mail is also DKIM signed. See header dkim-signature.
>>>
>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530 <%2B55%20%2847%29%209676-7530>
>> Skype: gilberto.nunes36
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>
>
> ___
> pve-user mailing 
> listpve-user@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
Disk is virtio, alright!
Because I need live migration and HA.
With IDE/SATA there is no way to do that, AFAIK!

2015-10-29 11:07 GMT-02:00 Luis G. Coralle :

> Disk virtio too?
>
> 2015-10-29 10:03 GMT-03:00 Gilberto Nunes :
>
>> I alredy test with/without virtio. With E1000, with realtek.
>> Same results...
>>
>> 2015-10-29 10:55 GMT-02:00 Luis G. Coralle 
>> :
>>
>>> Hi, network is virtio?
>>>
>>> 2015-10-25 20:33 GMT-03:00 Gilberto Nunes :
>>>
 Well friends...

 I really try hard to work with PVE, but is a pain in the ass...
 Nothing seems to work..
 I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb
 ) and beside follow all docs available in the wiki and internet, one single
 VM continue to crash over and over again...

 So I realise that is time to say good bye to Proxmox...

 Live long and prosper...




 --

 Gilberto Ferreira
 +55 (47) 9676-7530
 Skype: gilberto.nunes36


 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


>>>
>>>
>>> --
>>> Luis G. Coralle
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530
>> Skype: gilberto.nunes36
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>
> --
> Luis G. Coralle
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Luis G. Coralle
Hi, network is virtio?

2015-10-25 20:33 GMT-03:00 Gilberto Nunes :

> Well friends...
>
> I really try hard to work with PVE, but is a pain in the ass...
> Nothing seems to work..
> I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb )
> and beside follow all docs available in the wiki and internet, one single
> VM continue to crash over and over again...
>
> So I realise that is time to say good bye to Proxmox...
>
> Live long and prosper...
>
>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 
Luis G. Coralle
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gerald Brandt
I use NFSv4, and I've never had any problems.  You have to hand modify 
the storage config file.  I've used Ubuntu NFS servers with DRBD, and 
more recently, Synology boxes (again with DRBD).  I also bond my network 
links for 2GB bandwidth.
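
(The hand edit mentioned here is an options line on the nfs entry in
/etc/pve/storage.cfg; a rough sketch with placeholder names:

nfs: nas-nfs
        path /mnt/pve/nas-nfs
        server 192.168.100.10
        export /volume1/vms
        options vers=4
        content images,backup

The "options" value is passed straight to the NFS mount.)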


I'm not sure why Proxmox doesn't support NFSv4 out of the box.

Gerald


On 2015-10-29 04:46 AM, Gilberto Nunes wrote:

Hi guys

Last night, i get IO error again!
Perhaps i must give up NFS and considrer glusterFS or iSCSI???

Thanks for any help

Best regards

2015-10-27 13:25 GMT-02:00 Gilberto Nunes >:


What you say if drop soft and leave only proto=udp???

2015-10-27 12:36 GMT-02:00 Michael Rasmussen >:

Remember softlocks is dangerous and can cause data loss.


On October 27, 2015 2:58:25 PM CET, Gilberto Nunes
> wrote:

Now the VM seems to doing well...

I put some limits on the Virtual HD ( Thanks Dmitry),
mount NFS with soft and proto=udp, and right now stress
the VM with a lot of impasync jobs, in order to migrate
huge mailbox to oldserver to the VM...
I will performe others tests yet, but I thing there peace
here again...
Thanks for help and sorry to blame proxmox, guys!
My apologies!


2015-10-26 17:55 GMT-02:00 Hector Suarez Planas
>:

...

Answer my own question: yes! There is a bug
related NFS on kernel 3.13 that is the default on
Ubuntu 14.04...
I will try with kernel 3.19...
Sorry for sent this to the list...


Rectify is wise. Do not blame Proxmox for every bad
thing that happens to you with it. You must have
patience with things that come from the world of Open
Source.

:-)

-- 
=

Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Sent from my Android phone with K-9 Mail. Please excuse my

brevity.
 This mail was virus scanned and spam checked before
delivery. This mail is also DKIM signed. See header
dkim-signature.




-- 


Gilberto Ferreira
+55 (47) 9676-7530 
Skype: gilberto.nunes36




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gerald Brandt
I didn't use OCFS, either XFS or EXT4.  No need for OCFS unless you're
dual primary.


Maybe OCFS is doing something weird for you?

Gerald


On 2015-10-29 06:50 AM, Gilberto Nunes wrote:

That's precisely my scenario...
I have DRBD + OCFS and a gigaethernet cable link the two server.
The server with DRBD + OCFS has NFS with ubuntu 14.


2015-10-29 9:43 GMT-02:00 Gerald Brandt >:


I use NFSv4, and I've never had any problems.  You have to hand
modify the storage config file.  I've used Ubuntu NFS servers with
DRBD, and more recently, Synology boxes (again with DRBD).  I also
bond my network links for 2GB bandwidth.

I'm not sure why Proxmox doesn't support NFSv4 out of the box.

Gerald



On 2015-10-29 04:46 AM, Gilberto Nunes wrote:

Hi guys

Last night, i get IO error again!
Perhaps i must give up NFS and considrer glusterFS or iSCSI???

Thanks for any help

Best regards

2015-10-27 13:25 GMT-02:00 Gilberto Nunes
>:

What you say if drop soft and leave only proto=udp???

2015-10-27 12:36 GMT-02:00 Michael Rasmussen >:

Remember softlocks is dangerous and can cause data loss.


On October 27, 2015 2:58:25 PM CET, Gilberto Nunes
> wrote:

Now the VM seems to doing well...

I put some limits on the Virtual HD ( Thanks Dmitry),
mount NFS with soft and proto=udp, and right now
stress the VM with a lot of impasync jobs, in order
to migrate huge mailbox to oldserver to the VM...
I will performe others tests yet, but I thing there
peace here again...
Thanks for help and sorry to blame proxmox, guys!
My apologies!


2015-10-26 17:55 GMT-02:00 Hector Suarez Planas
>:

...

Answer my own question: yes! There is a bug
related NFS on kernel 3.13 that is the
default on Ubuntu 14.04...
I will try with kernel 3.19...
Sorry for sent this to the list...


Rectify is wise. Do not blame Proxmox for every
bad thing that happens to you with it. You must
have patience with things that come from the
world of Open Source.

:-)

-- 
=

Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com

http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Sent from my Android phone with K-9 Mail. Please excuse

my brevity.
 This mail was virus scanned and spam checked before
delivery. This mail is also DKIM signed. See header
dkim-signature.




-- 


Gilberto Ferreira
+55 (47) 9676-7530 
Skype: gilberto.nunes36




-- 


Gilberto Ferreira
+55 (47) 9676-7530 
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
>It works best with three nodes, other quorums can get tricky on restarts.

Well... At the moment I have just one node... But I have plans to add more
bricks...
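
For when the extra bricks arrive: a three-node replica volume would be created
with something like the sketch below (hostnames and brick paths are
hypothetical):

gluster peer probe node2
gluster peer probe node3
gluster volume create gv0 replica 3 node1:/bricks/gv0 node2:/bricks/gv0 node3:/bricks/gv0
gluster volume start gv0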




2015-10-29 10:44 GMT-02:00 Lindsay Mathieson :

>
> On 29 October 2015 at 22:18, Gilberto Nunes 
> wrote:
>
>> I'm giving up DRBD + OCFS + NFS and going to GlusterFS
>
>
> Gluster is pretty good, I'm experimenting with it at the moment.
>
> It works best with three nodes, other quorums can get tricky on restarts.
>
>
> --
> Lindsay
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Michael Rasmussen
What cache settings do you have for the disks?
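
(The cache mode, if any, sits on the disk line of the VM config; "qm config
<vmid>" would show something along these lines, modelled on the virtio0 line
quoted elsewhere in the thread:

virtio0: stg:120/vm-120-disk-1.qcow2,cache=writeback,size=2000G

If no cache= option is present, Proxmox uses its default, i.e. no cache.)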

On October 29, 2015 2:09:33 PM CET, Gilberto Nunes  
wrote:
>Disk is virtio, alright!
>Because I need live migration and HA.
>With IDE/SATA there is no way to do that, AFAIK!
>
>2015-10-29 11:07 GMT-02:00 Luis G. Coralle
>:
>
>> Disk virtio too?
>>
>> 2015-10-29 10:03 GMT-03:00 Gilberto Nunes
>:
>>
>>> I alredy test with/without virtio. With E1000, with realtek.
>>> Same results...
>>>
>>> 2015-10-29 10:55 GMT-02:00 Luis G. Coralle
>
>>> :
>>>
 Hi, network is virtio?

 2015-10-25 20:33 GMT-03:00 Gilberto Nunes
>:

> Well friends...
>
> I really try hard to work with PVE, but is a pain in the ass...
> Nothing seems to work..
> I deploy Ubuntu with NFS storage connected through direct cable (
>1 gb
> ) and beside follow all docs available in the wiki and internet,
>one single
> VM continue to crash over and over again...
>
> So I realise that is time to say good bye to Proxmox...
>
> Live long and prosper...
>
>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


 --
 Luis G. Coralle

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


>>>
>>>
>>> --
>>>
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530
>>> Skype: gilberto.nunes36
>>>
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>>
>>
>>
>> --
>> Luis G. Coralle
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>
>-- 
>
>Gilberto Ferreira
>+55 (47) 9676-7530
>Skype: gilberto.nunes36
>
>
>
>
>___
>pve-user mailing list
>pve-user@pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



This mail was virus scanned and spam checked before delivery.
This mail is also DKIM signed. See header dkim-signature.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
I already tested with/without virtio. With E1000, with Realtek.
Same results...

2015-10-29 10:55 GMT-02:00 Luis G. Coralle :

> Hi, network is virtio?
>
> 2015-10-25 20:33 GMT-03:00 Gilberto Nunes :
>
>> Well friends...
>>
>> I really try hard to work with PVE, but is a pain in the ass...
>> Nothing seems to work..
>> I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb )
>> and beside follow all docs available in the wiki and internet, one single
>> VM continue to crash over and over again...
>>
>> So I realise that is time to say good bye to Proxmox...
>>
>> Live long and prosper...
>>
>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530
>> Skype: gilberto.nunes36
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>
> --
> Luis G. Coralle
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
I'm giving up DRBD + OCFS + NFS and going to GlusterFS

2015-10-29 10:01 GMT-02:00 Gerald Brandt :

> I didn't use OCFS, eithet XFS or EXT4.  No need for OCFS unless you're
> dual primary.
>
> Maybe OCFS is doing something weird for you?
>
> Gerald
>
>
>
> On 2015-10-29 06:50 AM, Gilberto Nunes wrote:
>
> That's precisely my scenario...
> I have DRBD + OCFS and a gigaethernet cable link the two server.
> The server with DRBD + OCFS has NFS with ubuntu 14.
>
>
> 2015-10-29 9:43 GMT-02:00 Gerald Brandt :
>
>> I use NFSv4, and I've never had any problems.  You have to hand modify
>> the storage config file.  I've used Ubuntu NFS servers with DRBD, and more
>> recently, Synology boxes (again with DRBD).  I also bond my network links
>> for 2GB bandwidth.
>>
>> I'm not sure why Proxmox doesn't support NFSv4 out of the box.
>>
>> Gerald
>>
>>
>>
>> On 2015-10-29 04:46 AM, Gilberto Nunes wrote:
>>
>> Hi guys
>>
>> Last night, i get IO error again!
>> Perhaps i must give up NFS and considrer glusterFS or iSCSI???
>>
>> Thanks for any help
>>
>> Best regards
>>
>> 2015-10-27 13:25 GMT-02:00 Gilberto Nunes < 
>> gilberto.nune...@gmail.com>:
>>
>>> What you say if drop soft and leave only proto=udp???
>>>
>>> 2015-10-27 12:36 GMT-02:00 Michael Rasmussen < 
>>> m...@miras.org>:
>>>
 Remember softlocks is dangerous and can cause data loss.


 On October 27, 2015 2:58:25 PM CET, Gilberto Nunes <
 gilberto.nune...@gmail.com> wrote:
>
> Now the VM seems to doing well...
>
> I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with
> soft and proto=udp, and right now stress the VM with a lot of impasync
> jobs, in order to migrate huge mailbox to oldserver to the VM...
> I will performe others tests yet, but I thing there peace here again...
> Thanks for help and sorry to blame proxmox, guys!
> My apologies!
>
>
> 2015-10-26 17:55 GMT-02:00 Hector Suarez Planas <
> hector.sua...@codesa.co.cu>:
>
>> ...
>>
>> Answer my own question: yes! There is a bug related NFS on kernel
>>> 3.13 that is the default on Ubuntu 14.04...
>>> I will try with kernel 3.19...
>>> Sorry for sent this to the list...
>>>
>>
>> Rectify is wise. Do not blame Proxmox for every bad thing that
>> happens to you with it. You must have patience with things that come from
>> the world of Open Source.
>>
>> :-)
>>
>> --
>> =
>> Lic. Hector Suarez Planas
>> Administrador Nodo CODESA
>> Santiago de Cuba
>> -
>> Blog: 
>> http://nihilanthlnxc.cubava.cu/
>> ICQ ID: 681729738
>> Conferendo ID: hspcuba
>> =
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> 
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>
>
 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.
  This mail was virus scanned and spam checked before delivery. This
 mail is also DKIM signed. See header dkim-signature.

>>>
>>>
>>>
>>> --
>>>
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530 <%2B55%20%2847%29%209676-7530>
>>> Skype: gilberto.nunes36
>>>
>>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530 <%2B55%20%2847%29%209676-7530>
>> Skype: gilberto.nunes36
>>
>>
>>
>> ___
>> pve-user mailing 
>> listpve-user@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>
>
> ___
> pve-user mailing 
> listpve-user@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
It could be, but couldn't it also be that with IDE or SATA the performance is
not the same as with virtio, or am I wrong???

2015-10-29 12:14 GMT-02:00 Lindsay Mathieson :

>
> On 29 October 2015 at 23:09, Gilberto Nunes 
> wrote:
>
>> Disk is virtio, alright!
>> Because I need live migration and HA.
>> With IDE/SATA there is no way to do that, AFAIK!
>
>
> You can live migrate IDE/SATA
>
>
> --
> Lindsay
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Laurent Dumont
Even with local storage? I thought that was one of the limitations of a node 
with that kind of setup.

On Oct 29, 2015 10:14 AM, Lindsay Mathieson  wrote:
>
>
> On 29 October 2015 at 23:09, Gilberto Nunes  
> wrote:
>>
>> Disk is virtio, alright!
>> Because I need live migration and HA.
>> With IDE/SATA there is no way to do that, AFAIK!
>
>
> You can live migrate IDE/SATA
>
>
> -- 
> Lindsay
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Adam Thompson
At least as of ~v3.4, you can live migrate IDE but not SATA.  I've run 
into this a few times myself when I configured SATA CD-ROMs for some reason.

-Adam

On 15-10-29 09:14 AM, Lindsay Mathieson wrote:


On 29 October 2015 at 23:09, Gilberto Nunes 
> wrote:


Disk is virtio, alright!
Because I need live migration and HA.
With IDE/SATA there is no way to do that, AFAIK!


You can live migrate IDE/SATA


--
Lindsay


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
Oh God!

I re-ran the last test with dd and the VM just died!!! So sad :(

2015-10-29 14:32 GMT-02:00 Gilberto Nunes :

> Inside the guest, with GlusterFS I get this:
>
> dd if=/dev/zero of=writetest bs=8k count=131072
>
> 131072+0 registros de entrada
> 131072+0 registros de saída
> 1073741824 bytes (1,1 GB) copiados, 0,858862 s, 1,3 GB/s
>
>
>
>
> 2015-10-29 13:57 GMT-02:00 Gilberto Nunes :
>
>> This one can easy be a new thread, but let say it anyway
>>
>> I run simple dd test like this:
>>
>> dd if=/dev/zero of=writetest bs=8k count=131072
>>
>>
>> In NFS
>>
>> 131072+0 registros de entrada
>> 131072+0 registros de saída
>> 1073741824 bytes (1,1 GB) copiados, 17,1806 s, 62,5 MB/s
>>
>> In GlusterFS
>>
>> 131072+0 records in
>> 131072+0 records out
>> 1073741824 bytes (1.1 GB) copied, 9.13338 s, 118 MB/s
>>
>> And with conv=fsync
>>
>> GlusterFS
>>
>> dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
>> 131072+0 records in
>> 131072+0 records out
>> 1073741824 bytes (1.1 GB) copied, 11.0702 s, 97.0 MB/s
>>
>> NFS
>>
>>  dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
>> 131072+0 registros de entrada
>> 131072+0 registros de saída
>> 1073741824 bytes (1,1 GB) copiados, 14,2901 s, 75,1 MB/s
>>
>> Both servers are PowerEdge R430 with SAS HD 7.2 K RPM, with 16 GB of
>> memory, linked with 1 GB direct cable...
>>
>> GlusterFS won?!?!
>>
>> I would like to hear some opnions...
>>
>> Thanks a lot
>>
>>
>> 2015-10-29 13:02 GMT-02:00 Gilberto Nunes :
>>
>>> It could be, but it could be also that with IDE or SATA the performance
>>> is not the same as well with Virtio, or am I wrong???
>>>
>>> 2015-10-29 12:14 GMT-02:00 Lindsay Mathieson <
>>> lindsay.mathie...@gmail.com>:
>>>

 On 29 October 2015 at 23:09, Gilberto Nunes  wrote:

> Disk is virtio, alright!
> Because I need live migration and HA.
> With IDE/SATA there is no way to do that, AFAIK!


 You can live migrate IDE/SATA


 --
 Lindsay

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


>>>
>>>
>>> --
>>>
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530
>>> Skype: gilberto.nunes36
>>>
>>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530
>> Skype: gilberto.nunes36
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
Inside the guest, with GlusterFS I get this:

dd if=/dev/zero of=writetest bs=8k count=131072

131072+0 registros de entrada
131072+0 registros de saída
1073741824 bytes (1,1 GB) copiados, 0,858862 s, 1,3 GB/s




2015-10-29 13:57 GMT-02:00 Gilberto Nunes :

> This one can easy be a new thread, but let say it anyway
>
> I run simple dd test like this:
>
> dd if=/dev/zero of=writetest bs=8k count=131072
>
>
> In NFS
>
> 131072+0 registros de entrada
> 131072+0 registros de saída
> 1073741824 bytes (1,1 GB) copiados, 17,1806 s, 62,5 MB/s
>
> In GlusterFS
>
> 131072+0 records in
> 131072+0 records out
> 1073741824 bytes (1.1 GB) copied, 9.13338 s, 118 MB/s
>
> And with conv=fsync
>
> GlusterFS
>
> dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
> 131072+0 records in
> 131072+0 records out
> 1073741824 bytes (1.1 GB) copied, 11.0702 s, 97.0 MB/s
>
> NFS
>
>  dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
> 131072+0 registros de entrada
> 131072+0 registros de saída
> 1073741824 bytes (1,1 GB) copiados, 14,2901 s, 75,1 MB/s
>
> Both servers are PowerEdge R430 with SAS HD 7.2 K RPM, with 16 GB of
> memory, linked with 1 GB direct cable...
>
> GlusterFS won?!?!
>
> I would like to hear some opnions...
>
> Thanks a lot
>
>
> 2015-10-29 13:02 GMT-02:00 Gilberto Nunes :
>
>> It could be, but it could be also that with IDE or SATA the performance
>> is not the same as well with Virtio, or am I wrong???
>>
>> 2015-10-29 12:14 GMT-02:00 Lindsay Mathieson > >:
>>
>>>
>>> On 29 October 2015 at 23:09, Gilberto Nunes 
>>> wrote:
>>>
 Disk is virtio, alright!
 Because I need live migration and HA.
 With IDE/SATA there is no way to do that, AFAIK!
>>>
>>>
>>> You can live migrate IDE/SATA
>>>
>>>
>>> --
>>> Lindsay
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530
>> Skype: gilberto.nunes36
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Lindsay Mathieson
On 30 October 2015 at 02:32, Gilberto Nunes 
wrote:

> dd if=/dev/zero of=writetest bs=8k count=131072
>
> 131072+0 registros de entrada
> 131072+0 registros de saída
> 1073741824 bytes (1,1 GB) copiados, 0,858862 s, 1,3 GB/s


Wow! I thought that was MB/s at first, but 1.3GB/s is pretty impressive.
Are you using writeback caching with the VM?

Have you tried with conv=fdatasync and oflag=dsync?
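
That is, something along these lines, which forces the data to actually reach
the backing storage instead of stopping in the guest's page cache:

dd if=/dev/zero of=writetest bs=8k count=131072 conv=fdatasync
dd if=/dev/zero of=writetest bs=8k count=131072 oflag=dsync

The first flushes once at the end; the second syncs every block, so it is
usually much slower and closer to a worst case.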


-- 
Lindsay
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
Hi guys

Last night I got an IO error again!
Perhaps I must give up NFS and consider GlusterFS or iSCSI???

Thanks for any help

Best regards

2015-10-27 13:25 GMT-02:00 Gilberto Nunes :

> What you say if drop soft and leave only proto=udp???
>
> 2015-10-27 12:36 GMT-02:00 Michael Rasmussen :
>
>> Remember softlocks is dangerous and can cause data loss.
>>
>>
>> On October 27, 2015 2:58:25 PM CET, Gilberto Nunes <
>> gilberto.nune...@gmail.com> wrote:
>>>
>>> Now the VM seems to doing well...
>>>
>>> I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with
>>> soft and proto=udp, and right now stress the VM with a lot of impasync
>>> jobs, in order to migrate huge mailbox to oldserver to the VM...
>>> I will performe others tests yet, but I thing there peace here again...
>>> Thanks for help and sorry to blame proxmox, guys!
>>> My apologies!
>>>
>>>
>>> 2015-10-26 17:55 GMT-02:00 Hector Suarez Planas <
>>> hector.sua...@codesa.co.cu>:
>>>
 ...

 Answer my own question: yes! There is a bug related NFS on kernel 3.13
> that is the default on Ubuntu 14.04...
> I will try with kernel 3.19...
> Sorry for sent this to the list...
>

 Rectify is wise. Do not blame Proxmox for every bad thing that happens
 to you with it. You must have patience with things that come from the world
 of Open Source.

 :-)

 --
 =
 Lic. Hector Suarez Planas
 Administrador Nodo CODESA
 Santiago de Cuba
 -
 Blog: http://nihilanthlnxc.cubava.cu/
 ICQ ID: 681729738
 Conferendo ID: hspcuba
 =

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

>>>
>>>
>>>
>> --
>> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>>  This mail was virus scanned and spam checked before delivery. This
>> mail is also DKIM signed. See header dkim-signature.
>>
>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Emmanuel Kasper
Hi

On 10/26/2015 02:25 PM, Dmitry Petuhov wrote:
> There's issue with NFS: if you try to send over it more than network can deal 
> with (100-120 MBps for 1-gigabit), it imposes several-second pauses, which 
> are 
> being interpreted like hardware errors. These bursts may be just few seconds 
> long to trigger issue.
> 
> You can try to limit bandwidth in virtual HDD config to something like 60-70 
> MBps. This should be enough for 1-gigabit network.
> 
> But my opinion is that it's better to switch to iSCSI.


Is that something that you can easily reproduce? (dd with conv=fsync
or another tool)

I have heard about transient problems with NFS servers at some
hosters: when running vzdump to an NFS target, the backup would hang at
some point because the NFS connection was lost.

Up to now I thought this was caused by some kind of network throttling
done by the hoster, which was blocking the traffic for longer
than the NFS timeout.

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gilberto Nunes
Hi

>Is that something that you can easily reproduce  ? ( dd with conv=fsync
>or other tool)

Should I run such a command on the NFS server, on the NFS share mounted on
Proxmox, or inside the guest?

>I do have heard about transient problems from NFS Servers at some
>hosters, when running vzdump on a NFS target, the backup would hang at
>some point because the NFS connection was lost.

Yeah! I had some problems like this in the past, but solved them with the soft
option in storage.cfg
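
(Concretely that means an options line on the nfs entry in /etc/pve/storage.cfg,
e.g. "options soft,proto=udp", though, as Michael notes elsewhere in the thread,
a soft mount can cause data loss when the server does not respond in time.)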

>Up to know I though this was caused by some kind of network throttling
>done by the hoster, which was causing the traffic to be blocked longer
>that the NFS timeout.

The network path is a 1 Gb cable. There's nothing else between the storage and
Proxmox.
On the NFS server no log indicates any timeout, nor on the Proxmox server. It is
just inside the VM, and it happens when I open multiple imapsync commands (3 at
least) on the other mail host, in order to sync mailboxes over the network.
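
For context, each of those jobs is an imapsync run roughly like the sketch
below (hosts and credentials are placeholders):

imapsync --host1 oldmail.example.com --user1 user@example.com --password1 'xxx' \
         --host2 newmail.example.com --user2 user@example.com --password2 'xxx'

A few of these in parallel produce sustained writes into the Zimbra store,
which seems to be what triggers the stalls.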


2015-10-29 8:35 GMT-02:00 Emmanuel Kasper :

> Hi
>
> On 10/26/2015 02:25 PM, Dmitry Petuhov wrote:
> > There's issue with NFS: if you try to send over it more than network can
> deal
> > with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
> which are
> > being interpreted like hardware errors. These bursts may be just few
> seconds
> > long to trigger issue.
> >
> > You can try to limit bandwidth in virtual HDD config to something like
> 60-70
> > MBps. This should be enough for 1-gigabit network.
> >
> > But my opinion is that it's better to switch to iSCSI.
>
>
> Is that something that you can easily reproduce  ? ( dd with conv=fsync
> or other tool)
>
> I do have heard about transient problems from NFS Servers at some
> hosters, when running vzdump on a NFS target, the backup would hang at
> some point because the NFS connection was lost.
>
> Up to know I though this was caused by some kind of network throttling
> done by the hoster, which was causing the traffic to be blocked longer
> that the NFS timeout.
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-27 Thread Michael Rasmussen
Remember, soft mounts are dangerous and can cause data loss.

On October 27, 2015 2:58:25 PM CET, Gilberto Nunes  
wrote:
>Now the VM seems to doing well...
>
>I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with
>soft
>and proto=udp, and right now stress the VM with a lot of impasync jobs,
>in
>order to migrate huge mailbox to oldserver to the VM...
>I will performe others tests yet, but I thing there peace here again...
>Thanks for help and sorry to blame proxmox, guys!
>My apologies!
>
>
>2015-10-26 17:55 GMT-02:00 Hector Suarez Planas
>
>:
>
>> ...
>>
>> Answer my own question: yes! There is a bug related NFS on kernel
>3.13
>>> that is the default on Ubuntu 14.04...
>>> I will try with kernel 3.19...
>>> Sorry for sent this to the list...
>>>
>>
>> Rectify is wise. Do not blame Proxmox for every bad thing that
>happens to
>> you with it. You must have patience with things that come from the
>world of
>> Open Source.
>>
>> :-)
>>
>> --
>> =
>> Lic. Hector Suarez Planas
>> Administrador Nodo CODESA
>> Santiago de Cuba
>> -
>> Blog: http://nihilanthlnxc.cubava.cu/
>> ICQ ID: 681729738
>> Conferendo ID: hspcuba
>> =
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>
>
>-- 
>
>Gilberto Ferreira
>+55 (47) 9676-7530
>Skype: gilberto.nunes36
>
>
>
>
>___
>pve-user mailing list
>pve-user@pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



This mail was virus scanned and spam checked before delivery.
This mail is also DKIM signed. See header dkim-signature.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-27 Thread Gilberto Nunes
What do you say if I drop soft and leave only proto=udp???

2015-10-27 12:36 GMT-02:00 Michael Rasmussen :

> Remember softlocks is dangerous and can cause data loss.
>
>
> On October 27, 2015 2:58:25 PM CET, Gilberto Nunes <
> gilberto.nune...@gmail.com> wrote:
>>
>> Now the VM seems to doing well...
>>
>> I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with soft
>> and proto=udp, and right now stress the VM with a lot of impasync jobs, in
>> order to migrate huge mailbox to oldserver to the VM...
>> I will performe others tests yet, but I thing there peace here again...
>> Thanks for help and sorry to blame proxmox, guys!
>> My apologies!
>>
>>
>> 2015-10-26 17:55 GMT-02:00 Hector Suarez Planas <
>> hector.sua...@codesa.co.cu>:
>>
>>> ...
>>>
>>> Answer my own question: yes! There is a bug related NFS on kernel 3.13
 that is the default on Ubuntu 14.04...
 I will try with kernel 3.19...
 Sorry for sent this to the list...

>>>
>>> Rectify is wise. Do not blame Proxmox for every bad thing that happens
>>> to you with it. You must have patience with things that come from the world
>>> of Open Source.
>>>
>>> :-)
>>>
>>> --
>>> =
>>> Lic. Hector Suarez Planas
>>> Administrador Nodo CODESA
>>> Santiago de Cuba
>>> -
>>> Blog: http://nihilanthlnxc.cubava.cu/
>>> ICQ ID: 681729738
>>> Conferendo ID: hspcuba
>>> =
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>
>>
>>
> --
> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>  This mail was virus scanned and spam checked before delivery. This
> mail is also DKIM signed. See header dkim-signature.
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-27 Thread Gilberto Nunes
Now the VM seems to be doing well...

I put some limits on the virtual HD (thanks Dmitry), mounted NFS with soft
and proto=udp, and right now I am stressing the VM with a lot of imapsync jobs,
in order to migrate huge mailboxes from the old server to the VM...
I will perform other tests yet, but I think there is peace here again...
Thanks for the help, and sorry for blaming Proxmox, guys!
My apologies!


2015-10-26 17:55 GMT-02:00 Hector Suarez Planas 
:

> ...
>
> Answer my own question: yes! There is a bug related NFS on kernel 3.13
>> that is the default on Ubuntu 14.04...
>> I will try with kernel 3.19...
>> Sorry for sent this to the list...
>>
>
> Rectify is wise. Do not blame Proxmox for every bad thing that happens to
> you with it. You must have patience with things that come from the world of
> Open Source.
>
> :-)
>
> --
> =
> Lic. Hector Suarez Planas
> Administrador Nodo CODESA
> Santiago de Cuba
> -
> Blog: http://nihilanthlnxc.cubava.cu/
> ICQ ID: 681729738
> Conferendo ID: hspcuba
> =
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
Admin or whatever your name is... I have more than 10 years of dealing with
Unix, Linux and Windows.
I know what I did.
To the others: Yes! It should be straightforward either way...
The Proxmox server is a PowerEdge R430 with 32 GB of memory.
The storage is the same kind of server.
Both with SAS hard drives.
Between these servers there's a cable in order to provide a Gigabit Ethernet
link.
On the second server I have Ubuntu 15.04 installed, with DRBD and OCFS mounted
on the /data FS.
On the same server I have NFS installed, serving that FS to the Proxmox machine.
On the Proxmox machine I have nothing except a VM with Ubuntu 14.04 installed,
where the Zimbra Mail Server was deployed...
On both physical servers everything is OK... NO errors on disk and everything
is running smoothly.
But, INSIDE THE VM HOSTED ON PROXMOX, many IO errors!...
This corrupts the FS at some point, which makes Zimbra crash!
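
The errors meant here are block-layer ones in the guest's kernel log, visible
with something like:

dmesg -T | grep -iE 'i/o error|blk_update_request'

while on the Proxmox host an NFS problem at the same timestamps would show up
as "nfs: server ... not responding" lines in /var/log/syslog and as
retransmissions in "nfsstat -rc". (Generic commands, not specific to this
setup.)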

BTW, I will return Zimbra to a physical machine right now and deploy a lab
env for test purposes

Best regards


2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com :

> Your nfs settings.
>
> Hire you people with the knowledge or up skill your knowledge.
>
> Sent from my iPhone
>
> > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes 
> wrote:
> >
> > Well friends...
> >
> > I really try hard to work with PVE, but is a pain in the ass...
> > Nothing seems to work..
> > I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb )
> and beside follow all docs available in the wiki and internet, one single
> VM continue to crash over and over again...
> >
> > So I realise that is time to say good bye to Proxmox...
> >
> > Live long and prosper...
> >
> >
> >
> >
> > --
> >
> > Gilberto Ferreira
> > +55 (47) 9676-7530
> > Skype: gilberto.nunes36
> >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Eneko Lacunza

Gilberto,

If you want help you have to provide clearer details, starting with the
PVE version used, as reported by "pveversion -v". Also, I'm not able to
fully understand your last description of your Proxmox/NFS deployment.


There are many happy users of Proxmox, myself included. I won't say it 
is free of issues, but no software is either.


I have used NFS storage mostly for backups but also for demo/test
purposes, and it has worked fine. I've had some issues, but they all ended up
being errors on my part.


I see you don't have a trivial setup on your NFS server, with DRBD and 
OCFS. Are you really sure that setup is working OK? Do you have other 
VMs living there?


Cheers
Eneko

El 26/10/15 a las 10:48, Gilberto Nunes escribió:
Admin or whatever is your name... I have more than 10 years deal with 
Unix Linux and Windows.

I know what I done.
To the others: Yes! Should be straightforward any way...
Proxmox server is a PowerEdge R430 with 32 GB of memory.
Storage is the same server.
Both with SAS hard driver.
Between this servers there's a cable in order to provide a 
gigaethernet link.
In second server, I have ubuntu 15.04 installed with DRBD and OCFS 
mounted in /data FS.

In the same server, I have NFS installed and server FS to Proxmox machine.
In proxmox machine I have nothing, except a VM with Ubuntu 14.04 
installed, where Zimbra Mail Server was deploy...
Inside bothe physical servers, everything is ok... NO error in disc 
and everything is running smoothly.

But, INSIDE THE VM HOSTED WITH PROXMOX, many IO errors!...
This make FS corrputed in some point that make Zimbra crash!

BTW, I will return Zimbra to a physical machine right now and deploy 
lab env for test purpose


Best regards


2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com 
 >:


Your nfs settings.

Hire you people with the knowledge or up skill your knowledge.

Sent from my iPhone

> On 26 Oct 2015, at 1:33 AM, Gilberto Nunes
>
wrote:
>
> Well friends...
>
> I really try hard to work with PVE, but is a pain in the ass...
> Nothing seems to work..
> I deploy Ubuntu with NFS storage connected through direct cable
( 1 gb ) and beside follow all docs available in the wiki and
internet, one single VM continue to crash over and over again...
>
> So I realise that is time to say good bye to Proxmox...
>
> Live long and prosper...
>
>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530 
> Skype: gilberto.nunes36
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Dmitry Petuhov

  
  
Of course. But serving virtual disks over gigabit ethernet is generally a bad
idea too. And a stable 70 megabytes per second is much better than an unstable
100. Also I think that you don't need to limit iops, just throughput.

26.10.2015 18:35, Gilberto Nunes wrote:

> But i thing that with this limitation, transfer 30 gb of mail's over network
> it talks forever, don't you agree??
>
> 2015-10-26 13:25 GMT-02:00 Gilberto Nunes :
>
>> Regard badnwidth limitation, you mean like this:
>>
>> virtio0:
>> stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
>>
>> 2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :
>>
>>> There's issue with NFS: if you try to send over it more than network can
>>> deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
>>> which are being interpreted like hardware errors. These bursts may be just
>>> few seconds long to trigger issue.
>>>
>>> You can try to limit bandwidth in virtual HDD config to something like
>>> 60-70 MBps. This should be enough for 1-gigabit network.
>>>
>>> But my opinion is that it's better to switch to iSCSI.
>>>
>>> 26.10.2015 16:13, Gilberto Nunes wrote:
>>>
>>>> BTW, all HD is SAS with gigaethernet between the servers...
>>>> I already try with gigaethernet switch in order to isoleted the Proxmox
>>>> and Storage from external (LAN) traffic Not works at all!
>>>>
>>>> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes :
>>>>
>>>>> HDD config is standard... I do not make any change...
>>>>> I wonder why I have others VM, as I said before, with Ubuntu, CentOS,
>>>>> Windows 7 and 2012, and work fine!
>>>>> Not of all have huge big files or a lot of connections, but they stand
>>>>> solid!
>>>>> But when require a lot access and deal with big files, here's came the
>>>>> devil! The VM just get IO error and die before 2 or 3 days...
>>>>> On both physical servers, not hight load. I check with 

Re: [PVE-User] I try hard but...

2015-10-26 Thread Hector Suarez Planas

...


I agree with iSCSI, but NFS leave me the option to make snapshot...
I will try with iSCSI...


I have tried iSCSI and GlusterFS with good results. With little equipment, of
course (I live in Cuba).




Thanks for the trick


Try it.

:-)

--
=
Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
"For Proxmox VE? Sure. :-)"
Yes! You bet!

2015-10-26 11:50 GMT-02:00 Hector Suarez Planas 
:

> Greetings.
>
> Well friends...
>>
>> I really try hard to work with PVE, but is a pain in the ass...
>>
>
> I´m totaly not agreed with you.
>
> Nothing seems to work..
>> I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb )
>>
>
> NFS connected through 1 GbE? You asked that (bottleneck) a long time ago
> and you was obtained many responses.
>
> and beside follow all docs available in the wiki and internet, one single
>> VM continue to crash over and over again...
>>
>
> That crash must have a cause: HDDs, network connections, server hardware,
> etc.
>
>
>> So I realise that is time to say good bye to Proxmox...
>>
>
> Bad decision.
>
>
>> Live long and prosper...
>>
>
> For Proxmox VE? Sure. :-)
>
> --
> =
> Lic. Hector Suarez Planas
> Administrador Nodo CODESA
> Santiago de Cuba
> -
> Blog: http://nihilanthlnxc.cubava.cu/
> ICQ ID: 681729738
> Conferendo ID: hspcuba
> =
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Dmitry Petuhov

  
  
What's virtual HDD config? Which controller, which cache mode?

I suppose it's bad idea to run KVM machines via NFS: it may produce big enough
delays under high loads, which may look like timeouts on client side.
If you want some network storage, iSCSI can be better choice.

26.10.2015 12:48, Gilberto Nunes wrote:

> Admin or whatever is your name... I have more than 10 years deal with Unix
> Linux and Windows.
> I know what I done.
> To the others: Yes! Should be straightforward any way...
> Proxmox server is a PowerEdge R430 with 32 GB of memory.
> Storage is the same server.
> Both with SAS hard driver.
> Between this servers there's a cable in order to provide a gigaethernet link.
> In second server, I have ubuntu 15.04 installed with DRBD and OCFS mounted in
> /data FS.
> In the same server, I have NFS installed and server FS to Proxmox machine.
> In proxmox machine I have nothing, except a VM with Ubuntu 14.04 installed,
> where Zimbra Mail Server was deploy...
> Inside bothe physical servers, everything is ok... NO error in disc and
> everything is running smoothly.
> But, INSIDE THE VM HOSTED WITH PROXMOX, many IO errors!...
> This make FS corrputed in some point that make Zimbra crash!
>
> BTW, I will return Zimbra to a physical machine right now and deploy lab env
> for test purpose
>
> Best regards
>
> 2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com :
>
>> Your nfs settings.
>>
>> Hire you people with the knowledge or up skill your knowledge.
>>
>> Sent from my iPhone
>>
>> > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes  wrote:
>> >
>> > Well friends...
>> >
>> > I really try hard to work with PVE, but is a pain in the ass...
>> > Nothing seems to work..
>> > I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb )
>> > and beside follow all docs available in the wiki and internet, one single
>> > VM continue to crash over and over again...
>> >
>> > So I realise that is time to say good bye to Proxmox...
>> >
>> > Live long and prosper...
>> >
>> > --
>> >
>> > Gilberto Ferreira
>> > +55 (47) 9676-7530
>> > Skype: gilberto.nunes36
>> >
>> > ___
>> > pve-user mailing list
>> > pve-user@pve.proxmox.com
>> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36

Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
BTW, all the HDs are SAS, with Gigabit Ethernet between the servers...
I already tried a Gigabit Ethernet switch in order to isolate the Proxmox and
storage traffic from the external (LAN) traffic. It did not work at all!

2015-10-26 11:12 GMT-02:00 Gilberto Nunes :

> HDD config is standard... I do not make any change...
> I wonder why I have others VM, as I said before, with Ubuntu, CentOS,
> Windows 7 and 2012, and work fine!
> Not of all have huge big files or a lot of connections, but they stand
> solid!
> But when require a lot access and deal with big files, here's came the
> devil! The VM just get IO error and die before 2 or 3 days...
> On both physical servers, not hight load. I check with HTOP and TOP as
> well.
> iostat doesn't show nothing wrong.
> In side the VM, a lot I/O error from time to time...
> IO goes high endeed, but is so expect because I am using imapsync to sync
> mails to old server to Zimbra Mail Server...
> But I do not expect the VM die with IO errors!
> It's so frustrating... :(
>
> 2015-10-26 11:04 GMT-02:00 Dmitry Petuhov :
>
>> What's virtual HDD config? Which controller, which cache mode?
>>
>> I suppose it's bad idea to run KVM machines via NFS: it may produce big
>> enough delays under high loads, which may look like timeouts on client side.
>> If you want some network storage, iSCSI can be better chiose.
>>
>> 26.10.2015 12:48, Gilberto Nunes пишет:
>>
>> Admin or whatever is your name... I have more than 10 years deal with
>> Unix Linux and Windows.
>> I know what I done.
>> To the others: Yes! Should be straightforward any way...
>> Proxmox server is a PowerEdge R430 with 32 GB of memory.
>> Storage is the same server.
>> Both with SAS hard driver.
>> Between this servers there's a cable in order to provide a gigaethernet
>> link.
>> In second server, I have ubuntu 15.04 installed with DRBD and OCFS
>> mounted in /data FS.
>> In the same server, I have NFS installed and server FS to Proxmox machine.
>> In proxmox machine I have nothing, except a VM with Ubuntu 14.04
>> installed, where Zimbra Mail Server was deploy...
>> Inside bothe physical servers, everything is ok... NO error in disc and
>> everything is running smoothly.
>> But, INSIDE THE VM HOSTED WITH PROXMOX, many IO errors!...
>> This make FS corrputed in some point that make Zimbra crash!
>>
>> BTW, I will return Zimbra to a physical machine right now and deploy lab
>> env for test purpose
>>
>> Best regards
>>
>>
>> 2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com <
>> ad...@extremeshok.com>:
>>
>>> Your nfs settings.
>>>
>>> Hire you people with the knowledge or up skill your knowledge.
>>>
>>> Sent from my iPhone
>>>
>>> > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes <
>>> gilberto.nune...@gmail.com> wrote:
>>> >
>>> > Well friends...
>>> >
>>> > I really try hard to work with PVE, but is a pain in the ass...
>>> > Nothing seems to work..
>>> > I deploy Ubuntu with NFS storage connected through direct cable ( 1 gb
>>> ) and beside follow all docs available in the wiki and internet, one single
>>> VM continue to crash over and over again...
>>> >
>>> > So I realise that is time to say good bye to Proxmox...
>>> >
>>> > Live long and prosper...
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Gilberto Ferreira
>>> > +55 (47) 9676-7530
>>> > Skype: gilberto.nunes36
>>> >
>>> > ___
>>> > pve-user mailing list
>>> > pve-user@pve.proxmox.com
>>> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>
>>
>>
>> --
>>
>> Gilberto Ferreira
>> +55 (47) 9676-7530
>> Skype: gilberto.nunes36
>>
>>
>>
>> ___
>> pve-user mailing 
>> listpve-user@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
I agree with iSCSI, but NFS leaves me the option to make snapshots...
I will try iSCSI...

Thanks for the tip
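
For the record, the bandwidth cap Dmitry suggests ends up as extra properties
on the disk line in /etc/pve/qemu-server/<vmid>.conf, roughly like this
(modelled on the virtio0 line quoted elsewhere in the thread):

virtio0: stg:120/vm-120-disk-1.qcow2,mbps_rd=70,mbps_wr=70,size=2000G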

2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :

> There's issue with NFS: if you try to send over it more than network can
> deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
> which are being interpreted like hardware errors. These bursts may be just
> few seconds long to trigger issue.
>
> You can try to limit bandwidth in virtual HDD config to something like
> 60-70 MBps. This should be enough for 1-gigabit network.
>
> But my opinion is that it's better to switch to iSCSI.
>
> 26.10.2015 16:13, Gilberto Nunes пишет:
>
> BTW, all HD is SAS with gigaethernet between the servers...
> I already try with gigaethernet switch in order to isoleted the Proxmox
> and Storage from external (LAN) traffic Not works at all!
>
> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes :
>
>> The HDD config is standard... I did not make any changes...
>> I wonder why the other VMs, as I said before, with Ubuntu, CentOS,
>> Windows 7 and 2012, work fine!
>> Not all of them have huge files or a lot of connections, but they stay
>> solid!
>> But when a lot of access and big files come into play, here comes the
>> devil! The VM just gets IO errors and dies within 2 or 3 days...
>> On both physical servers, the load is not high. I checked with HTOP and TOP as
>> well.
>> iostat doesn't show anything wrong.
>> Inside the VM, a lot of I/O errors from time to time...
>> IO does go high indeed, but that is expected because I am using imapsync to sync
>> mails from the old server to the Zimbra Mail Server...
>> But I did not expect the VM to die with IO errors!
>> It's so frustrating... :(
>>
>> 2015-10-26 11:04 GMT-02:00 Dmitry Petuhov < 
>> mityapetu...@gmail.com>:
>>
>>> What's the virtual HDD config? Which controller, which cache mode?
>>>
>>> I suppose it's a bad idea to run KVM machines via NFS: it may produce big
>>> enough delays under high loads, which may look like timeouts on the client side.
>>> If you want some network storage, iSCSI can be a better choice.
>>>
>>> 26.10.2015 12:48, Gilberto Nunes wrote:
>>>
>>> Admin, or whatever your name is... I have more than 10 years dealing with
>>> Unix, Linux and Windows.
>>> I know what I'm doing.
>>> To the others: Yes! It should be straightforward anyway...
>>> The Proxmox server is a PowerEdge R430 with 32 GB of memory.
>>> The storage is the same kind of server.
>>> Both have SAS hard drives.
>>> Between these servers there's a direct cable providing a gigabit Ethernet
>>> link.
>>> On the second server, I have Ubuntu 15.04 installed with DRBD and OCFS2
>>> mounted on the /data FS.
>>> On the same server, I have NFS installed, serving that FS to the Proxmox
>>> machine.
>>> On the Proxmox machine I have nothing, except a VM with Ubuntu 14.04
>>> installed, where Zimbra Mail Server was deployed...
>>> Inside both physical servers, everything is ok... No errors on disk and
>>> everything is running smoothly.
>>> But, INSIDE THE VM HOSTED ON PROXMOX, many IO errors!...
>>> This corrupts the FS to the point that Zimbra crashes!
>>>
>>> BTW, I will move Zimbra back to a physical machine right now and deploy a
>>> lab environment for testing
>>>
>>> Best regards
>>>
>>>
>>> 2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com
>>> < ad...@extremeshok.com>:
>>>
 Your nfs settings.

 Hire people who have the knowledge, or upskill your own.

 Sent from my iPhone

 > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes <
 gilberto.nune...@gmail.com> wrote:
 >
 > Well friends...
 >
 > I really try hard to work with PVE, but it is a pain in the ass...
 > Nothing seems to work...
 > I deployed Ubuntu with NFS storage connected through a direct cable ( 1
 gb ) and, despite following all the docs available in the wiki and on the internet,
 one single VM continues to crash over and over again...
 >
 > So I realise it is time to say goodbye to Proxmox...
 >
 > Live long and prosper...
 >
 >
 >
 >
 > --
 >
 > Gilberto Ferreira
 > +55 (47) 9676-7530
 > Skype: gilberto.nunes36
 >
 > ___
 > pve-user mailing list
 > pve-user@pve.proxmox.com
 > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

>>>
>>>
>>>
>>> --
>>>
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>>
>>>
>>> ___
>>> 

Re: [PVE-User] I try hard but...

2015-10-26 Thread Dmitry Petuhov

  
  
There's an issue with NFS: if you try to send over it more than the network
can deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
which are interpreted as hardware errors. These bursts may be just a few
seconds long to trigger the issue.

You can try to limit bandwidth in the virtual HDD config to something like
60-70 MBps. This should be enough for a 1-gigabit network.

But my opinion is that it's better to switch to iSCSI.

26.10.2015 16:13, Gilberto Nunes wrote:

BTW, all the HDs are SAS, with gigabit Ethernet between the servers...
I already tried with a gigabit Ethernet switch in order to isolate the
Proxmox and storage traffic from the external (LAN) traffic... It did not
work at all!

2015-10-26 11:12 GMT-02:00 Gilberto Nunes :

The HDD config is standard... I did not make any changes...
I wonder why the other VMs, as I said before, with Ubuntu, CentOS,
Windows 7 and 2012, work fine!
Not all of them have huge files or a lot of connections, but they stay
solid!
But when a lot of access and big files come into play, here comes the
devil! The VM just gets IO errors and dies within 2 or 3 days...
On both physical servers, the load is not high. I checked with HTOP and
TOP as well.
iostat doesn't show anything wrong.
Inside the VM, a lot of I/O errors from time to time...
IO does go high indeed, but that is expected because I am using imapsync
to sync mails from the old server to the Zimbra Mail Server...
But I did not expect the VM to die with IO errors!
It's so frustrating... :(

2015-10-26 11:04 GMT-02:00 Dmitry Petuhov :

What's the virtual HDD config? Which controller, which cache mode?

I suppose it's a bad idea to run KVM machines via NFS: it may produce big
enough delays under high loads, which may look like timeouts on the
client side.
If you want some network storage, iSCSI can be a better choice.

26.10.2015 12:48, Gilberto Nunes wrote:

Admin, or whatever your name is... I have more than 10 years dealing with
Unix, Linux and Windows.
I know what I'm doing.
To the others: Yes! Should be

Re: [PVE-User] I try hard but...

2015-10-26 Thread Hector Suarez Planas

Greetings.


Well friends...

I really try hard to work with PVE, but it is a pain in the ass...


I totally disagree with you.


Nothing seems to work...
I deployed Ubuntu with NFS storage connected through a direct cable ( 1 gb )


NFS connected through 1 GbE? You asked about that (bottleneck) a long time ago
and you got many responses.


and, despite following all the docs available in the wiki and on the internet, one
single VM continues to crash over and over again...


That crash must have a cause: HDDs, network connections, server
hardware, etc.




So I realise it is time to say goodbye to Proxmox...


Bad decision.



Live long and prosper...


For Proxmox VE? Sure. :-)

--
=
Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
Answering my own question: yes! There is a bug related to NFS in kernel 3.13,
which is the default on Ubuntu 14.04...
I will try with kernel 3.19...
Sorry for sending this to the list...
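
In case anyone hits the same thing: on a 14.04 (trusty) NFS server the 3.19
kernel should be available through the lts-vivid HWE stack, roughly like this
(package name from memory, please double-check before relying on it):

  apt-get update
  # installs the 3.19 (vivid) HWE kernel on 14.04
  apt-get install linux-generic-lts-vivid
  reboot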

2015-10-26 13:58 GMT-02:00 Gilberto Nunes :

> UPDATE
>
> I tried using proto=udp and soft, and got a kernel Oops on the NFS server side:
>
> BUG: unable to handle kernel NULL pointer dereference at 0008
> [ 7465.127924] IP: []
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.128154] PGD 0
> [ 7465.128224] Oops:  [#1] SMP
> [ 7465.128336] Modules linked in: ocfs2 quota_tree ocfs2_dlmfs
> ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs drbd
> lru_cache nfsd auth_rpcgss nfs_acl nfs lockd sunrpc fscache ipmi_devintf
> gpio_ich dcdbas x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel
> kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
> aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me mei
> shpchp wmi ipmi_si acpi_power_meter lp mac_hid parport xfs libcrc32c
> hid_generic usbhid hid igb i2c_algo_bit tg3 dca ahci ptp megaraid_sas
> libahci pps_core
> [ 7465.130171] CPU: 8 PID: 4602 Comm: nfsd Not tainted 3.13.0-66-generic
> #108-Ubuntu
> [ 7465.130407] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6
> 06/08/2015
> [ 7465.130648] task: 88046410b000 ti: 88044a78a000 task.ti:
> 88044a78a000
> [ 7465.130889] RIP: 0010:[]  []
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.131213] RSP: 0018:88044a78bbc0  EFLAGS: 00010206
> [ 7465.131385] RAX:  RBX: 8804607c2300 RCX:
> 00ec
> [ 7465.131613] RDX:  RSI: 0c7c RDI:
> 880464c0e600
> [ 7465.131844] RBP: 88044a78bbf8 R08:  R09:
> aea75158
> [ 7465.132074] R10: 00c0 R11: 0003 R12:
> 0008
> [ 7465.132304] R13: 880464c0e600 R14: 0c74 R15:
> 880464c0e600
> [ 7465.132535] FS:  () GS:88046e50()
> knlGS:
> [ 7465.132798] CS:  0010 DS:  ES:  CR0: 80050033
> [ 7465.132981] CR2: 0008 CR3: 01c0e000 CR4:
> 001407e0
> [ 7465.133211] Stack:
> [ 7465.133275]  81616f66 81616fb0 8804607c2300
> 88044a78bdf8
> [ 7465.133529]   0c74 880464c0e600
> 88044a78bc60
> [ 7465.133780]  8168b2ec 88044a430028 8804607c2370
> 0002
> [ 7465.134032] Call Trace:
> [ 7465.134109]  [] ? skb_checksum+0x26/0x30
> [ 7465.134284]  [] ? skb_push+0x40/0x40
> [ 7465.134451]  [] udp_recvmsg+0x1dc/0x380
> [ 7465.134624]  [] inet_recvmsg+0x6c/0x80
> [ 7465.134790]  [] sock_recvmsg+0x9a/0xd0
> [ 7465.134956]  [] ? del_timer_sync+0x4a/0x60
> [ 7465.135131]  [] ? schedule_timeout+0x17d/0x2d0
> [ 7465.135318]  [] kernel_recvmsg+0x3a/0x50
> [ 7465.135497]  [] svc_udp_recvfrom+0x89/0x440 [sunrpc]
> [ 7465.135699]  [] ? _raw_spin_unlock_bh+0x1b/0x40
> [ 7465.135902]  [] ? svc_get_next_xprt+0xd8/0x310
> [sunrpc]
> [ 7465.136120]  [] svc_recv+0x4a0/0x5c0 [sunrpc]
> [ 7465.136307]  [] nfsd+0xad/0x130 [nfsd]
> [ 7465.136476]  [] ? nfsd_destroy+0x80/0x80 [nfsd]
> [ 7465.136673]  [] kthread+0xd2/0xf0
> [ 7465.136829]  [] ? kthread_create_on_node+0x1c0/0x1c0
> [ 7465.137039]  [] ret_from_fork+0x58/0x90
> [ 7465.137212]  [] ? kthread_create_on_node+0x1c0/0x1c0
> [ 7465.145470] Code: 44 00 00 55 31 c0 48 89 e5 41 57 41 56 41 55 49 89 fd
> 41 54 41 89 f4 53 48 83 ec 10 8b 77 68 41 89 f6 45 29 e6 0f 84 89 00 00 00
> <48> 8b 42 08 48 89 d3 48 85 c0 75 14 0f 1f 80 00 00 00 00 48 83
> [ 7465.163046] RIP  []
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.171731]  RSP 
> [ 7465.180231] CR2: 0008
> [ 7465.205987] ---[ end trace 1edb9cef822eb074 ]---
>
>
> Does somebody know of any bug related to this issue???
>
> 2015-10-26 13:35 GMT-02:00 Gilberto Nunes :
>
>> But I think that with this limitation, transferring 30 GB of mail over the
>> network will take forever, don't you agree??
>>
>> 2015-10-26 13:25 GMT-02:00 Gilberto Nunes :
>>
>>> Regarding the bandwidth limitation, you mean like this:
>>>
>>>
>>> virtio0:
>>> stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
>>>
>>> 2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :
>>>
 There's an issue with NFS: if you try to send over it more than the network
 can deal with (100-120 MBps for 1-gigabit), it imposes several-second
 pauses, which are interpreted as hardware errors. These bursts may
 be just a few seconds long to trigger the issue.

 You can try to limit bandwidth in the virtual HDD config to something like
 60-70 MBps. This should be enough for a 1-gigabit network.

 But my opinion is that it's better to switch to iSCSI.

 26.10.2015 16:13, Gilberto Nunes wrote:

Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
Regarding the bandwidth limitation, you mean like this:


virtio0:
stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
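
If I understood the suggestion right, the same limits can also be applied
with qm set instead of editing the config file by hand; something like this,
using the VM ID and volume from my config above and keeping only the
bandwidth caps:

  # caps reads and writes on that disk to ~70 MB/s
  qm set 120 --virtio0 stg:120/vm-120-disk-1.qcow2,mbps_rd=70,mbps_wr=70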

2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :

> There's an issue with NFS: if you try to send over it more than the network can
> deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
> which are interpreted as hardware errors. These bursts may be just
> a few seconds long to trigger the issue.
>
> You can try to limit bandwidth in the virtual HDD config to something like
> 60-70 MBps. This should be enough for a 1-gigabit network.
>
> But my opinion is that it's better to switch to iSCSI.
>
> 26.10.2015 16:13, Gilberto Nunes wrote:
>
> BTW, all the HDs are SAS, with gigabit Ethernet between the servers...
> I already tried with a gigabit Ethernet switch in order to isolate the Proxmox
> and storage traffic from the external (LAN) traffic... It did not work at all!
>
> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes :
>
>> The HDD config is standard... I did not make any changes...
>> I wonder why the other VMs, as I said before, with Ubuntu, CentOS,
>> Windows 7 and 2012, work fine!
>> Not all of them have huge files or a lot of connections, but they stay
>> solid!
>> But when a lot of access and big files come into play, here comes the
>> devil! The VM just gets IO errors and dies within 2 or 3 days...
>> On both physical servers, the load is not high. I checked with HTOP and TOP as
>> well.
>> iostat doesn't show anything wrong.
>> Inside the VM, a lot of I/O errors from time to time...
>> IO does go high indeed, but that is expected because I am using imapsync to sync
>> mails from the old server to the Zimbra Mail Server...
>> But I did not expect the VM to die with IO errors!
>> It's so frustrating... :(
>>
>> 2015-10-26 11:04 GMT-02:00 Dmitry Petuhov < 
>> mityapetu...@gmail.com>:
>>
>>> What's the virtual HDD config? Which controller, which cache mode?
>>>
>>> I suppose it's a bad idea to run KVM machines via NFS: it may produce big
>>> enough delays under high loads, which may look like timeouts on the client side.
>>> If you want some network storage, iSCSI can be a better choice.
>>>
>>> 26.10.2015 12:48, Gilberto Nunes wrote:
>>>
>>> Admin, or whatever your name is... I have more than 10 years dealing with
>>> Unix, Linux and Windows.
>>> I know what I'm doing.
>>> To the others: Yes! It should be straightforward anyway...
>>> The Proxmox server is a PowerEdge R430 with 32 GB of memory.
>>> The storage is the same kind of server.
>>> Both have SAS hard drives.
>>> Between these servers there's a direct cable providing a gigabit Ethernet
>>> link.
>>> On the second server, I have Ubuntu 15.04 installed with DRBD and OCFS2
>>> mounted on the /data FS.
>>> On the same server, I have NFS installed, serving that FS to the Proxmox
>>> machine.
>>> On the Proxmox machine I have nothing, except a VM with Ubuntu 14.04
>>> installed, where Zimbra Mail Server was deployed...
>>> Inside both physical servers, everything is ok... No errors on disk and
>>> everything is running smoothly.
>>> But, INSIDE THE VM HOSTED ON PROXMOX, many IO errors!...
>>> This corrupts the FS to the point that Zimbra crashes!
>>>
>>> BTW, I will move Zimbra back to a physical machine right now and deploy a
>>> lab environment for testing
>>>
>>> Best regards
>>>
>>>
>>> 2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com
>>> < ad...@extremeshok.com>:
>>>
 Your nfs settings.

 Hire people who have the knowledge, or upskill your own.

 Sent from my iPhone

 > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes <
 gilberto.nune...@gmail.com> wrote:
 >
 > Well friends...
 >
 > I really try hard to work with PVE, but it is a pain in the ass...
 > Nothing seems to work...
 > I deployed Ubuntu with NFS storage connected through a direct cable ( 1
 gb ) and, despite following all the docs available in the wiki and on the internet,
 one single VM continues to crash over and over again...
 >
 > So I realise it is time to say goodbye to Proxmox...
 >
 > Live long and prosper...
 >
 >
 >
 >
 > --
 >
 > Gilberto Ferreira
 > +55 (47) 9676-7530 <%2B55%20%2847%29%209676-7530>
 > Skype: gilberto.nunes36
 >
 > ___
 > pve-user mailing list
 > pve-user@pve.proxmox.com
 > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

>>>
>>>
>>>
>>> --
>>>
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>> ___
>>> pve-user mailing 
>>> 

Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
But I think that with this limitation, transferring 30 GB of mail over the
network will take forever, don't you agree??
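
(Rough numbers, just for reference: 30 GB at the capped ~70 MB/s is about
30,000 MB / 70 MB/s ≈ 430 s, a bit over 7 minutes per 30 GB, against roughly
4.5 minutes at the ~110 MB/s a gigabit link can do flat out.)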

2015-10-26 13:25 GMT-02:00 Gilberto Nunes :

> Regarding the bandwidth limitation, you mean like this:
>
>
> virtio0:
> stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
>
> 2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :
>
>> There's an issue with NFS: if you try to send over it more than the network can
>> deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
>> which are interpreted as hardware errors. These bursts may be just
>> a few seconds long to trigger the issue.
>>
>> You can try to limit bandwidth in the virtual HDD config to something like
>> 60-70 MBps. This should be enough for a 1-gigabit network.
>>
>> But my opinion is that it's better to switch to iSCSI.
>>
>> 26.10.2015 16:13, Gilberto Nunes wrote:
>>
>> BTW, all the HDs are SAS, with gigabit Ethernet between the servers...
>> I already tried with a gigabit Ethernet switch in order to isolate the Proxmox
>> and storage traffic from the external (LAN) traffic... It did not work at all!
>>
>> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes :
>>
>>> The HDD config is standard... I did not make any changes...
>>> I wonder why the other VMs, as I said before, with Ubuntu, CentOS,
>>> Windows 7 and 2012, work fine!
>>> Not all of them have huge files or a lot of connections, but they stay
>>> solid!
>>> But when a lot of access and big files come into play, here comes the
>>> devil! The VM just gets IO errors and dies within 2 or 3 days...
>>> On both physical servers, the load is not high. I checked with HTOP and TOP as
>>> well.
>>> iostat doesn't show anything wrong.
>>> Inside the VM, a lot of I/O errors from time to time...
>>> IO does go high indeed, but that is expected because I am using imapsync to
>>> sync mails from the old server to the Zimbra Mail Server...
>>> But I did not expect the VM to die with IO errors!
>>> It's so frustrating... :(
>>>
>>> 2015-10-26 11:04 GMT-02:00 Dmitry Petuhov < 
>>> mityapetu...@gmail.com>:
>>>
 What's the virtual HDD config? Which controller, which cache mode?

 I suppose it's a bad idea to run KVM machines via NFS: it may produce big
 enough delays under high loads, which may look like timeouts on the client
 side.
 If you want some network storage, iSCSI can be a better choice.

 26.10.2015 12:48, Gilberto Nunes wrote:

 Admin, or whatever your name is... I have more than 10 years dealing with
 Unix, Linux and Windows.
 I know what I'm doing.
 To the others: Yes! It should be straightforward anyway...
 The Proxmox server is a PowerEdge R430 with 32 GB of memory.
 The storage is the same kind of server.
 Both have SAS hard drives.
 Between these servers there's a direct cable providing a gigabit Ethernet
 link.
 On the second server, I have Ubuntu 15.04 installed with DRBD and OCFS2
 mounted on the /data FS.
 On the same server, I have NFS installed, serving that FS to the Proxmox
 machine.
 On the Proxmox machine I have nothing, except a VM with Ubuntu 14.04
 installed, where Zimbra Mail Server was deployed...
 Inside both physical servers, everything is ok... No errors on disk and
 everything is running smoothly.
 But, INSIDE THE VM HOSTED ON PROXMOX, many IO errors!...
 This corrupts the FS to the point that Zimbra crashes!

 BTW, I will move Zimbra back to a physical machine right now and deploy a
 lab environment for testing

 Best regards


 2015-10-25 22:38 GMT-02:00 ad...@extremeshok.com
 < ad...@extremeshok.com>:

> Your nfs settings.
>
> Hire people who have the knowledge, or upskill your own.
>
> Sent from my iPhone
>
> > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes <
> gilberto.nune...@gmail.com> wrote:
> >
> > Well friends...
> >
> > I really try hard to work with PVE, but it is a pain in the ass...
> > Nothing seems to work...
> > I deployed Ubuntu with NFS storage connected through a direct cable ( 1
> gb ) and, despite following all the docs available in the wiki and on the internet,
> one single VM continues to crash over and over again...
> >
> > So I realise it is time to say goodbye to Proxmox...
> >
> > Live long and prosper...
> >
> >
> >
> >
> > --
> >
> > Gilberto Ferreira
> > +55 (47) 9676-7530
> > Skype: gilberto.nunes36
> >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> 

Re: [PVE-User] I try hard but...

2015-10-26 Thread Gilberto Nunes
UPDATE

I tried using proto=udp and soft, and got a kernel Oops on the NFS server side:

BUG: unable to handle kernel NULL pointer dereference at 0008
[ 7465.127924] IP: []
skb_copy_and_csum_datagram_iovec+0x2d/0x110
[ 7465.128154] PGD 0
[ 7465.128224] Oops:  [#1] SMP
[ 7465.128336] Modules linked in: ocfs2 quota_tree ocfs2_dlmfs
ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs drbd
lru_cache nfsd auth_rpcgss nfs_acl nfs lockd sunrpc fscache ipmi_devintf
gpio_ich dcdbas x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel
kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me mei
shpchp wmi ipmi_si acpi_power_meter lp mac_hid parport xfs libcrc32c
hid_generic usbhid hid igb i2c_algo_bit tg3 dca ahci ptp megaraid_sas
libahci pps_core
[ 7465.130171] CPU: 8 PID: 4602 Comm: nfsd Not tainted 3.13.0-66-generic
#108-Ubuntu
[ 7465.130407] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6
06/08/2015
[ 7465.130648] task: 88046410b000 ti: 88044a78a000 task.ti:
88044a78a000
[ 7465.130889] RIP: 0010:[]  []
skb_copy_and_csum_datagram_iovec+0x2d/0x110
[ 7465.131213] RSP: 0018:88044a78bbc0  EFLAGS: 00010206
[ 7465.131385] RAX:  RBX: 8804607c2300 RCX:
00ec
[ 7465.131613] RDX:  RSI: 0c7c RDI:
880464c0e600
[ 7465.131844] RBP: 88044a78bbf8 R08:  R09:
aea75158
[ 7465.132074] R10: 00c0 R11: 0003 R12:
0008
[ 7465.132304] R13: 880464c0e600 R14: 0c74 R15:
880464c0e600
[ 7465.132535] FS:  () GS:88046e50()
knlGS:
[ 7465.132798] CS:  0010 DS:  ES:  CR0: 80050033
[ 7465.132981] CR2: 0008 CR3: 01c0e000 CR4:
001407e0
[ 7465.133211] Stack:
[ 7465.133275]  81616f66 81616fb0 8804607c2300
88044a78bdf8
[ 7465.133529]   0c74 880464c0e600
88044a78bc60
[ 7465.133780]  8168b2ec 88044a430028 8804607c2370
0002
[ 7465.134032] Call Trace:
[ 7465.134109]  [] ? skb_checksum+0x26/0x30
[ 7465.134284]  [] ? skb_push+0x40/0x40
[ 7465.134451]  [] udp_recvmsg+0x1dc/0x380
[ 7465.134624]  [] inet_recvmsg+0x6c/0x80
[ 7465.134790]  [] sock_recvmsg+0x9a/0xd0
[ 7465.134956]  [] ? del_timer_sync+0x4a/0x60
[ 7465.135131]  [] ? schedule_timeout+0x17d/0x2d0
[ 7465.135318]  [] kernel_recvmsg+0x3a/0x50
[ 7465.135497]  [] svc_udp_recvfrom+0x89/0x440 [sunrpc]
[ 7465.135699]  [] ? _raw_spin_unlock_bh+0x1b/0x40
[ 7465.135902]  [] ? svc_get_next_xprt+0xd8/0x310 [sunrpc]
[ 7465.136120]  [] svc_recv+0x4a0/0x5c0 [sunrpc]
[ 7465.136307]  [] nfsd+0xad/0x130 [nfsd]
[ 7465.136476]  [] ? nfsd_destroy+0x80/0x80 [nfsd]
[ 7465.136673]  [] kthread+0xd2/0xf0
[ 7465.136829]  [] ? kthread_create_on_node+0x1c0/0x1c0
[ 7465.137039]  [] ret_from_fork+0x58/0x90
[ 7465.137212]  [] ? kthread_create_on_node+0x1c0/0x1c0
[ 7465.145470] Code: 44 00 00 55 31 c0 48 89 e5 41 57 41 56 41 55 49 89 fd
41 54 41 89 f4 53 48 83 ec 10 8b 77 68 41 89 f6 45 29 e6 0f 84 89 00 00 00
<48> 8b 42 08 48 89 d3 48 85 c0 75 14 0f 1f 80 00 00 00 00 48 83
[ 7465.163046] RIP  []
skb_copy_and_csum_datagram_iovec+0x2d/0x110
[ 7465.171731]  RSP 
[ 7465.180231] CR2: 0008
[ 7465.205987] ---[ end trace 1edb9cef822eb074 ]---


Does somebody know of any bug related to this issue???
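
For completeness, this is roughly what I was testing: the export on the
storage box and the NFS storage entry on the Proxmox side. The addresses are
placeholders; the storage name, export path and mount options match my setup:

  # /etc/exports on the storage server (network address is a placeholder)
  /data  192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash)

  # /etc/pve/storage.cfg on the Proxmox host
  nfs: stg
          path /mnt/pve/stg
          server 192.168.100.2
          export /data
          content images
          options soft,proto=udp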

2015-10-26 13:35 GMT-02:00 Gilberto Nunes :

> But I think that with this limitation, transferring 30 GB of mail over the
> network will take forever, don't you agree??
>
> 2015-10-26 13:25 GMT-02:00 Gilberto Nunes :
>
>> Regarding the bandwidth limitation, you mean like this:
>>
>>
>> virtio0:
>> stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
>>
>> 2015-10-26 11:25 GMT-02:00 Dmitry Petuhov :
>>
>>> There's an issue with NFS: if you try to send over it more than the network can
>>> deal with (100-120 MBps for 1-gigabit), it imposes several-second pauses,
>>> which are interpreted as hardware errors. These bursts may be just
>>> a few seconds long to trigger the issue.
>>>
>>> You can try to limit bandwidth in the virtual HDD config to something like
>>> 60-70 MBps. This should be enough for a 1-gigabit network.
>>>
>>> But my opinion is that it's better to switch to iSCSI.
>>>
>>> 26.10.2015 16:13, Gilberto Nunes wrote:
>>>
>>> BTW, all the HDs are SAS, with gigabit Ethernet between the servers...
>>> I already tried with a gigabit Ethernet switch in order to isolate the Proxmox
>>> and storage traffic from the external (LAN) traffic... It did not work at all!
>>>
>>> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes :
>>>
 The HDD config is standard... I did not make any changes...
 I wonder why the other VMs, as I said before, with Ubuntu, CentOS,
 

Re: [PVE-User] I try hard but...

2015-10-25 Thread ad...@extremeshok.com
Your nfs settings.

Hire people who have the knowledge, or upskill your own.

Sent from my iPhone

> On 26 Oct 2015, at 1:33 AM, Gilberto Nunes  wrote:
> 
> Well friends...
> 
> I really try hard to work with PVE, but it is a pain in the ass...
> Nothing seems to work...
> I deployed Ubuntu with NFS storage connected through a direct cable ( 1 gb ) and,
> despite following all the docs available in the wiki and on the internet, one single VM
> continues to crash over and over again...
> 
> So I realise it is time to say goodbye to Proxmox...
> 
> Live long and prosper...
> 
> 
> 
> 
> -- 
> 
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-25 Thread Laurent Dumont
There is nothing magical about deploying Proxmox with NFS. It's supposed
to be as straightforward as it can be.


Either hire someone who can help you or go through the trouble of 
documenting your settings so that the community can help you.


On 10/25/2015 8:38 PM, ad...@extremeshok.com wrote:

Your nfs settings.

Hire people who have the knowledge, or upskill your own.

Sent from my iPhone


On 26 Oct 2015, at 1:33 AM, Gilberto Nunes  wrote:

Well friends...

I really try hard to work with PVE, but it is a pain in the ass...
Nothing seems to work...
I deployed Ubuntu with NFS storage connected through a direct cable ( 1 gb ) and,
despite following all the docs available in the wiki and on the internet, one single VM
continues to crash over and over again...

So I realise it is time to say goodbye to Proxmox...

Live long and prosper...




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-25 Thread Lindsay Mathieson
On 26 October 2015 at 11:23, Laurent Dumont 
wrote:

> or go through the trouble of documenting your settings so that the
> community can help you.
>

Indeed, we have almost no detail to work on.

I've had no problems with NFS, but without knowing what server setup you
are using, it's impossible to say. If you want to rule NFS out, I'd suggest
running the VM off local storage.

Actually, with random crashes my first thought would be possible RAM parity
errors. How much RAM do you have? Is it ECC?
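
Both are quick to check, by the way; the VM ID and storage name below are
just examples taken from earlier in the thread, so adjust to your own:

  # does the host RAM report ECC?
  dmidecode -t memory | grep -i 'error correction'

  # move the disk to local storage to take NFS out of the picture
  qm move_disk 120 virtio0 local --format qcow2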


-- 
Lindsay
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user