Re: [PVE-User] Enabling telemetry broke all my ceph managers

2020-06-18 Thread Brian :
Nice save. And thanks for the detailed info.
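For reference, recreating a broken manager from the Proxmox CLI on a Nautilus-era cluster looks roughly like this (a sketch, not necessarily the exact commands used here; <node> is a placeholder):

  pveceph mgr create            # run on the node that should host the new manager
  pveceph mgr destroy <node>    # remove a crashed manager by its ID (usually the node name)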

On Thursday, June 18, 2020, Lindsay Mathieson 
wrote:
> Clean Nautilus install I set up last week
>
>  * 5 Proxmox nodes
>  o All on latest updates via no-subscription channel
>  * 18 OSD's
>  * 3 Managers
>  * 3 Monitors
>  * Cluster health good
>  * In a protracted rebalance phase
>  * All managed via proxmox
>
> I thought I would enable telemetry for Ceph as per this article:
>
> https://docs.ceph.com/docs/master/mgr/telemetry/
>
>
>  * Enabled the module (command line)
>  * ceph telemetry on
>  * Tested getting the status
>  * Set the contact and description
>ceph config set mgr mgr/telemetry/contact 'John Doe
>'
>ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
>ceph config set mgr mgr/telemetry/channel_ident true
>  * Tried sending it
>ceph telemetry send
>
> I *think* this is when the managers died, but it could have been earlier.
But around then all Ceph I/O stopped and I discovered all three managers
had crashed and would not restart. I was shitting myself because this was
remote and the router is a pfSense VM :) Fortunately it kept going without
its disk responding.
>
> systemctl start ceph-mgr@vni.service
> Job for ceph-mgr@vni.service failed because the control process exited
with error code.
> See "systemctl status ceph-mgr@vni.service" and "journalctl -xe" for
details.
>
> From journalctl -xe
>
>-- The unit ceph-mgr@vni.service has entered the 'failed' state with
>result 'exit-code'.
>Jun 18 21:02:25 vni systemd[1]: Failed to start Ceph cluster manager
>daemon.
>-- Subject: A start job for unit ceph-mgr@vni.service has failed
>-- Defined-By: systemd
>-- Support: https://www.debian.org/support
>--
>-- A start job for unit ceph-mgr@vni.service has finished with a
>failure.
>--
>-- The job identifier is 91690 and the job result is failed.
>
>
> From systemctl status ceph-mgr@vni.service
>
> ceph-mgr@vni.service - Ceph cluster manager daemon
>Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; vendor
preset: enabled)
>   Drop-In: /lib/systemd/system/ceph-mgr@.service.d
>└─ceph-after-pve-cluster.conf
>Active: failed (Result: exit-code) since Thu 2020-06-18 20:53:52 AEST;
8min ago
>   Process: 415566 ExecStart=/usr/bin/ceph-mgr -f --cluster ${CLUSTER}
--id vni --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>  Main PID: 415566 (code=exited, status=1/FAILURE)
>
> Jun 18 20:53:52 vni systemd[1]: ceph-mgr@vni.service: Service
RestartSec=10s expired, scheduling restart.
> Jun 18 20:53:52 vni systemd[1]: ceph-mgr@vni.service: Scheduled restart
job, restart counter is at 4.
> Jun 18 20:53:52 vni systemd[1]: Stopped Ceph cluster manager daemon.
> Jun 18 20:53:52 vni systemd[1]: ceph-mgr@vni.service: Start request
repeated too quickly.
> Jun 18 20:53:52 vni systemd[1]: ceph-mgr@vni.service: Failed with result
'exit-code'.
> Jun 18 20:53:52 vni systemd[1]: Failed to start Ceph cluster manager
daemon.
>
> I created a new manager service on an unused node and fortunately that
worked. I deleted/recreated the old managers and they started working. It
was a sweaty few minutes :)
>
>
> Everything resumed without a hiccup after that, which impressed me. Not game to
try and reproduce it though.
>
>
>
> --
> Lindsay
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Network Device models

2020-01-02 Thread Brian :
Hi Marco

The physical NIC type is irrelevant. If the guest doesn't see the Realtek
interfaces then most likely it's a driver issue in the guest.

I'd try to get the issue with using cards other than Realtek addressed in OPNsense.

Brian
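As a rough illustration (VM ID and bridge name are placeholders), the NIC model can be switched from the CLI and the guest driver tested one model at a time:

  qm set <vmid> -net0 virtio,bridge=vmbr0    # paravirtualised, needs the virtio driver in the guest
  qm set <vmid> -net0 e1000,bridge=vmbr0     # Intel e1000 model, widely supported as a fallback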


On Thursday, January 2, 2020, Bertorello, Marco 
wrote:
> Dear PVE Users,
>
> a maybe silly question about NIC card models.
>
> I have a running Opnsense installation on a PVE VM, with 3 NICs, all as
> VirtIO (paravirtualized) as per [1].
>
> All works fine, but there is a bug[2][3] in OPNsense when using cards other
> than Realtek.
>
> But if I try to run the VM using the Realtek 8139 model, the OS doesn't see
> any interfaces. Is this expected, since my physical NICs aren't Realtek?
>
> Thanks a lot and best regards,
>
> [1]
>
https://docs.netgate.com/pfsense/en/latest/virtualization/virtualizing-pfsense-with-proxmox.html
>
> [2] https://forum.opnsense.org/index.php?topic=9754.0
>
> [3] https://forum.opnsense.org/index.php?topic=14315.0
>
> --
> Marco Bertorello
> https://www.marcobertorello.it
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] kernel panic after live migration

2019-10-29 Thread Brian :
Hello

You would need to be a bit more verbose if you expect help.

Version of Proxmox?
What panics? Host or guest?
Guest OS?
Server hardware?
Disks?
Any logs?

As much relevant info as you can provide...

On Tuesday, October 29, 2019, Humberto Jose De Sousa via pve-user <
pve-user@pve.proxmox.com> wrote:
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph MON quorum problem

2019-09-14 Thread Brian :
Have a mon that runs somewhere that isn't in either of those rooms.
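For example (a sketch; the exact subcommand depends on the PVE version), a small node or VM in a third location can carry the tie-breaking monitor:

  pveceph mon create    # on newer PVE versions
  pveceph createmon     # older syntax for the same thing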

On Friday, September 13, 2019, Fabrizio Cuseo  wrote:
> Hello.
> I am planning a 6 hosts cluster.
>
> 3 hosts are located in the CedA room
> 3 hosts are located in the CedB room
>
> the two rooms are connected with 2 x 10Gbit fiber (200 m) and in each
room I have 2 x 10Gbit stacked switches and each host has 2 x 10Gbit (one
for each switch) for Ceph storage.
>
> My need is to have a full redundancy cluster that can survive to CedA (or
CedB) disaster.
>
> I have modified the crush map, so I have a RBD Pool that writes 2 copies
in CedA hosts, and 2 copies in CedB hosts, so a very good redundancy (disk
space is not a problem).
>
> But if I lose one of the rooms, I can't establish the needed quorum.
>
> Any suggestion for a quick and not too complicated way to satisfy this
need?
>
> Regards, Fabrizio
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Upgrade from 3.4, or reinstall?

2018-12-27 Thread Brian :
You will have to go to the latest 3.x, then to 4.x, then to 5.x - the upgrade
should be fine. But if it's quicker to back up the VMs and restore, and some
downtime is acceptable (there is downtime with an in-place upgrade on a single box anyway),
then that may be the easier option for you.
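A minimal sketch of the backup/restore route for a KVM guest (storage names, VM ID and the dump filename are placeholders):

  vzdump 100 --mode stop --storage backupstore
  qmrestore /path/to/vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage local-lvm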


On Thu, Dec 27, 2018 at 3:04 PM Gerald Brandt  wrote:
>
> Hi,
>
> I have an old 3.4 box. Is it worth upgrading, or should I just backup
> and reinstall?
>
> Gerald
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OVS Internal ports all in state unknown

2018-09-17 Thread Brian Sidebotham
Thanks for the responses to an essentially off-topic post.

Best Regards,

---
Brian Sidebotham


On Fri, 14 Sep 2018 at 19:07, Josh Knight  wrote:

> This looks to be expected. The operational state is provided by the
> kernel/driver for the interface.  For these virtual interfaces, it's just
> not being reported, probably because they can't actually go down.  This is
> common and not specific to proxmox. The openvswitch and tun drivers must
> not be reporting an operational state for the devices.
>
> I believe you can use `ovs-vsctl list Interface` or a similar command if
> you need to get the admin_state or link_state fields for the virtual
> interfaces.
>
> E.g. from my proxmox host I also see the same behavior.
>
> root@host:~# ip link show eno1
> 2: eno1:  mtu 1500 qdisc mq master
> ovs-system state UP mode DEFAULT group default qlen 1000
> root@host:~# cat /sys/class/net/eno1/operstate
> up
> root@host:~# ethtool -i eno1 | grep driver
> driver: tg3
>
> root@host:~# ip link show tap107i2
> 155: tap107i2:  mtu 1500 qdisc
> pfifo_fast master ovs-system state UNKNOWN mode DEFAULT group default qlen
> 1000
> root@host:~# cat /sys/class/net/tap107i2/operstate
> unknown
> root@host:~# ethtool -i tap107i2 | grep driver
> driver: tun
>
> root@host:~# ip link show bond0
> 23: bond0:  mtu 1500 qdisc noqueue state
> UNKNOWN mode DEFAULT group default qlen 1000
> root@host:~# cat /sys/class/net/bond0/operstate
> unknown
> root@host:~# ethtool -i bond0 | grep driver
> driver: openvswitch
>
>
>
>
>
>
> On Fri, Sep 14, 2018 at 10:42 AM Brian Sidebotham 
> wrote:
>
> > Hi Guys,
> >
> > We are using Openvswitch networking and we have a physical 1G management
> > network and two 10G physical links bonded. The physical interfaces show a
> > state of UP when doing "ip a".
> >
> > However for the OVS bond, bridges and internal ports we get a state of
> > "UNKNOWN". Is this expected?
> >
> > Everything else is essentially working OK - The GUI marks the bond,
> bridge
> > and internal ports as active and traffic is working as expected, but I
> > don't know why the state of these is not UP?
> >
> > An example of an internal port OVS Configuration in
> /etc/network/interfaces
> > (as setup by the GUI):
> >
> > allow-vmbr1 vlan233
> > iface vlan233 inet static
> > address  10.1.33.24
> > netmask  255.255.255.0
> > ovs_type OVSIntPort
> > ovs_bridge vmbr1
> > ovs_options tag=233
> >
> > and ip a output:
> >
> > 14: vlan233:  mtu 1500 qdisc noqueue
> state
> > *UNKNOWN* group default qlen 1000
> > link/ether e2:53:9f:28:cb:2b brd ff:ff:ff:ff:ff:ff
> > inet 10.1.33.24/24 brd 10.1.33.255 scope global vlan233
> >valid_lft forever preferred_lft forever
> > inet6 fe80::e053:9fff:fe28:cb2b/64 scope link
> >valid_lft forever preferred_lft forever
> >
> > The version we're running is detailed below. We rolled back the kernel as
> > we were having stability problems with 4.15.8 on our hardware (HP
> Proliant
> > Gen8)
> >
> > root@ :/etc/network# pveversion -v
> > proxmox-ve: 5.2-2 (running kernel: 4.13.16-2-pve)
> > pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
> > pve-kernel-4.15: 5.2-5
> > pve-kernel-4.15.18-2-pve: 4.15.18-20
> > pve-kernel-4.13.16-2-pve: 4.13.16-48
> > pve-kernel-4.13.13-2-pve: 4.13.13-33
> > ceph: 12.2.7-pve1
> > corosync: 2.4.2-pve5
> > criu: 2.11.1-1~bpo90
> > glusterfs-client: 3.8.8-1
> > ksm-control-daemon: 1.2-2
> > libjs-extjs: 6.0.1-2
> > libpve-access-control: 5.0-8
> > libpve-apiclient-perl: 2.0-5
> > libpve-common-perl: 5.0-38
> > libpve-guest-common-perl: 2.0-17
> > libpve-http-server-perl: 2.0-10
> > libpve-storage-perl: 5.0-24
> > libqb0: 1.0.1-1
> > lvm2: 2.02.168-pve6
> > lxc-pve: 3.0.2+pve1-1
> > lxcfs: 3.0.0-1
> > novnc-pve: 1.0.0-2
> > openvswitch-switch: 2.7.0-3
> > proxmox-widget-toolkit: 1.0-19
> > pve-cluster: 5.0-29
> > pve-container: 2.0-25
> > pve-docs: 5.2-8
> > pve-firewall: 3.0-13
> > pve-firmware: 2.0-5
> > pve-ha-manager: 2.0-5
> > pve-i18n: 1.0-6
> > pve-libspice-server1: 0.12.8-3
> > pve-qemu-kvm: 2.11.2-1
> > pve-xtermjs: 1.0-5
> > qemu-server: 5.0-32
> > smartmontools: 6.5+svn4324-1
> > spiceterm: 3.0-5
> > vncterm: 1.5-3
> > zfsutils-linux: 0.7.9-pve1~bpo9
> >
> > ---
> > Brian Sidebotham
> >
> > Wanless Systems Limited
> > e: brian@wanless.systems
> &

[PVE-User] OVS Internal ports all in state unknown

2018-09-14 Thread Brian Sidebotham
Hi Guys,

We are using Openvswitch networking and we have a physical 1G management
network and two 10G physical links bonded. The physical interfaces show a
state of UP when doing "ip a".

However for the OVS bond, bridges and internal ports we get a state of
"UNKNOWN". Is this expected?

Everything else is essentially working OK - The GUI marks the bond, bridge
and internal ports as active and traffic is working as expected, but I
don't know why the state of these is not UP?

An example of an internal port OVS Configuration in /etc/network/interfaces
(as setup by the GUI):

allow-vmbr1 vlan233
iface vlan233 inet static
address  10.1.33.24
netmask  255.255.255.0
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=233

and ip a output:

14: vlan233:  mtu 1500 qdisc noqueue state
*UNKNOWN* group default qlen 1000
link/ether e2:53:9f:28:cb:2b brd ff:ff:ff:ff:ff:ff
inet 10.1.33.24/24 brd 10.1.33.255 scope global vlan233
   valid_lft forever preferred_lft forever
inet6 fe80::e053:9fff:fe28:cb2b/64 scope link
   valid_lft forever preferred_lft forever

The version we're running is detailed below. We rolled back the kernel as
we were having stability problems with 4.15.8 on our hardware (HP Proliant
Gen8)

root@ :/etc/network# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.13.16-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.7-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

---
Brian Sidebotham

Wanless Systems Limited
e: brian@wanless.systems
m:+44 7739 359 883
o: +44 330 223 3595

The information in this email is confidential and solely for the use of the
intended recipient(s). If you receive this email in error, please notify
the sender and delete the email from your system immediately. In such
circumstances, you must not make any use of the email or its contents.

Views expressed by an individual in this email do not necessarily reflect
the views of Wanless Systems Limited.

Computer viruses may be transmitted by email. Wanless Systems Limited
accepts no liability for any damage caused by any virus transmitted by this
email. E-mail transmission cannot be guaranteed to be secure or error-free.
It is possible that information may be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses. The sender does
not accept liability for any errors or omissions in the contents of this
message, which arise as a result of e-mail transmission.

Please note that all calls are recorded for monitoring and quality purposes.

Wanless Systems Limited.
Registered office: Wanless Systems Limited, Bracknell, Berkshire, RG12 0UN.
Registered in England.
Registered number: 6901359.
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph with differents HDD Size

2018-08-22 Thread Brian :
It's really not a great idea, because the larger drives will tend to
get more writes, so your performance won't be as good as with drives that are
all the same size, where the writes will be distributed more evenly.
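If you do mix sizes, it is worth keeping an eye on per-OSD utilisation; a hedged example of the kind of command involved:

  ceph osd df tree    # shows weight, size and %use per OSD, so uneven fill is easy to spot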

On Wed, Aug 22, 2018 at 8:05 PM Gilberto Nunes
 wrote:
>
> Hi there
>
>
> Is it possible to create a Ceph cluster with 4 servers, which have different
> disk sizes:
>
> Server A - 2x 4TB
> Server B, C - 2x 8TB
> Server D - 2x 4TB
>
> Is this OK?
>
> Thanks
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ubcleand info

2018-04-01 Thread Brian :
Crikey, that's an old version - ubcleand seems to be something to do with OpenVZ.



On Sat, Mar 31, 2018 at 8:00 PM, F00b 4rch  wrote:
> Hi all,
>
> Does someone know what the ubcleand process does?
> I see it running on an old Proxmox 3 and I can't find any info on it in the
> man pages or on the net.
>
> More details :
>
> # uname -a Linux rk1hv2 2.6.32-39-pve #1 SMP Fri May 8 11:27:35 CEST 2015
> x86_64 GNU/Linux
>
> # lsb_release -a No LSB modules are available. Distributor ID: Debian
> Description: Debian GNU/Linux 7.9 (wheezy) Release: 7.9 Codename: wheezy
>
> # pveversion --verbose
> proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
> pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
> pve-kernel-2.6.32-39-pve: 2.6.32-156
> lvm2: 2.02.98-pve4
> clvm: 2.02.98-pve4
> corosync-pve: 1.4.7-1
> openais-pve: 1.1.4-3
> libqb0: 0.11.1-2
> redhat-cluster-pve: 3.2.0-2
> resource-agents-pve: 3.9.2-4
> fence-agents-pve: 4.0.10-2
> pve-cluster: 3.0-17
> qemu-server: 3.4-6
> pve-firmware: 1.1-4
> libpve-common-perl: 3.0-24
> libpve-access-control: 3.0-16
> libpve-storage-perl: 3.0-33
> pve-libspice-server1: 0.12.4-3
> vncterm: 1.1-8 vzctl: 4.0-1pve6
> vzprocps: 2.0.11-2
> vzquota: 3.1-2
> pve-qemu-kvm: 2.2-10
> ksm-control-daemon: 1.1-1
> glusterfs-client: 3.5.2-1
>
> Thanks
>
> Bests regards,
>
> F00b4rch
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Attache disk to other VM

2018-02-25 Thread Brian :
Hi Gregor,

I think you will need to edit VM config at /etc/pve/qemu-server to do this.
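A hedged sketch of what that edit can look like (VM IDs, storage and volume names are made-up): move the disk line from B's config into A's config under a free slot, then let PVE rescan.

  # /etc/pve/qemu-server/101.conf  (VM A) - add a line such as:
  #   scsi1: local-lvm:vm-102-disk-1,size=32G
  # and remove the same line from /etc/pve/qemu-server/102.conf (VM B), then:
  qm rescan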


On Mon, Feb 26, 2018 at 4:50 AM, Gregor Burck  wrote:
> Hi,
>
> I was able to rescue an image from a damaged PVE host with dd (see other
> thread)
>
> I've defined VMs A and B. Since B isn't started (the files are there, I can see
> them with a LiveCD), I want to attach the disk from B to A to copy data.
>
> I don't see how. Or is Move the right button? I could only select a target
> storage, not a VM.
>
> Bye
>
> Gregor
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CEPH Luminous source packages

2018-02-13 Thread Brian :
Hi Mike,

I haven't installed Luminous yet, but if they are doing what they did
with previous packages then they're just using the standard Ceph repo. The
source code will be https://github.com/ceph/ceph/tree/v12.0.0

If you replace 12.0.0 with the exact version of Luminous currently in
the repo and installed, that should be the source code.
On Tue, Feb 13, 2018 at 12:56 AM, Mike O'Connor  wrote:
> Hi All
>
> Where can I find the source packages that the Proxmox Ceph Luminous was
> built from ?
>
>
> Mike
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] User Management question

2017-11-09 Thread Chase, Brian E
The User Management documentation at 
(https://pve.proxmox.com/wiki/User_Management) is insufficient for me to be 
able to create a user with limited permissions.  A couple of examples that I do 
not see in the documentation that would be helpful are:


1.   Create a user that only has console access to specific virtual 
machines, but not others.  This user would not be able to add/delete VMs or 
change any settings on any existing VMs.

2.   Create another user that could create and manage new VMs, but only 
modify VMs that he/she created, with no ability to modify any settings of 
any VMs created by another user.

Examples are the way I learn best, so anyone who can provide the above 
examples would help me a great deal.  I think once I see those two examples, I 
should be able to decipher the rest on my own using the documentation in the 
link shown above.
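For the console-only case, a rough sketch of the kind of pveum commands involved (user, role and VM ID below are made-up; check the current docs for the exact built-in roles and privileges):

  pveum useradd jdoe@pve
  pveum roleadd ConsoleOnly -privs "VM.Console"
  pveum aclmod /vms/101 -user jdoe@pve -role ConsoleOnly

The second case usually also involves a pool plus VM.Allocate on that pool, so the user can create VMs only inside it.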

Thanks,

Brian
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Can't mount external USB drive to container

2017-11-09 Thread Chase, Brian E
I was able to use the GUI to add a USB device and subsequently mount it on a 
guest QEMU Virtual Machine, but those same options are not present in the web 
UI for containers, so I found some related documentation here:

https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines

I followed the instructions, substituting the container number and the desired USB 
device found with the 'lsusb' command, and got this error:

root@pve:~# qm set 103 -usb0 host=0bc2:3322
Configuration file 'nodes/pve/qemu-server/103.conf' does not exist
root@pve:~#

I noticed that it was looking at a directory that caught my attention: the 
"qemu-server" portion of the pathname above set off a red flag and made me 
believe that perhaps ONLY full-blown QEMU virtual machines are supported when 
it comes to attaching USB devices.

Is there anyone out there who can tell me definitively whether or not containers 
support the connection of external USB storage devices?  If so, can you point 
me to documentation that works in order to make this container/USB device 
connection?
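For plain storage (rather than raw USB passthrough), one hedged workaround is to mount the USB disk on the host and bind-mount it into the container (device path, mount point and container ID are placeholders):

  mount /dev/sdX1 /mnt/usbdisk
  pct set 103 -mp0 /mnt/usbdisk,mp=/mnt/usbdisk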

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Any way to patch / force proxmox to support /etc/network/interfaces.d/*?

2017-05-19 Thread Brian :
you could probably cat /etc/network/interfaces.d/* >
/etc/network/interfaces as a horrible hack.
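Roughly (a sketch of the hack, not a supported setup):

  cp /etc/network/interfaces /etc/network/interfaces.bak
  cat /etc/network/interfaces.d/* > /etc/network/interfaces

Note the result no longer sources interfaces.d, so a Chef run that changes those fragments would need to regenerate it.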


On Fri, May 19, 2017 at 1:03 PM, Eugen Mayer  wrote:
> Hallo,
>
> due to the nature of deploying with Chef and configuring my network, 
> interfaces and bridges there, entries in /etc/network/interfaces.d/eth0 .. 
> /etc/network/interfaces.d/vmbr0 are created.
> The issue now is that /etc/network/interfaces basically just includes
>
> source /etc/network/interfaces.d/*
>
> and is otherwise empty. Proxmox does not support that, does not list me any interfaces 
> in the UI and does not let me assign my KVM VM to any interface. Is there any way I 
> could patch that / force that / give Proxmox a static list of interfaces?
>
> Thanks
>
> --
> Eugen
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Destroying VM (sometimes) does not delete HDD

2017-02-02 Thread Brian ::
It would make sense to have a TASK WARN maybe - that would certainly make
you go back and check the output. Or perhaps try to delete the disk first;
if that fails then do nothing else in the task.

Are you using KRBD? It's usually the RBD being mounted with the kernel
module on a different box than the one you are deleting on that causes
this, I think...
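If your Ceph version has it, something along these lines shows who is still holding the image (the pool name is assumed to match the storage name in the log below; the command itself is a hedged suggestion):

  rbd status PVE01-RBD01/vm-107-disk-1    # lists current watchers
  rbd rm PVE01-RBD01/vm-107-disk-1        # retry the removal once no watcher is left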




On Thu, Feb 2, 2017 at 9:55 AM, Florent B  wrote:
> Hi everyone,
>
> On a testing cluster, I have a problem when I destroy some VM, sometimes
> the task is "OK" but VM disk is not removed on RBD.
>
> See the task log :
>
> Removing all snapshots: 50% complete...2017-02-02 10:51:44.123621
> 7fa1a3fff700 -1 librbd::Operations: update notification timed-out
> 2017-02-02 10:51:55.058198 7fa1a3fff700 -1 librbd::Operations: update
> notification timed-out
> Removing all snapshots: 100% complete...
> Removing all snapshots: 100% complete...done.
> image has watchers - not removing
> Removing image: 0% complete...failed.
> rbd: error: image still has watchers
> Could not remove disk 'PVE01-RBD01:vm-107-disk-1', check manually: rbd
> rm 'vm-107-disk-1' error: rbd: error: image still has watchers
> image has watchers - not removing
> Removing image: 0% complete...failed.
> rbd: error: image still has watchers
> rbd rm 'vm-107-disk-1' error: rbd: error: image still has watchers
> TASK OK
>
>
> Maybe the task should not be "OK" if the disk is not removed, no? And it
> deletes the VM config - it is deleted from the PVE view.
>
> I'm running PVE 4.4-12/e71b7a74
>
> Thank you.
>
> Flo
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph: Some trouble creating OSD with journal on a sotware raid device...

2016-12-16 Thread Brian ::
This is probably by design..

On Thu, Dec 15, 2016 at 11:48 AM, Marco Gaiarin  wrote:
>
> Sorry, I came back to this topic because I've done some more tests.
>
> It seems that the 'pveceph' tool has some trouble creating an OSD with the journal
> on a ''nonstandard'' partition, for example on an MD device.
>
> A command like:
>
> pveceph createosd /dev/sde --journal_dev /dev/md4
>
> fails mysteriously (the OSD is added to the cluster, out and down, but on
> the node the service doesn't even get created, e.g. there's nothing to
> restart).
> Disks get partitioned/formatted (/dev/sde, but also /dev/md4).
>
>
> If instead I create a GPT partition on the device, for example of type
> Linux, I can do:
>
> pveceph createosd /dev/sde --journal_dev /dev/md4p1
>
> and creation of the OSD works flawlessly, with only a single warning
> emitted:
>
> WARNING:ceph-disk:Journal /dev/md4p1 was not prepared with ceph-disk. 
> Symlinking directly.
>
> But journal work as expected.
>
>
> I make a note. Thanks.
>
> --
> dott. Marco Gaiarin GNUPG Key ID: 240A3D66
>   Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
>   Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
>   marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
> http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] License issue

2016-11-19 Thread Brian ::
Don't install the license until you're fully comfortable that you have
everything working the way you want it and you won't have any issues!

You can use the no-subscription repo for as long as you need.
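For PVE 4.x on Jessie that repo line looks roughly like this (and the enterprise list can be commented out until a key is installed); double-check the wiki page below for your exact release:

  # /etc/apt/sources.list
  deb http://download.proxmox.com/debian jessie pve-no-subscription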

On Sat, Nov 19, 2016 at 2:38 PM, Marcel van Leeuwen
 wrote:
> Hmmm, also true. I think this surely applies to less experienced Linux users 
> like me, but it also applies when you are not comfortable on a distro…
>
> Cheers,
>
> Marcel
>> On 19 Nov 2016, at 14:50, Kevin Lemonnier  wrote:
>>
>>>
>>> It is usually not required to do re-installs (what for?). [...]
>>>
>>
>> It's so so so so easy to mess up in a cluster and be locked out.
>> Unfortunately the only way is to re-install, and that's basically the
>> only answer you get from both IRC and the forum to those problems.
>>
>> So yes, re-install is unfortunately necessary.
>>
>> --
>> Kevin Lemonnier
>> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] License issue

2016-11-19 Thread Brian ::
Hi Marcel,

Its all explained here https://pve.proxmox.com/wiki/Package_Repositories

Cheers



On Sat, Nov 19, 2016 at 11:14 AM, Marcel van Leeuwen
 wrote:
> Yeah, I agree it's normally not necessary to do re-installs. The reason I did 
> is that I was messing with remote NFS shares in LXC containers. So I did a couple of 
> stupid things (I still have not resolved this issue). I already installed the 
> license and was not aware of the limitation.
>
> For now I've added the pve-no-subscription repository.
>
> What's the difference between the pve-enterprise and the pve-no-subscription 
> repository? Are updates just better tested in the pve-enterprise repo?
>
>> On 19 Nov 2016, at 11:06, Dietmar Maurer  wrote:
>>
>>
>>> I subscribed for a license to support the project and of course to get
>>> updates. Now I'm in a testing phase so I installed my license a couple of
>>> times. I think I hit a maximum because I can't reactivate my license at the
>>> moment. I raised a ticket over at Maurer IT. I was not aware of this
>>> limitation. How do I prevent this from happening again? Just not install the
>>> license, or not re-install Proxmox VE?
>>
>> It is usually not required to do re-installs (what for?). And I guess
>> it is not necessary to activate the subscription for a test system
>> when you know you will reinstall soon (use pve-no-subscription for updates).
>>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Slow speeds when KVM guest is on NFS

2016-11-15 Thread Brian ::
Hi Mikhail

The guest that is running - what type of controller / cache?

Thanks


On Tue, Nov 15, 2016 at 10:05 PM, Mikhail <m...@plus-plus.su> wrote:
> On 11/16/2016 12:43 AM, Brian :: wrote:
>> What type of disk controller and what caching mode are you using?
>
> The storage server is built with 4 x 4TB ST4000NM0034 Seagate disks,
> attached to LSI Logic SAS3008 controller. Then there's Debian Jessie
> with software RAID10 using MDADM. This space is given to Proxmox host
> via iSCSI + LVM via 10 gbit ethernet. There's 32GB of RAM in this
> storage server, so almost all this RAM can be used for cache (nothing
> else runs there).
>
> I ran various tests on the storage server locally (created local LV,
> formatted it to EXT4 and ran there various disk-intensive tasks such as
> copying big files, etc). My average write speed to this MDADM raid10 /
> LVM / Ext4 filesystem is about 70-80 MB/s. I guess it should be much
> faster than that, but I can't find out where the bottleneck is in this
> setup..
>
> # cat /proc/mdstat
> Personalities : [raid10]
> md0 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
>   7811819520 blocks super 1.2 512K chunks 2 near-copies [4/4] []
>   bitmap: 11/59 pages [44KB], 65536KB chunk
>
> unused devices: 
>
> # pvs
>   PV VG   Fmt  Attr PSize PFree
>   /dev/md0   vg0  lvm2 a--  7.28t 1.28t
>
> Thanks.
>
>>
>>
>>
>> On Tue, Nov 15, 2016 at 9:36 PM, Mikhail <m...@plus-plus.su> wrote:
>>> On 11/16/2016 12:33 AM, Brian :: wrote:
>>>> 90.4 MB/s isn't that far off.
>>>
>>> Hello,
>>>
>>> Yes, but I'm only able to get these results when doing simple "dd" test
>>> directly on Proxmox host machine inside NFS-mounted directory. KVM
>>> guest's filesystem is not getting even 1/4 of that speed when it's disk
>>> resides on the very same NFS (a Debian installation from the stock ISO takes
>>> ~an hour to copy the first half of its files..)
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Slow speeds when KVM guest is on NFS

2016-11-15 Thread Brian ::
What type of disk controller and what caching mode are you using?



On Tue, Nov 15, 2016 at 9:36 PM, Mikhail <m...@plus-plus.su> wrote:
> On 11/16/2016 12:33 AM, Brian :: wrote:
>> 90.4 MB/s isn't that far off.
>
> Hello,
>
> Yes, but I'm only able to get these results when doing simple "dd" test
> directly on Proxmox host machine inside NFS-mounted directory. KVM
> guest's filesystem is not getting even 1/4 of that speed when it's disk
> resides on the very same NFS (a Debian installation from the stock ISO takes
> ~an hour to copy the first half of its files..)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Slow speeds when KVM guest is on NFS

2016-11-15 Thread Brian ::
Ignore my reply - just reread the thread fully :)

NFS should work just fine.. no idea why you are seeing those lousy speeds.


On Tue, Nov 15, 2016 at 9:33 PM, Brian :: <b...@iptel.co> wrote:
> 90.4 MB/s isn't that far off.
>
>
> On Tue, Nov 15, 2016 at 5:25 PM, Mikhail <m...@plus-plus.su> wrote:
>> On 11/15/2016 06:09 PM, Gerald Brandt wrote:
>>> I don't know if it helps, but I always switch to NFSv4.
>>
>> Thanks for the tip. This did not help. I also tried with various caching
>> options (writeback, writethrough, etc) and RAW disk format instead of
>> qcow2 - nothing changed.
>>
>> I also have LVM over iSCSI export to that Proxmox host, and using LVM
>> over network (to the same storage server) I'm seeing expected speeds
>> close to 1gbit.
>>
>> So this means something is either wrong with NFS export options, or
>> something related to that part.
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Slow speeds when KVM guest is on NFS

2016-11-15 Thread Brian ::
90.4 MB/s isn't that far off.


On Tue, Nov 15, 2016 at 5:25 PM, Mikhail  wrote:
> On 11/15/2016 06:09 PM, Gerald Brandt wrote:
>> I don't know if it helps, but I always switch to NFSv4.
>
> Thanks for the tip. This did not help. I also tried with various caching
> options (writeback, writethrough, etc) and RAW disk format instead of
> qcow2 - nothing changed.
>
> I also have LVM over iSCSI export to that Proxmox host, and using LVM
> over network (to the same storage server) I'm seeing expected speeds
> close to 1gbit.
>
> So this means something is either wrong with NFS export options, or
> something related to that part.
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph down?

2016-10-12 Thread Brian ::
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/

24 hours down so far. Can't wait to read the RFO

On Wed, Oct 12, 2016 at 4:21 PM, Karsten Becker
 wrote:
> Hi,
>
> I can confirm that I was not able to reach the documentation sections of
> the Ceph pages yesterday... they were down also.
>
> So just drink some tea.
>
> Regards
> Karsten
>
>
>
> On 12.10.2016 17:16, Marco Gaiarin wrote:
>>
>> 'download.ceph.com' seems down.
>>
>> So, a simple:
>>   pveceph install
>>
>> stalls. After fiddling a bit, I've done:
>>
>>  root@capitanamerica:~# diff -ud /usr/share/perl5/PVE/CLI/pveceph.pm.orig 
>> /usr/share/perl5/PVE/CLI/pveceph.pm
>>  --- /usr/share/perl5/PVE/CLI/pveceph.pm.orig 2016-10-12 17:11:37.433742652 
>> +0200
>>  +++ /usr/share/perl5/PVE/CLI/pveceph.pm  2016-10-12 17:11:49.329745187 
>> +0200
>>  @@ -119,7 +119,7 @@
>>
>>   my $source = $devrepo ?
>>   "deb 
>> http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/$devrepo jessie 
>> main\n" :
>>  -"deb http://download.ceph.com/debian-$cephver jessie main\n";
>>  +"deb http://eu.ceph.com/debian-$cephver jessie main\n";
>>
>>   PVE::Tools::file_set_contents("/etc/apt/sources.list.d/ceph.list", 
>> $source);
>>
>>
>> probably a '--mirror' option to 'pveceph install' can be useful...
>>
>
>
> Ecologic Institut gemeinnuetzige GmbH
> Pfalzburger Str. 43/44, D-10717 Berlin
> Geschaeftsfuehrerin / Director: Dr. Camilla Bausch
> Sitz der Gesellschaft / Registered Office: Berlin (Germany)
> Registergericht / Court of Registration: Amtsgericht Berlin (Charlottenburg), 
> HRB 57947
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Brian ::
Hi Lindsay

I think with clusters with a VM-type workload, at the scale that
Proxmox users tend to build (< 20 OSD servers), a cache tier adds a
layer of complexity that isn't going to pay back. If you want decent
IOPS / throughput at this scale with Ceph, no spinning rust allowed
anywhere :)

Regards


On Sat, Oct 8, 2016 at 11:21 PM, Lindsay Mathieson
 wrote:
> On 9/10/2016 7:45 AM, Lindsay Mathieson wrote:
>>
>> cache tiering was limited and a poor fit for VM Hosting, generally the
>> performance was with it
>
>
> "was *worse* with it"
>
>
> :)
>
>
> --
> Lindsay Mathieson
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Manager Skins/Themes

2016-10-03 Thread Brian ::
Jesus - someone got out of bed on the wrong side today!

I've just been working on something that has had me stuck in the 4.3
UI for the past 48 hours on and off.
Personally I like it, but that's just my opinion - and I did give the
guys feedback after I upgraded.

I'm sure some things can be improved, but certainly my opinion isn't
anywhere near as negative as yours.



On Mon, Oct 3, 2016 at 4:11 PM, John Crisp  wrote:
> Is there any way that there could be a choice of skins/themes for the
> manager ? You just get used to one, and yet another layout change comes
> barreling down.
>
> The latest 4.3 update IMHO is absolutely awful with the second vertical
> column. It is confusing to the eye and just looks messy. It makes it
> even harder to look at on a mobile screen with the extra column.
>
> Clearly it is only half thought out, with an almost completely blank bar to
> the right of 'Summary' where the menu (sensibly) used to go, with just
> the "Averages" dropdown far right.
>
> If I knew how I'd try and hack it back myself but get lost in a sea of
> JS and CSS. I'd already gotten fed up with the flat 'bile' green in
> graphs and can't figure where to change it to something less offensive.
> The extra vertical column just completes the set.
>
> Can't you have a themes manager and then at least we can have a choice
> rather than having this unpleasantness shoved down our throats ?
>
> Or is there somewhere you have notes on the style so we can modify
> colours etc to suit ourselves? I have been trying to find this
> 'vertical' change in git so I could reverse it out, but cannot see it.
> Some notes on this would be appreciated.
>
> "A big THANK-YOU to our active community for all feedback, testing, bug
> reporting and patch submissions. "
>
> I'd like to know where all the 'feedback' came from for this change ?
>
> Rgds
> John
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage migration issue with thin provisionning SAN storage

2016-10-03 Thread Brian ::
Hi Alexandre,

If the guests are Linux you could try using the SCSI driver with discard enabled.

Running fstrim -v / in the guest may then free the unused space on the underlying FS.

I don't use LVM but this certainly works with other types of storage..
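A hedged sketch of that setup (VM ID, storage and volume names are placeholders):

  qm set <vmid> -scsihw virtio-scsi-pci
  qm set <vmid> -scsi0 <storage>:vm-<vmid>-disk-1,discard=on
  # then, inside the guest:
  fstrim -v /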





On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
 wrote:
> Hello,
>
> I'm actually migrating more than 1000 VMs from VMware to Proxmox, but I'm 
> hitting a major issue with storage migrations.
> Actually I'm migrating from VMFS datastores to NFS on VMware, then from NFS 
> to LVM on Proxmox.
>
> LVMs on Proxmox are on top thin provisionned (FC SAN) LUNs.
> Thin provisionning works fine on Proxmox newly created VMs.
>
> But, i just discovered that when using qm move_disk to migrate from NFS to 
> LVM, it actually allocates all blocks of data !
> It's a huge problem for me and clearly a nogo... as the SAN storage arrays 
> are filling up very quickly !
>
> After further investigations, in qemu & proxmox... I found in proxmox code 
> that qemu_drive_mirror is called with those arguments :
>
> (In /usr/share/perl5/PVE/QemuServer.pm)
>
>5640 sub qemu_drive_mirror {
> ...
>5654 my $opts = { timeout => 10, device => "drive-$drive", mode => 
> "existing", sync => "full", target => $qemu_target };
>
> If I'm not wrong, QEMU supports the "detect-zeroes" flag for mirroring block 
> targets, but Proxmox does not use it.
> Is there any reason why this flag is not enabled during qemu drive mirroring?
>
> Cheers,
> Alexandre.
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] New Disks Section

2016-09-30 Thread Brian ::
Thanks Fabian.


On Fri, Sep 30, 2016 at 11:51 AM, Fabian Grünbichler
<f.gruenbich...@proxmox.com> wrote:
> On Fri, Sep 30, 2016 at 11:36:56AM +0100, Brian :: wrote:
>> Hi guys
>>
>> This doesn't seem to work for me..
>>
>> I get blank screen in disks section of gui.
>>
>> The command you use in Diskmange.pm translates to:
>>
>> /usr/sbin/smartctl  -a -f brief /dev/device
>>
>> If I run that for /usr/sbin/smartctl  -a -f brief /dev/sda I get tons
>> of info about the device so that works.
>>
>> Looking at the logs when I access the disks section of the gui:
>>
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
>> isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
>> line 91.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "was" isn't
>> numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 92.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "completed" isn't
>> numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 93.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
>> isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
>> line 91.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "was" isn't
>> numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 92.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "completed" isn't
>> numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 93.
>> Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
>> isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
>> line 91.
>>
>> 
>>
>> So something isn't happy..
>>
>> If I can provide anymore info let me know.
>>
>> Thanks
>
> this was already reported[1] and should be fixed with the next
> libpve-storage-perl update[2,3]. at least the diskmanage view should not
> be empty any more ;) in your case, you should also see the smart
> status/attributes. NVME and SCSI users will have to wait for another
> round of fixes/updates.
>
> 1: https://bugzilla.proxmox.com/show_bug.cgi?id=1126
> 2: 
> https://git.proxmox.com/?p=pve-storage.git;a=commit;h=0c486b09dfc686d24d8b6259424876efda1f
> 3: 
> https://git.proxmox.com/?p=pve-storage.git;a=commit;h=1c9995536424825e4ceca7702d8bfd337acf0f4e
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] New Disks Section

2016-09-30 Thread Brian ::
Hi guys

This doesn't seem to work for me..

I get a blank screen in the Disks section of the GUI.

The command you use in Diskmanage.pm translates to:

/usr/sbin/smartctl  -a -f brief /dev/device

If I run that for /usr/sbin/smartctl  -a -f brief /dev/sda I get tons
of info about the device so that works.

Looking at the logs when I access the disks section of the gui:

Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
line 91.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "was" isn't
numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 92.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "completed" isn't
numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 93.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
line 91.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "was" isn't
numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 92.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "completed" isn't
numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm line 93.
Sep 30 11:34:19 server-vm3 pvedaemon[3066]: Argument "Read_scanning"
isn't numeric in addition (+) at /usr/share/perl5/PVE/Diskmanage.pm
line 91.



So something isn't happy..

If I can provide anymore info let me know.

Thanks
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] storpool

2016-09-14 Thread Brian ::
Anyone looked at it or considered using it with Proxmox?


https://storpool.com/
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph - Better understanings

2016-05-27 Thread Brian ::
It's pretty straightforward. If you have 6TB of raw capacity and you are using size =
3 in Ceph, you have 2TB (give or take) of usable storage. In your case, 3 x 400GB =
1200GB raw works out to roughly 400GB usable.


On Fri, May 27, 2016 at 9:27 PM, Daniel Eschner  wrote:
> Hi all,
>
> I am playing with Ceph and there are some things I don't understand.
> The Proxmox docs tell how to set it up easily - everything is working.
> But I want to understand how the replication and redundancy work.
>
> I set up 3 SSD OSDs and created a pool with size 3 and min size 2.
>
> Proxmox tells me the available disk size is 1200GB (3x 400GB SSDs).
> How can that be redundant when I use 1TB of storage?
>
> I see 6GB used, but when I look at the Ceph storage it tells me 18GB used - that 
> means replicated on all SSDs.
> But usable is 1.2TB - I don't understand how that can be - it's impossible for 
> my understanding.
>
> I didn't find any good explanation on the web :-(
>
>
> https://www.dropbox.com/s/jhzjq7773n0wn5t/Screenshot%202016-05-27%2022.18.47.png?dl=0
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] KVM Guest Ubuntu goes RO-Filesystem every some night

2016-05-20 Thread Brian ::
Eneko is probably onto something - any power management enabled?

On Fri, May 20, 2016 at 7:52 AM, Christopher Meyering
<christopher.meyer...@dia-software.de> wrote:
> Hi,
>
> the servers are "brand new" (oldest bought about 4 months ago).
> The following bios is used:
> Vendor: Intel Corporation
> Version: SE5C610.86B.01.01.0009.060120151350
> Release Date: 06/01/2015
>
> the logs on the guest don't say anything (bad luck: they live on the
> partition which went RO), and there isn't anything strange in the host
> syslog / kern.log either.
>
> I forgot one thing:
> The guests' drives live in a ZFS store on the RAID systems.
> Maybe it's important to know this.
>
> I'm thinking of switching to the Ubuntu virtual kernel, but I'm not quite sure
> if it will help or break even more stuff.
>
> greetings,
> chris
>
>
> Am 20.05.2016 um 08:42 schrieb Brian :::
>>
>> Is there anything in logs on guest or host around that time?
>>
>> On Fri, May 20, 2016 at 7:42 AM, Eneko Lacunza <elacu...@binovo.es> wrote:
>>>
>>> Hi,
>>>
>>> El 20/05/16 a las 08:19, Christopher Meyering escribió:
>>>>
>>>> Hi folks,
>>>>
>>>> Maybe someone of you has some idea how to get rid of my strange
>>>> "ro-filesytem" problem.
>>>>
>>>> First of all some basics:
>>>> Proxmox runs on a potent host, powering the kvm-guests with 2 raids. (1
>>>> ssd raid & 2 sas raid)
>>>> In our pve cluster we currently run 4 identical hosts with overall 50
>>>> kvm
>>>> guests on it.
>>>>
>>>> Every guest runs ubuntu server 14.04 with webservers, mysql & solr
>>>> services.
>>>>
>>>> Since 3 weeks ago some guests started to fall into ro-state root
>>>> filesystems at night without any noticeable reason.
>>>> The stress on the hosts and guests is quite normal at these times and
>>>> the
>>>> problems occure on different times between 01:00 and 06:30 in the
>>>> morning.
>>>> Occasionally I can see some serious CPU soft lockups with over 40s stuck
>>>> time.
>>>
>>>
>>> This is on the PVE hosts? What brand/model are the servers, BIOS
>>> version/date? (dmidecode)
>>>
>>> If BIOS is old (pre-2010 I think), there can be power management
>>> problems.
>>> You can also check power management in BIOS/UEFI and limit power saving
>>> to
>>> see if that helps.
>>>
>>> Cheers
>>> Eneko
>>>
>>> --
>>> Zuzendari Teknikoa / Director Técnico
>>> Binovo IT Human Project, S.L.
>>> Telf. 943493611
>>>943324914
>>> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
>>> www.binovo.es
>>>
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
> --
>
> Kind regards
> Christopher Meyering
> Technical Consultant
>
> DIA Connecting Software
> 
>
> DIA Connecting Software GmbH & Co. KG · Menkestraße 23 · 26419 Schortens
> Fon 04461 / 899 89 39 · Fax 04461 / 899 89 41
> www.dia-software.de <http://www.dia-software.de> ·
> christopher.meyer...@dia-software.de
> <mailto:christopher.meyer...@dia-software.de>
>
> Personally liable partner: DIA Connecting Software Beteiligungs GmbH
> Commercial register Wilhelmshaven HR B 131 604
> Managing directors: Dipl.-Ing. Ansgar Gallas, Marco Walther
> Court of registration: Amtsgericht Oldenburg
> Commercial register Wilhelmshaven: HR A 13 08 89
> VAT identification number: DE813971952
>
> The information in this email is confidential and intended solely for the
> addressee. Any access to this email by persons other than the addressee is
> prohibited. If you are not the intended addressee of this email, any
> publication, reproduction or forwarding, as well as taking or omitting any
> action in reliance on the information obtained, is prohibited. Any opinions
> or recommendations contained in this email are subject to the terms of the
> relevant customer relationship with the addressee.
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph or Gluster

2016-04-22 Thread Brian ::
Hi Mohamed

10Gbps or faster at a minimum, or you will have pain. Even with 4
nodes and 4 spinner disks in each node you will be maxing out a
1Gbps network. For any backfills or adding new OSDs you don't want to
be waiting on 1Gbps Ethernet speeds.

Dedicated 10Gbps network for ceph communication at a minimum and you
will have nice results.



On Fri, Apr 22, 2016 at 2:00 PM, Mohamed Sadok Ben Jazia
 wrote:
> Thank you Eneko,
> I read in the Proxmox forum that distributed storage needs 10Gbit or faster on
> the local network, and a dedicated network.
> Could you describe the infrastructure you used, to see if it matches those
> conditions?
>
>
>
> On 22 April 2016 at 12:06, Eneko Lacunza  wrote:
>
>> Hi Mohamed,
>>
>> El 22/04/16 a las 12:42, Mohamed Sadok Ben Jazia escribió:
>>
>>> Hello list,
>>> In order to set up a highly scalable Proxmox infrastructure with a number of
>>> clusters, I plan to use a distributed storage system; I have some
>>> questions about this.
>>> 1- I have a choice between Ceph and Gluster; which is better for Proxmox?
>>>
>> I have no experience with Gluster, Ceph has been great for our use.
>>
>>> 2- Is it better to install one of those systems on the nodes or on
>>> separate servers?
>>>
>> Better on separate systems, but it works quite well on the same systems if
>> the load is OK. The Proxmox Ceph Server integration is very nice and saves lots
>> of work.
>>
>>> 3- Can this architecture deliver a stable product, with VM and LXC
>>> migration (not live migration), storing backups and snapshots, ISO
>>> files and LXC container templates?
>>>
>> In order to use Ceph for backups and ISO/templates, you'll have to use
>> CephFS. It is considered experimental in current Ceph version in Proxmox
>> (Hammer) but today a new Ceph LTS version has been released (Jewel), that
>> marks CephFS stable and production-ready. I think this will be integrated
>> in Proxmox shortly, developers are talking about this in the mailing list
>> today.
>>
>> I use NFS for backups and ISO/templates. For our storage needs this is
>> enough.
>>
>> Cheers
>> Eneko
>>
>> --
>> Zuzendari Teknikoa / Director Técnico
>> Binovo IT Human Project, S.L.
>> Telf. 943493611
>>   943324914
>> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
>> www.binovo.es
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] "failed to access perfctr " on debian 8 KVM

2016-04-17 Thread Brian ::
Seemingly it's not an error; it says that the CPU doesn't support
performance counters.

QEMU doesn't support them - if you don't need them, don't worry about it.


On Sun, Apr 17, 2016 at 6:04 PM, sebast...@debianfan.de
 wrote:
> Hello,
>
> While starting a Debian 8 KVM, there is a message "failed to access perfctr"
> in the virtual KVM window.
>
> Is this a problem of the KVM or of the host ?
>
> thx
>
> Sebastian
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ipmi_si ipmi_si.0: Could not set the global enables: 0xcc.

2016-03-29 Thread Brian ::
Hi

my kernel log is filled with

[323360.835740] ipmi_si ipmi_si.0: Could not set the global enables: 0xcc.

every time we poll with our monitoring software (3 entries every 5 minutes).

I believe it's fixed:
https://sourceforge.net/p/openipmi/mailman/message/34383470/

Is there any chance we can get this patch into the PVE 4 kernel?

https://forum.proxmox.com/threads/kernel-ipmi_si-ipmi_si-0-could-not-set-the-global-enables-0xcc.24920/

Thanks,
Brian
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] BTRFS...

2016-02-02 Thread brian mullan
Gilberto,

I have used btrfs for almost 2 years and like it a lot for its features.
I've not used it with Proxmox though.

As a matter of fact, I just changed my machine to use btrfs raid10 (btrfs RAID,
not mdadm RAID). In btrfs this is only 1 command:

reference:  btrfs with multiple devices
<https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices>
example:
# Use raid10 for both data and metadata
mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde


However, I have learned a lot over that time also.

First, I don't think it's yet recommended to run KVM on btrfs.  This could
have changed, but it would be wise to understand what the concerns were.

I think it had to do with the COW algorithm of btrfs churning with every
change made to the running KVM image file... causing high CPU etc.

reference:http://www.linux-kvm.org/page/Tuning_KVM

see the statement at the bottom...
*Don't use the linux filesystem btrfs on the host for the image files. It
will result in low IO performance. The kvm guest may even freeze when high
IO traffic is done on the guest. *

However, LXC containers on the other hand actually work wonderfully with
btrfs, as you can specify the LXC container "backingstore" to be btrfs and
cloning etc. becomes almost instantaneous (a quick sketch follows the quoted
man page text below).




"If 'btrfs' is specified, then the target filesystem must be btrfs,
and the container rootfs will be created as a new subvolume. This
allows snapshotted clones to be created, but also causes rsync
--one-filesystem to treat it as a separate filesystem."
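
A minimal sketch of that workflow with plain LXC tooling (container names,
distribution and release are just examples, not something from this thread):

# rootfs becomes a btrfs subvolume, so the clone below is a cheap snapshot
lxc-create -n web01 -B btrfs -t download -- -d debian -r jessie -a amd64
lxc-clone -s web01 web02   # run with web01 stopped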


So today I have my KVM images run off an ext4 disk but my lxc containers
are on my main btrfs raid10.

Others may have better insight.

Brian
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Kernel Panic with usb port mapping

2015-05-05 Thread Brian Hart
Hello!

We updated an older install of proxmox over the weekend from 3.1 up to
3.4.  A VM that we have that uses a couple of USB devices (following this
documentation: https://pve.proxmox.com/wiki/USB_physical_port_mapping) will
no longer start up.  When we try to start this VM with the USB parameters in
the VM's config file it causes a kernel panic and locks up the hardware.
This is a Dell server with a DRAC controller as well, and when this happens
it also somehow kills the DRAC, to the point where we can't even access the server.
It eventually reboots on its own and comes back to life.

Has something changed with the USB support in the newer versions?  Is that
documentation obsolete and is there maybe a new method of doing this?  Or is
this a bug that was introduced in a newer version?
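
For reference, a sketch of the config-style mapping that newer PVE releases
offer instead of raw qemu args (my own note, not an answer from this thread,
and I haven't verified that it exists on 3.4; the VM id and USB ids are
examples only):

# find the vendor:product id of the device, then map it into the VM config
lsusb
qm set 101 -usb0 host=0951:1665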

Thanks,
Brian
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Hotplug Memory boots VM with 1GB

2015-04-15 Thread Brian Hart
Alexandre --

Thanks for the response.  Yes, ACPI hotplug does work and is built into the
kernel in CentOS 6.  ACPI hotplug has actually been supported since CentOS 5
as well.  I am able to hotplug disks, network, CPU and memory once the
system is fully booted, and ACPI does identify the event.  It just seems to
be a timing issue: when Proxmox hotplugs the memory at boot, I think
Linux/ACPI isn't ready for it that early (at least for CentOS 6).
Hope this helps - let me know if I can gather any other information for
you.  We really weren't looking to move to CentOS 7 for a while since
CentOS 6 isn't EOL until 2020 and there is some reluctance in our group
around some of the changes in RHEL/Cent 7.

Thanks,
Brian





On Wed, Apr 15, 2015 at 1:27 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Yes, I do have the udev rules in place and when I increase the memory in
 the web gui from say 4GB to 4.5GB it will add all of the memory that it was
 missing from boot plus the additional 500. When I look in
 /sys/devices/system/memory it only lists the memory modules that were
 from when the system was on. I can also see in dmesg output Hotplug Mem
 Device from during boot but its almost like the kernel isn't getting
 notified during boot and which also keeps udev from ever seeing it and
 setting it to online. CentOS 6 should have no problem with hotplugging
 memory but it doesn't seem to take it with the way Proxmox handles it at
 boot.

 So it seems that ACPI hotplug is not working.
 Can you hotplug a virtio disk, for example?

 Also, I'm not sure that the CentOS 6 2.6.32 kernel has support for memory
 hotplug (maybe Red Hat backported it from the 3.10 kernel, but I can't confirm).


 - Mail original -
 De: Brian Hart brianh...@ou.edu
 À: aderumier aderum...@odiso.com
 Cc: proxmoxve pve-user@pve.proxmox.com
 Envoyé: Lundi 13 Avril 2015 04:36:43
 Objet: Re: [PVE-User] Hotplug Memory boots VM with 1GB

 Hi - Thanks for the response!
 Yes, I do have the udev rules in place and when I increase the memory in
 the web gui from say 4GB to 4.5GB it will add all of the memory that it was
 missing from boot plus the additional 500. When I look in
 /sys/devices/system/memory it only lists the memory modules that were from
 when the system was on. I can also see in dmesg output Hotplug Mem Device
 from during boot but its almost like the kernel isn't getting notified
 during boot and which also keeps udev from ever seeing it and setting it to
 online. CentOS 6 should have no problem with hotplugging memory but it
 doesn't seem to take it with the way Proxmox handles it at boot.

 Thanks!

 Brian



 On Sun, Apr 12, 2015 at 11:18 AM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Hi,

 do you have the udev rules

 /lib/udev/rules.d/80-hotplug-cpu-mem.rules (not sure about centos path)

 SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline",
 ATTR{state}="online"


 This is udev which enable the memory module after acpi hotplug.

 - Mail original -
 De: Brian Hart  brianh...@ou.edu 
 À: proxmoxve  pve-user@pve.proxmox.com 
 Envoyé: Vendredi 10 Avril 2015 18:28:15
 Objet: Re: [PVE-User] Hotplug Memory boots VM with 1GB

 Hello All,
 I had some more time to play with memory hotplugging and was trying to
 make this work again. I noticed in the wiki it says it requires a kernel
 greater than 3.10. I was trying to get this to work with CentOS 6 systems
 as that is primarily what we are running and it is not on a 3.x kernel yet.
 However, Cent 6 does support hotplugging memory and CPU. I think the issue
 though is the memory is hotplugged at an odd time that the VM isn't
 ready for it with CentOS 6. Once the system boots I still see 987MB of
 starting memory and if I increase the amount of memory through the proxmox
 GUI the VM immediately detects all of the memory it was supposed to have
 plus the additional memory I just added. I have also found that I can
 manually tell it to probe for memory by doing an echo 0x1 >
 /sys/devices/system/memory/probe and it will find one piece (you have to
 do this repeatedly for every module if it doesn't auto-detect the hotplug).

 Anybody have an idea of how to tell CentOS 6 to detect the hotplugged
 memory after boot? Also, I'm curious why the additional memory has to be
 hotplugged? When using VMware it doesn't do it in this way so how come
 KVM/QEMU does?

 Thanks,

 Brian


 On Sun, Mar 8, 2015 at 8:16 PM, Brian Hart  brianh...@ou.edu  wrote:



 Ah, you're absolutely right. It is in the wiki. I had skipped to the
 bottom part of the wiki that said Memory Hotplug' and overlooked the part
 that referenced Linux guests specifically. It looks like it is probably
 udev rules I'm missing. I will look at adding that in and try again.
 Thank you for the follow up!

 Brian



 On Sun, Mar 8, 2015 at 12:16 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:

 I have updated the wiki last month about memory & cpu hotplug

 https://pve.proxmox.com/wiki

Re: [PVE-User] Hotplug Memory boots VM with 1GB

2015-04-12 Thread Brian Hart
Hi - Thanks for the response!

Yes, I do have the udev rules in place, and when I increase the memory in
the web GUI from say 4GB to 4.5GB it will add all of the memory that was
missing from boot plus the additional 500MB.  When I look in
/sys/devices/system/memory it only lists the memory modules that were
present when the system came up.  I can also see "Hotplug Mem Device" in the
dmesg output from boot, but it's almost like the kernel isn't getting
notified during boot, which also keeps udev from ever seeing it and
setting it to online.  CentOS 6 should have no problem with hotplugging
memory, but it doesn't seem to take it with the way Proxmox handles it at
boot.
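
In the meantime, a workaround sketch for anyone hitting the same thing (my own
note, not something confirmed in this thread; it assumes the standard sysfs
memory interface and needs root) is to online whatever blocks the kernel has
registered but udev never switched on:

# online any memory blocks that are present but still offline
for m in /sys/devices/system/memory/memory*/state; do
  grep -q offline "$m" && echo online > "$m"
done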

Thanks!

Brian



On Sun, Apr 12, 2015 at 11:18 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,

 do you have the udev rules

 /lib/udev/rules.d/80-hotplug-cpu-mem.rules  (not sure about centos path)

 SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline",
 ATTR{state}="online"


 This is udev which enable the memory module after acpi hotplug.

 - Mail original -
 De: Brian Hart brianh...@ou.edu
 À: proxmoxve pve-user@pve.proxmox.com
 Envoyé: Vendredi 10 Avril 2015 18:28:15
 Objet: Re: [PVE-User] Hotplug Memory boots VM with 1GB

 Hello All,
 I had some more time to play with memory hotplugging and was trying to
 make this work again. I noticed in the wiki it says it requires a kernel
 greater than 3.10. I was trying to get this to work with CentOS 6 systems
 as that is primarily what we are running and it is not on a 3.x kernel yet.
 However, Cent 6 does support hotplugging memory and CPU. I think the issue
 though is the memory is hotplugged at an odd time that the VM isn't
 ready for it with CentOS 6. Once the system boots I still see 987MB of
 starting memory and if I increase the amount of memory through the proxmox
 GUI the VM immediately detects all of the memory it was supposed to have
 plus the additional memory I just added. I have also found that I can
 manually tell it to probe for memory by doing an echo 0x1 >
 /sys/devices/system/memory/probe and it will find one piece (you have to
 do this repeatedly for every module if it doesn't auto-detect the hotplug).

 Anybody have an idea of how to tell CentOS 6 to detect the hotplugged
 memory after boot? Also, I'm curious why the additional memory has to be
 hotplugged? When using VMware it doesn't do it in this way so how come
 KVM/QEMU does?

 Thanks,

 Brian


 On Sun, Mar 8, 2015 at 8:16 PM, Brian Hart  brianh...@ou.edu  wrote:



 Ah, you're absolutely right. It is in the wiki. I had skipped to the
 bottom part of the wiki that said Memory Hotplug' and overlooked the part
 that referenced Linux guests specifically. It looks like it is probably
 udev rules I'm missing. I will look at adding that in and try again.
 Thank you for the follow up!

 Brian



 On Sun, Mar 8, 2015 at 12:16 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:

 I have updated the wiki last month about memory & cpu hotplug

 https://pve.proxmox.com/wiki/Hotplug_%28qemu_disk,nic,cpu,memory%29

 - Mail original -
 De: Brian Hart  brianh...@ou.edu 
 À: proxmoxve  pve-user@pve.proxmox.com 
 Envoyé: Samedi 7 Mars 2015 06:36:32
 Objet: [PVE-User] Hotplug Memory boots VM with 1GB

 Hello -
 I'm not sure if I'm running into a bug or if I'm missing something. I have
 done several tests playing with different settings for a VM and have found
 that consistently whenever I set Hotplug to allow memory it limits the
 guest VM to 1GB of memory even if I have more assigned to it at startup.
 Has anybody else seen this? Is there something special about the hotplug
 that this is expected that I'm not aware of?

 Thanks,
 Brian Hart




 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user








 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Using raw LVM without partitions inside VM

2015-04-12 Thread Brian Hart
Thanks everyone for the replies.  I've played with the LVM Filters before
but it didn't occur to me to do that in this scenario.  I'll give that a
shot!
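
For the record, a minimal sketch of such a filter in /etc/lvm/lvm.conf on the
host (the device names are assumptions for illustration, not the exact config
from this thread; adjust them to whatever the host's real PVs actually live on):

devices {
    # accept the host's own disks/multipath devices, reject everything else,
    # so the host never activates LVM metadata living inside a VM's disk
    filter = [ "a|^/dev/sda|", "a|^/dev/mapper/mpath|", "r|.*|" ]
}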

Thanks,
Brian




On Sun, Apr 12, 2015 at 11:26 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,

 I have had a customer with same problem, (raw lvm in guest + lvm disk on
 host for the vms).

 The problem is that the host is seeing lvm disks from the guests because
 of vgscan/lvscan.

 The solution was to use filtering in lvm.conf of the host, to only scan the
 hosts devices.

 I don't remember the config 'filter = [.], sorry

 - Mail original -
 De: Brian Hart brianh...@ou.edu
 À: proxmoxve pve-user@pve.proxmox.com
 Envoyé: Samedi 11 Avril 2015 05:17:57
 Objet: [PVE-User] Using raw LVM without partitions inside VM

 Hello everybody,
 For a long time now I've used raw LVM on disks inside of virtual machines
 without using disk partitions. I create a separate small disk to serve for
 the boot drive and give it a partition. This is formatted and mounted in
 /boot. Then we create a separate disk to contain everything else in an LVM
 structure. Outside of Proxmox this is perfectly acceptable as long as you
 do not need to boot from the device which we do not since we create that
 separate device. The partition table would only serve as a method for the
 bios to interact with the disk for boot purposes. The main advantage here
 is it makes the non-boot sections of the system very fluid and makes adding
 removing space on a live system SO much easier without having to worry
 about the restrictions of a partition table.

 We've been doing this successfully in VMware for a long time but only
 today did we attempt this in Proxmox and ran into a serious issue which
 long story short - resulted in the loss of a disk. I understand what went
 wrong and why this happened and luckily it was just a template that it
 happened to so nothing major lost, we can rebuild it. On Proxmox we use an
 iSCSI SAN with multipath connections for our backend storage so we do LVM
 on proxmox for our disks for our VMs. I know some answers on the forum are
 to use partitions and I understand why that is the answer given but we do
 this very intentionally with a deep understanding of how it would normally
 work. The reason it doesn't is because of how the disks are handled on LVM
 backed storage on the host in this case.

 What I am hoping for are alternate suggestion on how we can use raw LVM on
 disks with proxmox? Do we need to use a different storage method? Would
 this same problem exist some how with qcow2 files or on a ZFS backed
 storage (such as ZFS over iSCSI)? It seems like it shouldn't for the same
 reasons it doesn't happen on VMware with VMDK files but I wanted to be
 sure. If I understand the issue correctly it should only be because we're
 doing LVM on a raw block device and so the proxmox host sees that directly.
 I would expect something like a qcow2 file would sufficiently shield it but
 maybe not the ZFS over iSCSI(?) I'm not sure. I'm basically looking for any
 creative solutions to accomplish what we are trying to do or any advice
 that doesn't follow the beaten path of use partitions.

 Thanks for any feedback or suggestions --

 Brian



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Hotplug Memory boots VM with 1GB

2015-03-08 Thread Brian Hart
Ah, you're absolutely right.  It is in the wiki.  I had skipped to the
bottom part of the wiki that said 'Memory Hotplug' and overlooked the part
that referenced Linux guests specifically.  It looks like it is probably
udev rules I'm missing.  I will look at adding that in and try again.

Thank you for the follow up!

Brian



On Sun, Mar 8, 2015 at 12:16 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 I have updated the wiki last month about memory & cpu hotplug

 https://pve.proxmox.com/wiki/Hotplug_%28qemu_disk,nic,cpu,memory%29

 - Mail original -
 De: Brian Hart brianh...@ou.edu
 À: proxmoxve pve-user@pve.proxmox.com
 Envoyé: Samedi 7 Mars 2015 06:36:32
 Objet: [PVE-User] Hotplug Memory boots VM with 1GB

 Hello -
 I'm not sure if I'm running into a bug or if I'm missing something. I have
 done several tests playing with different settings for a VM and have found
 that consistently whenever I set Hotplug to allow memory it limits the
 guest VM to 1GB of memory even if I have more assigned to it at startup.
 Has anybody else seen this? Is there something special about the hotplug
 that this is expected that I'm not aware of?

 Thanks,
 Brian Hart




 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Hotplug Memory boots VM with 1GB

2015-03-06 Thread Brian Hart
Hello -

I'm not sure if I'm running into a bug or if I'm missing something.  I have
done several tests playing with different settings for a VM and have found
that consistently whenever I set Hotplug to allow memory it limits the
guest VM to 1GB of memory even if I have more assigned to it at startup.
Has anybody else seen this?  Is there something special about the hotplug
that this is expected that I'm not aware of?

Thanks,
Brian Hart
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] implementing openvswitch

2015-01-27 Thread Brian Hart
Sten,

Do you see any other indications that anything is failing in the cluster?
Things such as 'pvecm status' and 'clustat' - how do they output?  When
looking in the web interface are you able to see and connect to all nodes?
Do they show a green or red status?

I ask because I recently went round and round with clustering and Open
vSwitch, and in the end it was a minor misconfiguration on my part.  But I
have been through all the possible troubleshooting.  It SHOULD work
regardless of how the other nodes are configured with regards to OVS and
linux bridges.  I had one node configured with OVS and the other without
and they were able to talk for me without issue.  My problem came down to
an MTU mismatch between the nodes.
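
If you want to rule that out quickly, something like this on each node shows
the MTU per interface (a sketch of my own; the interface names are only
examples, use whatever your bridges and ports are called):

for i in eth0 vmbr1234; do ip link show "$i" | head -1; done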

Brian


On Tue, Jan 27, 2015 at 2:40 AM, Sten Aus sten@eenet.ee wrote:

  Hi

 I have a problem when implementing Open vSwitch. When I configure my one
 node to use OVS, all cluster work is heavily disturbed - the GUI lags and
 commands take forever to reply.
 As soon as I take the proxmox interface down, everything normalizes.

 Switches support multicast and IGMP snooping is turned off. Multicast ping
 (ssmpingd on one node and asmping on the other) works in both directions.

 Should the cluster work when all of the other nodes are using Linux bridges
 and one is using OVS? This node has been a cluster member and has used Linux
 bridges as well.

 On all nodes:
 - running kernel: 2.6.32-34-pve
 - proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
 - pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)

 On this node:
 openvswitch-switch:
   Installed: 2.3.0-1

 ovs-vsctl show output:

 # ovs-vsctl show
 dec7d5db-216b-42cf-bee7-60886d30c44e
 Bridge vmbr1234
 Port backnet
 tag: 10
 Interface backnet
 type: internal
 Port eth0
 Interface eth0
 Port proxmox
 tag: 15
 Interface proxmox
 type: internal
 Port storage84
 tag: 84
 Interface storage84
 type: internal
 Port storage88
 tag: 88
 Interface storage88
 type: internal
 Port vmbr1234
 Interface vmbr1234
 type: internal
 ovs_version: 2.3.0

 OVS is configured properly, at least I think so, because I have
 connectivity between hosts: I can ping and ssh between nodes, and the iSCSI
 connection is up and running.

 Any directions or ideas are helpful!

 Thanks!
 Sten


 On 21.01.15 15:53, Sten Aus wrote:

 Hi

 Will my cluster stay OK, if I implement openvswitch one by one for each
 node?

 Node A is empty, I've migrated all VMs to other nodes and will install
 openvswitch-switch and make config changes in /etc/network/interfaces, but
 if I don't manage to do it for all nodes today, will everything be okay and
 can I use this node?

 All of my switches are supporting multicast and it's turned on.

 Thanks!



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Open vSwitch and Clustering

2015-01-19 Thread Brian Hart
Alexandre,

Thank you for the suggestions - I'll give this a try with the switch
configuration.  We've got other proxmox clusters with several servers in
them; we just hadn't tried the OVS setup yet.  Once these two nodes are
set up, I plan to rebuild an existing cluster and add it in with these two new
ones, so I won't be leaving it as a two-node setup for long.  Thanks for the
advice!
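
As a side note (a sketch of my own, not part of Alexandre's advice): once the
third node is in, multicast between all cluster members can be verified with
omping, run at the same time on every node (hostnames are examples):

omping -c 600 -i 1 -q node1 node2 node3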

Brian



On Mon, Jan 19, 2015 at 1:01 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Hi,

 maybe it's multicast filtering related.

 OVS doesn't support igmp snooping (multicast filtering), and also doesn't
 support an igmp querier.

 Linux bridge have igmp snooping enable by default and also igmp querier.

 to get igmp snooping working, you need at least 1 igmp querier on your network.
 There are some elections when multicast igmp queriers are enabled.


 My best recommendation would be to have an igmp querier enabled on your
 physical switch, to avoid problems.


 also, you really need to have 3 nodes minimum to have quorum for cluster
 services.


 - Mail original -
 De: Brian Hart brianh...@ou.edu
 À: proxmoxve pve-user@pve.proxmox.com
 Envoyé: Vendredi 16 Janvier 2015 23:54:18
 Objet: [PVE-User] Open vSwitch and Clustering

 We got two new systems that we installed the latest version of proxmox on.
 Did an apt-get update and dist-upgrade to make sure they were on the most
 recent versions. We then created a cluster and got them talking and all
 that went fine.

 The problem came when we added OVS to one of the nodes and set it all up
 in the GUI. Once we did a reboot we have not been able to get the cluster
 communications to work since then. We have tried to revert it back
 completely - removing open vswitch packages and all and doing a stock
 config in the /etc/network/interfaces file but we cannot get multicast to
 work now between the two nodes. Nothing else changed.

 Has anybody else run across this or are there known issues with doing OVS
 and clustering? I've done a lot of google searching but have not found a
 clear answer or solution to a problem like this.

 Any help or advice is greatly appreciated!

 Thanks,
 Brian


 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Updates for 1.9

2012-09-07 Thread Hart, Brian R.
All,

I know proxmox 2.1 is the most recent but a colleague of mine has a system
that he can't currently schedule the time needed to do the full upgrade to
2.1 yet and he is still on 1.9.  He's had a few kernel dumps in the last
few months that have totally hung his systems up requiring a reboot.

We are troubleshooting and testing hardware to make sure that there isn't
some bad RAM or something, but I was wondering if it is still safe to just
do an apt-get update and apt-get upgrade to get the most recent packages
for 1.9 and not go to 2.x?  Is this a safe thing to do for now until we
can find a better opportunity to upgrade to 2.1?

Thanks,

Brian Hart


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user