Re: [PVE-User] ZFS Replication on different storage

2019-03-14 Thread Yannis Milios
Yes, it is possible...

https://pve.proxmox.com/pve-docs/chapter-pvesr.html
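One caveat worth noting: the built-in replication framework (pvesr) assumes the same storage ID exists on both nodes, so local-zfs -> zfs-data-2 is not directly supported there. A hedged sketch of one common workaround is the separate pve-zsync tool, which can target an arbitrary ZFS dataset on the other host (the VMID, IP address and pool name below are made-up examples):

```shell
# pve-zsync can replicate a guest's ZFS disks to a differently named pool.
# Install it on the source node (the package is in the PVE repositories):
apt install pve-zsync

# Create a recurring sync job for VM 100 towards the 'zfs-data-2' pool on
# host B (example IP), keeping the last 7 snapshots:
pve-zsync create --source 100 --dest 192.168.1.2:zfs-data-2 \
    --name replication-job --maxsnap 7 --verbose
```

Unlike pvesr, pve-zsync does not integrate with the GUI, but it is the usual answer when the target pool name differs.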



On Thu, 14 Mar 2019 at 11:19, Fabrizio Cuseo  wrote:

> Hello.
> I have a customer with a small cluster, 2 servers (different models).
>
> I would like to replicate VMs from host A to host B, but from local-zfs
> (host A) to "zfs-data-2" (host B).
>
> On the GUI this is not possible; is there some workaround?
>
> Regards, Fabrizio
>
>
> --
> ---
> Fabrizio Cuseo - mailto:f.cu...@panservice.it
> Direzione Generale - Panservice InterNetWorking
> Servizi Professionali per Internet ed il Networking
> Panservice e' associata AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it  mailto:i...@panservice.it
> Numero verde nazionale: 800 901492
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile


Re: [PVE-User] Restore VM Backup in new HDD with new size...

2019-02-06 Thread Yannis Milios
Since it is a Linux installation, you could try backing up the system with
fsarchiver from a live CD to an external drive, then restoring it to the
virtual disk.

http://www.fsarchiver.org/
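A rough sketch of that workflow (device names and paths below are examples): fsarchiver restores a filesystem into a partition that may be smaller than the original, as long as the data fits.

```shell
# From the live CD: archive the root filesystem to an external drive.
fsarchiver savefs /mnt/external/root.fsa /dev/sda2

# Inspect what the archive contains (filesystem ids, sizes, labels).
fsarchiver archinfo /mnt/external/root.fsa

# Boot the live CD inside the new VM, partition the smaller virtual disk,
# then restore filesystem 0 from the archive into the new partition.
fsarchiver restfs /mnt/external/root.fsa id=0,dest=/dev/vda2
```

You would still need to reinstall the bootloader and fix /etc/fstab afterwards if device names change.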

Yannis

On Wed, 6 Feb 2019 at 10:18, Gilberto Nunes 
wrote:

> Hi list
>
> I have here a VM which has 2 directly attached HDDs, each one 1 TB in
> size.
> I made a backup and the final vma file is 95 GB. I know this is
> compressed.
> My question is:
> Is there a way to restore this VM backup but create a virtual HDD smaller
> than the original disks?
> I mean, restore a VM which has a 1 TB HDD to a VM with a new HDD of
> 200 GB, for instance.
> My problem here is that I need to release these 2 physical HDDs. They
> have CentOS 7 installed, with LVM! I tried Clonezilla but it doesn't work.
>
> Thanks for any help
>
> Best
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36


Re: [PVE-User] WebUI Asking login when changing node

2018-12-29 Thread Yannis Milios
A few things I would try are ...

- Clear browser cache.
- Check if installed package versions are the same on *all* nodes
(pveversion -v).
- Restart pveproxy service on pve2 (or any other related service).
- Check the logs of pve2 when you try accessing it for any clues.

Do you get the same problem when accessing pve2 from pve3?
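If the logs point at SSL/certificate errors between the nodes (a common cause of this symptom after an upgrade), one possible fix, assuming a PVE 5.x cluster, is regenerating and redistributing the cluster certificates:

```shell
# Run on the node that keeps rejecting logins, then retry from the GUI.
pvecm updatecerts --force
systemctl restart pveproxy
```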

Yannis


On Sat, 29 Dec 2018 at 11:32, Nicola Ferrari (#554252) 
wrote:

> Hi list!
>
> We're experiencing a little problem on a 3 machines cluster, let's call
> them pve1, pve2, pve3
>
> accessing webui pointing to https://pve1:8006 shows entire cluster
> correctly..
> ssh on all nodes is working and accessing other nodes doesn't require
> credentials (so keys are working)
> live migration between nodes is working... but...
>
> clicking on pve2 (or any resource on pve2) while viewing the interface at
> pve1's IP keeps asking to re-enter login credentials!
> So to manage pve2 resources we need to point to pve2's IP, and everything
> is fine (but obviously clicking on pve1 from pve2 re-asks for login...)
>
> This has happened since the cluster upgrade from PVE 4 to PVE 5 last July.
>
> pveversion -v output:
> proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
> pve-manager: 5.2-6 (running version: 5.2-6/bcd5f008)
> pve-kernel-4.15: 5.2-4
> pve-kernel-4.15.18-1-pve: 4.15.18-17
> pve-kernel-4.13.16-2-pve: 4.13.16-48
> corosync: 2.4.2-pve5
> criu: 2.11.1-1~bpo90
> glusterfs-client: 3.8.8-1
> ksm-control-daemon: 1.2-2
> libjs-extjs: 6.0.1-2
> libpve-access-control: 5.0-8
> libpve-apiclient-perl: 2.0-5
> libpve-common-perl: 5.0-37
> libpve-guest-common-perl: 2.0-17
> libpve-http-server-perl: 2.0-9
> libpve-storage-perl: 5.0-24
> libqb0: 1.0.1-1
> lvm2: 2.02.168-pve6
> lxc-pve: 3.0.0-3
> lxcfs: 3.0.0-1
> novnc-pve: 1.0.0-2
> proxmox-widget-toolkit: 1.0-19
> pve-cluster: 5.0-29
> pve-container: 2.0-24
> pve-docs: 5.2-5
> pve-firewall: 3.0-13
> pve-firmware: 2.0-5
> pve-ha-manager: 2.0-5
> pve-i18n: 1.0-6
> pve-libspice-server1: 0.12.8-3
> pve-qemu-kvm: 2.11.2-1
> pve-xtermjs: 1.0-5
> qemu-server: 5.0-30
> smartmontools: 6.5+svn4324-1
> spiceterm: 3.0-5
> vncterm: 1.5-3
>
>
> Any hints from you experts? :)
> Thanks to everybody!
>
> Nick
>
>
> --
> +-+
> | Linux User  #554252 |
> +-+
>


Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-16 Thread Yannis Milios
That's up to you to decide. PVE supports both hyper-converged setups (where
compute and storage nodes share the same hardware) and scenarios where
compute and storage nodes are separate.
You can choose, for example, to have 3 nodes in a PVE cluster acting as
compute nodes and 3 separate nodes acting as storage nodes for the Ceph
cluster.
The main difference is that the costs involved in that scenario are much
higher compared to the hyper-converged setup.
It's also possible to manage a separate Ceph cluster from the PVE GUI (i.e.
use the web GUI for the Ceph tasks only).

Clearly, in a hyper-converged setup, since the hosts' resources are shared
between the compute and storage layers, proper design and capacity planning
are needed to avoid degraded performance.
For example, ZFS is well known to consume large amounts of RAM. In a
hyper-converged setup, if improperly configured, it may consume half of the
host's RAM, potentially leaving VMs under memory pressure.
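As a rough illustration of capping the ARC (the 4 GiB figure below is an arbitrary example; size it for your workload), the limit is set in bytes via a module option:

```shell
# Compute the cap in bytes and emit the modprobe option line; persisting it
# means writing that line to /etc/modprobe.d/zfs.conf and rebuilding the
# initramfs afterwards (update-initramfs -u).
ARC_MAX=$((4 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX}"
```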

Yannis



On Sun, 16 Dec 2018 at 13:28, Frank Thommen 
wrote:

> Hi,
>
> I understand that with the new PVE release PVE hosts (hypervisors) can
> be used as Ceph servers.  But it's not clear to me if (or when) that
> makes sense.  Do I really want to have Ceph MDS/OSD on the same hardware
> as my hypervisors?  Doesn't that a) accumulate multiple POFs on the same
> hardware and b) occupy computing resources (CPU, RAM), that I'd rather
> use for my VMs and containers?  Wouldn't I rather want to have a
> separate Ceph cluster?
>
> Or didn't I get the point of the Ceph integration?
>
> Cheers
> Frank


Re: [PVE-User] Local interface on Promox server

2018-11-25 Thread Yannis Milios
 We don’t have any client machine in the server room, so
> when we fix something in the room (cables, routing, etc…), we need to
> go out and check the VMs on another machine outside the room,
> sometimes making us come back, etc…
>

Is it really that difficult to get a laptop in the server room to manage
the servers?


> I know VMs can be controlled by command line using qemu, but is there
> another way to locally control the machines on the Proxmox server,
> except by installing a desktop manager and pointing the web browser on
> localhost:8006? Is it even safe to do that?
>

Personally I would avoid installing a full Desktop environment on the PVE
hosts. Apart from adding unnecessary load, it can also
expand the attack surface on the servers. If you insist though, I would
recommend a simple Window Manager instead,
something like Fluxbox for example.

We have a KVM in our bay, we can physically access the machines, is
> there maybe a way to physically be connected to a VM (as if we were
> physically connected to a Windows VM for instance)?
>

None that I'm aware of, but it sounds like you are trying to overcomplicate
things... :)

Yannis


Re: [PVE-User] please help setup correctly proxmox cluster

2018-10-22 Thread Yannis Milios
The previous two posts already provided you with enough tips (including a
link to the wiki) on how to troubleshoot your situation.

It's now up to you to put some effort into reading carefully what is said
there, in order first to understand and then to troubleshoot the problem.

In my opinion (and the other posters'), this is caused by some kind of
malfunction in the cluster communication. If the cluster communication is
not working properly, then you will see this kind of behaviour.
I would pay particular attention to the fact that the nodes are not "in
the same place", as you stated, hence the need to implement the VLAN
approach.


Re: [PVE-User] dual host HA solution in 5.2

2018-09-28 Thread Yannis Milios
Another option would be going cheap and adding something like this as a 3rd
node ...

https://pve.proxmox.com/wiki/Raspberry_Pi_as_third_node

On Fri, 28 Sep 2018 at 19:03, Mark Adams  wrote:

> If you have to stick with 2 servers, personally I would go for zfs as your
> storage. Storage replication using zfs in proxmox has been made super
> simple.
>
> This is asynchronous though, unlike DRBD. You would have to manually start
> your VM's should the "live" node go down and the data will be out of date
> depending on how frequently you've told it to sync. IMO, this is a decent
> setup if you are limited to 2 servers and is very simple.
>
> Then you also get the great features such as high performance snapshots
> (LVM sucks at this..), clones and even really simple replication to another
> server (IE a disaster recovery location) with pve-zsync. Not to mention all
> the other features of zfs - compression, checksumming etc (google it if you
> don't know).
>
> Regards,
> Mark
>
>
>
>
> On Fri, 28 Sep 2018 at 16:51, Woods, Ken A (DNR) 
> wrote:
>
> >
> > > On Sep 28, 2018, at 07:12, Adam Weremczuk 
> > wrote:
> > > Please advise if you have better ideas
> >
> > Buy another server.


Re: [PVE-User] blue screen on VM windows2016 with 5.1

2018-09-05 Thread Yannis Milios
:pve2:1BF0:3214AACA:5B8E8D34:qmstart:110:root@pam: VM 110
> already running
> Sep  4 15:48:37 pve2 pvedaemon[4779]:  end task
> UPID:pve2:1BA0:32149646:5B8E8D00:vncproxy:110:root@pam: OK
> Sep  4 15:48:37 pve2 pvedaemon[4545]:  starting task
> UPID:pve2:1BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam:
> Sep  4 15:48:57 pve2 pvedaemon[4545]:  end task
> UPID:pve2:1BF7:3214AB38:5B8E8D35:vncproxy:110:root@pam: OK
> Sep  4 15:48:59 pve2 pvedaemon[4779]:  starting task
> UPID:pve2:1C12:3214B388:5B8E8D4B:vncproxy:110:root@pam:
> Sep  4 15:52:43 pve2 pvedaemon[931]:  starting task
> UPID:pve2:1D4D:32150B3B:5B8E8E2B:vncproxy:110:root@pam:
> Sep  4 15:52:52 pve2 pvedaemon[931]:  starting task
> UPID:pve2:1D56:32150EA8:5B8E8E34:qmreset:110:root@pam:
> Sep  4 15:52:52 pve2 pvedaemon[931]:  end task
> UPID:pve2:1D56:32150EA8:5B8E8E34:qmreset:110:root@pam: OK
> Sep  4 15:53:57 pve2 pvedaemon[4545]:  starting task
> UPID:pve2:1DB0:321527E9:5B8E8E75:vncproxy:110:root@pam:
> Sep  4 15:53:59 pve2 pvedaemon[4545]:  end task
> UPID:pve2:1DB0:321527E9:5B8E8E75:vncproxy:110:root@pam: OK
> Sep  4 15:54:08 pve2 pvedaemon[931]:  starting task
> UPID:pve2:1DC1:32152C73:5B8E8E80:qmshutdown:110:root@pam:
> Sep  4 15:56:20 pve2 pvedaemon[4545]:  successful auth for
> user 'root@pam'
> Sep  4 15:57:20 pve2 pvedaemon[4545]:  successful auth for
> user 'root@pam'
> Sep  4 15:57:20 pve2 pvedaemon[931]:  successful auth for user
> 'root@pam'
> Sep  4 15:57:40 pve2 pvedaemon[4545]:  starting task
> UPID:pve2:1F03:32157F30:5B8E8F54:vncproxy:110:root@pam:
> Sep  4 15:59:38 pve2 pvedaemon[4545]:  end task
> UPID:pve2:1F03:32157F30:5B8E8F54:vncproxy:110:root@pam: OK
> Sep  4 15:59:43 pve2 pvedaemon[4779]:  starting task
> UPID:pve2:1FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam:
> Sep  4 15:59:44 pve2 pvedaemon[4779]:  end task
> UPID:pve2:1FB2:3215AF0C:5B8E8FCF:vncproxy:110:root@pam: OK
> I don't see any thing in syslog & kern.log
> I haven't tried switching the vdisk temporarily to IDE, but I don't know
> what else to do...
> signature Cordialement.
> Vincent MALIEN
> Le 05/09/2018 à 09:33, Yannis Milios a écrit :
> > If both VMs fail with a BSOD, then definitely something must be wrong
> > somewhere.
> > Win2016 is supported in PVE 5+, so don't think it's necessary to upgrade
> to
> > a newer version.
> > I would focus my attention on  any potential hardware issues on the
> actual
> > host (RAM,Storage etc).
> > What's your underlying storage type (RAID,SSD,HDD) ? What are the load
> > average values on the host ?
> > Any clues in the Syslog ? Have you tried switching the vdisk temporarily
> to
> > IDE (even though, I don't think that will help in your case).
> >
> >
> >
> > On Wed, 5 Sep 2018 at 08:04, Vincent Malien 
> wrote:
> >
> >> Hi pve users,
> >> I run 2 VMs using Windows 2016 which often blue screen, and today this
> >> message: guest has not initialize the display (yet)
> >> here is my config:
> >> proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
> >> pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
> >> pve-kernel-4.13.4-1-pve: 4.13.4-25
> >> libpve-http-server-perl: 2.0-6
> >> lvm2: 2.02.168-pve6
> >> corosync: 2.4.2-pve3
> >> libqb0: 1.0.1-1
> >> pve-cluster: 5.0-15
> >> qemu-server: 5.0-17
> >> pve-firmware: 2.0-3
> >> libpve-common-perl: 5.0-20
> >> libpve-guest-common-perl: 2.0-13
> >> libpve-access-control: 5.0-7
> >> libpve-storage-perl: 5.0-16
> >> pve-libspice-server1: 0.12.8-3
> >> vncterm: 1.5-2
> >> pve-docs: 5.1-12
> >> pve-qemu-kvm: 2.9.1-2
> >> pve-container: 2.0-17
> >> pve-firewall: 3.0-3
> >> pve-ha-manager: 2.0-3
> >> ksm-control-daemon: 1.2-2
> >> glusterfs-client: 3.8.8-1
> >> lxc-pve: 2.1.0-2
> >> lxcfs: 2.0.7-pve4
> >> criu: 2.11.1-1~bpo90
> >> novnc-pve: 0.6-4
> >> smartmontools: 6.5+svn4324-1
> >> zfsutils-linux: 0.7.2-pve1~bpo90
> >>
> >> qm config of 1VM:
> >> agent: 1
> >> bootdisk: scsi0
> >> cores: 4
> >> ide0: none,media=cdrom
> >> memory: 12288
> >> name: srverp
> >> net0: virtio=F2:30:F0:DE:09:1F,bridge=vmbr0
> >> numa: 0
> >> ostype: win10
> >> scsi0: local-lvm:vm-110-disk-1,discard=on,size=500G
> >> scsihw: virtio-scsi-pci
> >> smbios1: uuid=51c201a6-cd20-488c-9c89-f3f0fe4abd06
> >> sockets: 1
> >>
> >> virtio is virtio-win-0.1.141
> >> I checked the VM disk with windows tool, no error.
> >> should I upgrade to 5.2 or something else?
> >>
> >> --
> >> Cordialement.
> >> Vincent MALIEN
> >> /12 Avenue Yves Farge
> >> BP 20258
> >> 37702 St Pierre des Corps cedex 2/


Re: [PVE-User] blue screen on VM windows2016 with 5.1

2018-09-05 Thread Yannis Milios
If both VMs fail with a BSOD, then something must definitely be wrong
somewhere.
Win2016 is supported in PVE 5+, so I don't think it's necessary to upgrade
to a newer version.
I would focus my attention on any potential hardware issues on the actual
host (RAM, storage, etc.).
What's your underlying storage type (RAID, SSD, HDD)? What are the load
average values on the host?
Any clues in the syslog? Have you tried switching the vdisk temporarily to
IDE (even though I don't think that will help in your case)?



On Wed, 5 Sep 2018 at 08:04, Vincent Malien  wrote:

> Hi pve users,
> I run 2 VMs using Windows 2016 which often blue screen, and today this
> message: guest has not initialize the display (yet)
> here is my config:
> proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
> pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
> pve-kernel-4.13.4-1-pve: 4.13.4-25
> libpve-http-server-perl: 2.0-6
> lvm2: 2.02.168-pve6
> corosync: 2.4.2-pve3
> libqb0: 1.0.1-1
> pve-cluster: 5.0-15
> qemu-server: 5.0-17
> pve-firmware: 2.0-3
> libpve-common-perl: 5.0-20
> libpve-guest-common-perl: 2.0-13
> libpve-access-control: 5.0-7
> libpve-storage-perl: 5.0-16
> pve-libspice-server1: 0.12.8-3
> vncterm: 1.5-2
> pve-docs: 5.1-12
> pve-qemu-kvm: 2.9.1-2
> pve-container: 2.0-17
> pve-firewall: 3.0-3
> pve-ha-manager: 2.0-3
> ksm-control-daemon: 1.2-2
> glusterfs-client: 3.8.8-1
> lxc-pve: 2.1.0-2
> lxcfs: 2.0.7-pve4
> criu: 2.11.1-1~bpo90
> novnc-pve: 0.6-4
> smartmontools: 6.5+svn4324-1
> zfsutils-linux: 0.7.2-pve1~bpo90
>
> qm config of 1VM:
> agent: 1
> bootdisk: scsi0
> cores: 4
> ide0: none,media=cdrom
> memory: 12288
> name: srverp
> net0: virtio=F2:30:F0:DE:09:1F,bridge=vmbr0
> numa: 0
> ostype: win10
> scsi0: local-lvm:vm-110-disk-1,discard=on,size=500G
> scsihw: virtio-scsi-pci
> smbios1: uuid=51c201a6-cd20-488c-9c89-f3f0fe4abd06
> sockets: 1
>
> virtio is virtio-win-0.1.141
> I checked the VM disk with windows tool, no error.
> should I upgrade to 5.2 or something else?
>
> --
> Cordialement.
> Vincent MALIEN
> /12 Avenue Yves Farge
> BP 20258
> 37702 St Pierre des Corps cedex 2/


Re: [PVE-User] PRoxmox and ceph with just 3 server.

2018-08-31 Thread Yannis Milios
This seems like a good read as well...
https://ceph.com/geen-categorie/ceph-osd-reweight/
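For reference, a hedged sketch of the commands involved (osd.5 and the weight values are made-up examples; check the tree output first):

```shell
# Show the CRUSH hierarchy with the current weights of every OSD.
ceph osd tree

# Permanently lower the CRUSH weight of one OSD (weight roughly equals the
# disk size in TiB), so it receives proportionally less data:
ceph osd crush reweight osd.5 0.8

# Alternatively, the temporary 0..1 override discussed in the article above:
ceph osd reweight 5 0.9
```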

On Fri, 31 Aug 2018 at 12:10, Eneko Lacunza  wrote:

> You can do so from CLI:
>
> ceph osd crush reweight osd.N
>
>
> https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/
>
> El 31/08/18 a las 13:01, Gilberto Nunes escribió:
> > Thanks a lot for all this advice guys.
> > I still learn with Ceph.
> > So I have a question about how to change the weight of a certain HDD.
> > Is there a command to do that?
> >
> > Em sex, 31 de ago de 2018 05:58, Ronny Aasen 
> > escreveu:
> >
> >> when adding a older machine to your cluster, keep in mind that the
> >> slowest node with determine the overall speed of the ceph cluster (since
> >> a vm's disk will be spread all over)
> >>
> >>
> >> for RBD vm's you want low latency, so use things like
> >> nvram > ssd > hdd  with osd latency significant difference here.
> >>
> >> 100Gb/25Gb > 40Gb/10Gb (1Gb is useless in this case imho)
> >>
> >> as long as you have enough cores, higher ghz is better then lower ghz.
> >> due to lower latency
> >>
> >> kind regards.
> >> Ronny Aasen
> >>
> >>
> >>
> >> On 31. aug. 2018 00:21, Gilberto Nunes wrote:
> >>> An HPE Server will remain after deploy 3 servers with proxmox and ceph.
> >>> I thing I will use this HPE server as 4th node!
> >>>
> >>>
> >>> ---
> >>> Gilberto Nunes Ferreira
> >>>
> >>> (47) 3025-5907
> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>>
> >>> Skype: gilberto.nunes36
> >>>
> >>>
> >>>
> >>>
> >>> 2018-08-30 18:16 GMT-03:00 Ronny Aasen :
> >>>
>  if HA is important, you should consider having a 4th ceph osd server
> >> (does
>  not have to also be proxmox)
> 
>  with ceph's default of 3 replicas, that you will want to use in  a
>  production setup, you do not have any failure domain.
>  IOW the loss of any one node = a degraded ceph cluster.  if you have
> an
>  additional node, ceph will rebalance and return to HEALTH_OK on the
> >> failure
>  of a node.
> 
>  with vm's iops are important so you must keep latency to a minimum.
> 
>  both of these are explained a bit more in detail in the link he
> posted.
> 
> 
>  kind regards
>  Ronny Aasen
> 
> 
> 
>  On 30.08.2018 20:46, Gilberto Nunes wrote:
> 
> > Hi Martin.
> >
> > Not really worried about highest performance, but to know if it will
> >> work
> > properly, mainly HA!
> > I plan work with mesh network too.
> >
> > Tanks a lot
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> > 2018-08-30 15:40 GMT-03:00 Martin Maurer :
> >
> > Hello,
> >> Not really. Please read in detail the following:
> >>
> >> https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-
> >> 2018-02.41761/
> >>
> >>
> >> On 30.08.2018 16:47, Gilberto Nunes wrote:
> >>
> >> Hi there
> >>> It's possible create a scenario with 3 PowerEdge r540, with Proxmox
> >> and
> >>> Ceph.
> >>> The server has this configuration:
> >>>
> >>> 32 GB memory
> >>> SAS 2x 300 GB
> >>> SSD 1x 480 GB
> >>>
> >>> 2 VM with SQL and Windows server.
> >>>
> >>> Thanks
> >>>
> >>> ---
> >>> Gilberto Nunes Ferreira
> >>>
> >>> (47) 3025-5907
> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>>
> >>> Skype: gilberto.nunes36
> >>> --
> >> Best Regards,
> >>
> >> Martin Maurer
> >>
> >> mar...@proxmox.com
> >> http://www.proxmox.com
> >>
> >> 
> >> Proxmox Server Solutions GmbH
> >> Bräuhausgasse 37, 1050 Vienna, Austria
> >> Commercial register no.: FN 258879 f
> >> Registration office: Handelsgericht Wien
> >>

Re: [PVE-User] Snapshot rollback slow

2018-08-29 Thread Yannis Milios
Can’t comment on the I/O issues, but in regard to the snapshot rollback, I
would personally prefer to clone the snapshot instead of rolling back. It
has proven much faster for me to recover in emergencies.
Then, after recovering, to release the clone from its snapshot reference,
you can flatten the clone.
You can find this info in the Ceph docs.
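Sketched with hypothetical pool, image and snapshot names, the clone-instead-of-rollback approach looks like this on the rbd side:

```shell
# Protect the snapshot so it can be cloned, then clone it.
rbd snap protect rbd/vm-100-disk-1@pre-upgrade
rbd clone rbd/vm-100-disk-1@pre-upgrade rbd/vm-100-disk-1-restored

# Point the VM at the clone; once happy, detach it from its parent snapshot.
rbd flatten rbd/vm-100-disk-1-restored
```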



On Wed, 29 Aug 2018 at 16:56, Marcus Haarmann 
wrote:

> Hi,
>
> we have a small Proxmox cluster, on top of ceph.
> Version is proxmox 5.2.6, Ceph 12.2.5 luminous
> Hardware is 4 machines, dual Xeon E5, 128 GB RAM
> local SATA (raid1) for OS
> local SSD for OSD (2 OSD per machine, no Raid here)
> 4x 10GBit (copper) NICs
>
> We came upon the following situation:
> VM snapshot was created to perform a dangerous installation process, which
> should be revertable
> Installation was done and a rollback to snapshot was initiated (because
> something went wrong).
> However, the rollback of snapshot took > 1 hour and during this timeframe,
> the whole cluster
> was reacting very slow.
> We tried to find out the reason for this, and it looks like an I/O
> bottleneck.
> For some reason, the main I/O was done on two local OSD processes (on the
> same host where the VM was running).
> The iostat output said the data transmission rate was about 30MB/s per OSD
> disk but util was 100%. (whatever this means)
> The underlying SSD are not damaged and have a significant higher
> throughput normally.
> OSD is based on filestore/XFS (we encountered some problems with bluestore
> and decided to use filestore again)
> There are a lot of read/write operations in parallel at this time.
>
> Normal cluster operation is relatively fluent, only copying machines
> affects I/O but we can see
> transfer rates > 200 MB/s in iostat in this case. (this is not very fast
> for the SSD disks from my point of view,
> but it is not only sequential write)
> Also, I/O utilization is not near 100% when a copy action is executed.
>
> SSD and SATA disks are on separate controllers.
>
> Any ideas where to tune for better snapshot rollback performance ?
> I am not sure how the placement of the snapshot data is done from proxmox
> or ceph.
>
> Under the hood, there are rbd devices, which are snapshotted. So it should
> be up to the ceph logic
> where the snapshots are done (maybe depending on the initial layout of the
> original device ) ?
> Would the crush map influence that ?
>
> Also live backup takes snapshots as I can see. We have had very strange
> locks on running backups
> in the past (mostly gone since the disks were put on separate
> controllers).
>
> Could this be the same reason ?
>
> Another thing we found is the following (not on all hosts):
> [614673.831726] libceph: mon1 192.168.16.32:6789 session lost, hunting
> for new mon
> [614673.848249] libceph: mon2 192.168.16.34:6789 session established
> [614704.551754] libceph: mon2 192.168.16.34:6789 session lost, hunting
> for new mon
> [614704.552729] libceph: mon1 192.168.16.32:6789 session established
> [614735.271779] libceph: mon1 192.168.16.32:6789 session lost, hunting
> for new mon
> [614735.272339] libceph: mon2 192.168.16.34:6789 session established
>
> This leads to a kernel problem, which is still not solved (because not
> backported to 4.15).
> I am not sure if this is a reaction to a ceph problem or the reason for
> the ceph problem.
>
> Any thoughts on this ?
>
> Marcus Haarmann


Re: [PVE-User] windows 2008 enterprise with uefi

2018-08-28 Thread Yannis Milios
> yes :-) ("press any key to boot the windows dvd" or something like that)
>
> The "windows is loading files..." progress bar appears, and goes full,
> and then it just stops progressing. (and I waited many hours)
>

Have you tried booting the same ISO in legacy BIOS mode? If you have the
same issue there, then perhaps you should check whether the ISO file is
corrupted.
It could also be something related to the old qemu/pve version; I would try
with a more recent version. Keep the disk controller set to IDE for best
compatibility; you can change it later to SCSI or something else...
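For what it's worth, on a reasonably recent qemu-server the UEFI firmware switch is done per VM roughly like this (the VMID and storage name are examples):

```shell
# Switch the VM to the OVMF (UEFI) firmware and give it an EFI vars disk.
qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:1
```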

Y


Re: [PVE-User] windows 2008 enterprise with uefi

2018-08-28 Thread Yannis Milios
Did you interrupt the boot process on the VM by pressing ESC, in order to
select the DVD drive as the boot device ?



On Tue, 28 Aug 2018 at 09:09, lists  wrote:

> Hi,
>
> I am trying to move a physical windows 2008 enterprise uefi installation
> (Version 6.0.6002 Service Pack 2 Build 6002) into proxmox, and I'm
> getting nowhere.
>
> Tried all kinds of approaches, and this was my latest attempt:
>
> Creating a full system backup using windows backup, and then boot the
> windows install iso in proxmox, to perform a system restore from this
> backup into proxmox.
>
> But as soon as I enable uefi in my proxmox VM config, the windows iso no
> longer boots. However, the physical server IS this same OS in uefi mode,
> the combination should work, I guess.
>
> Anyone with a tip or a trick..?
>
> This is proxmox 4.4-20, so it's a bit older. I could try it on a fresh
> new proxmox 5.2 install, but first I wanted to ask here.
>
> Anyone?
>
> MJ


Re: [PVE-User] Proxmox and DRBD

2018-08-18 Thread Yannis Milios
Both drbd8 and drbd9 can be configured in PVE without issues. The former
can be configured on pairs of PVE nodes, while for the latter three PVE
nodes is the minimum. IIRC the drbd8 kmod is included by default in the PVE
kernel, so you will only need to install drbd-utils to get started.
For drbd9, follow the instructions described in Linbit's documentation. In
my opinion, enabling HA on just 2 PVE nodes, with drbd8 in dual-primary
mode, is simply too dangerous. As a minimum, proper (hardware) fencing must
be configured in both DRBD and PVE to avoid data corruption.
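For orientation only, a minimal two-node drbd8 resource might look like the fragment below (host names, IPs and backing devices are invented; this is a single-primary layout, not the dual-primary setup warned about above):

```text
# /etc/drbd.d/r0.res
resource r0 {
    protocol C;
    on pve1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on pve2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```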


Re: [PVE-User] How to use lvm on zfs ?

2018-08-07 Thread Yannis Milios
>
>  (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
>> > /dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
>> > and then an LV (lvcreate -l 100%FREE -n data pve)
>>
>>
Try the above as it was suggested to you ...


> >But I suspect I have no space to create an
>> >additional zfs volume since the one mounted on "/" occupied all the space
>
>
No, that's a wrong assumption; ZFS does not pre-allocate the whole space of
the pool, even if it looks like it does. In short, there is no need to
"shrink" the pool in order to create a zvol, as was suggested above...
Still, the whole idea of having LVM on top of ZFS/zvol is a mess, but if
you insist, it's up to you ...
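Pulling the suggested steps together as one sketch (the pool, VG and LV names are just the ones from the quote; note the LV syntax, since `-L` expects an absolute size):

```shell
# Create a 100G zvol inside the existing pool and layer LVM on top of it.
zfs create -V 100G rpool/lvm
pvcreate /dev/zvol/rpool/lvm
vgcreate pve /dev/zvol/rpool/lvm
# -l 100%FREE allocates all remaining extents of the VG to the new LV.
lvcreate -l 100%FREE -n data pve
```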
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE using the Debian ISO.
During the installation, create a Linux RAID array with LVM on top and then
add the PVE repos as described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie


Re: [PVE-User] vm----proxmox----switch---proxmox2---vm2

2018-08-03 Thread Yannis Milios
If both VMs have their vNIC attached to vmbr1, then it might also be worth
having a look at the firewall of the OS inside each VM. If it's enabled
on both sides, it could potentially block ping requests...



On Friday, August 3, 2018, Josh Knight  wrote:

> for vm1 and vm2, are the LAN interfaces on the same subnet/vlan?  A diagram
> with sample subnets might help, my initial suspect would be a ip route
> issue if they're not on the same vlan.
>
> Josh Knight
>
>
> On Fri, Aug 3, 2018 at 8:45 AM, Renato Gallo  wrote:
>
> > Hello,
> >
> > I need to separate completely the routing and traffic of WAN and LAN
> >
> > I have vm1 on proxmox1
> > I have vm2 on proxmox2
> >
> > each proxmox has a two nics one for WAN and one for LAN
> > the two proxmoxes are connected to each other on the LAN interface via a
> > switch
> >
> > The two vm's have two nics vmbr0 connected to WAN and vmbr1 connected to
> > LAN
> >
> > each of the vm's can ping both proxmoxes LAN address
> >
> > I cannot ping the other vm from none of each vm
> >
> > what am I missing ?
> >
> > Renato Gallo
> >
> > System Engineer
> > sede legale e operativa: Via San brunone, 13 - 20156 - Milano (MI)
> > Tel. +39 02 - 87049490
> > Fax +39 02 - 48677349
> > Mobile. +39 342 - 6350524
> > Wi | FreeNumbers: https://freenumbers.way-interactive.com
> > Wi | SMS: https://sms.way-interactive.com
> > Wi | Voip: https://voip.way-interactive.com
> > Asterweb: http://www.asterweb.org
> >
> > Le informazioni contenute in questo messaggio e negli eventuali allegati
> > sono riservate e per uso esclusivo del destinatario.
> > Persone diverse dallo stesso non possono copiare o distribuire il
> > messaggio a terzi.
> > Chiunque riceva questo messaggio per errore è pregato di distruggerlo e
> di
> > informare immediatamente [ mailto:i...@sigmaware.it | info@ ]
> asterweb.org
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>




Re: [PVE-User] Increase RAW Disk SIZE from UI

2018-07-27 Thread Yannis Milios
Well, that’s basic Linux sysadmin work, nothing related to PVE ...

You can find dozens of articles by googling. If you feel lazy to search,
here are some examples ...

https://access.redhat.com/articles/1190213


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/ext4grow
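
Since LVM isn't in use here, the short version for this particular layout
(GPT disk, root ext4 on /dev/sda3 — device names taken from the fdisk
output quoted below, so double-check before running anything) would be
roughly:

# grow partition 3 to the end of the disk; growpart (from the
# cloud-guest-utils package) also moves the GPT backup header for you
growpart /dev/sda 3

# then grow the ext4 filesystem, which ext4 can do online
resize2fs /dev/sda3

Alternatively the partition can be deleted and recreated at the same start
sector with fdisk/parted, but growpart is harder to get wrong.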



On Fri, 27 Jul 2018 at 17:24, Rakesh Jain  wrote:

> Hello team,
>
> We are using Virtual Environment 5.2-1 in our environment.
>
> I have Resized raw disk image from 30 GB to 112 GB. We don’t have LVM
> configured.
>
> What I did is ->
> Shutdown the VM Virtual Machine 100 (labs-provision) on node pve01 from GUI
> Increase the DISK size, Memory Size and Increased CPU cores from GUI
> Restarted the VM from GUI
>
> Now when I see the disk it shows the increased size but how can I make
> this change reflecting to /dev/sda3  ??
>
> rakesh.jain@labs-provision:~$ sudo fdisk -l
> [sudo] password for rakesh.jain:
> GPT PMBR size mismatch (67108863 != 234881023) will be corrected by
> w(rite).
> Disk /dev/sda: 112 GiB, 120259084288 bytes, 234881024 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: gpt
> Disk identifier: 5101DB8D-88D6-487F-9660-CBA9D05EE18F
>
> Device  Start  End  Sectors  Size Type
> /dev/sda12048 4095 20481M BIOS boot
> /dev/sda24096   268287   264192  129M Linux filesystem
> /dev/sda3  268288 67106815 66838528 31.9G Linux filesystem
>
> rakesh.jain@labs-provision:~$ df -hT
> Filesystem Type  Size  Used Avail Use% Mounted on
> udev   devtmpfs  2.0G 0  2.0G   0% /dev
> tmpfs  tmpfs 396M   41M  356M  11% /run
> /dev/sda3  ext4   32G   18G   12G  61% /
> tmpfs  tmpfs 2.0G 0  2.0G   0% /dev/shm
> tmpfs  tmpfs 5.0M 0  5.0M   0% /run/lock
> tmpfs  tmpfs 2.0G 0  2.0G   0% /sys/fs/cgroup
> /dev/sda2  ext2  125M   35M   84M  30% /boot
> tmpfs  tmpfs 396M 0  396M   0% /run/user/26528
> tmpfs  tmpfs 396M 0  396M   0% /run/user/37379
>
> Please let me know. Any help is appreciated.
>
> -Rakesh Jain
>
> This email and any attachments thereto may contain private, confidential,
> and/or privileged material for the sole use of the intended recipient. Any
> review, copying, or distribution of this email (or any attachments thereto)
> by others is strictly prohibited. If you are not the intended recipient,
> please contact the sender immediately and permanently delete the original
> and any copies of this email and any attachments thereto.
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-05 Thread Yannis Milios
 > Yes I realise it is, what I'm saying is should it also be doing those
> steps?

Usually you don't have to, but as things can often go wrong, you *may* have
to do things manually sometimes.
The GUI is great and saves lots of work; however, knowing how to solve
problems manually via the CLI when they arise is, in my opinion, also a
must. Especially when you deal with a complicated storage backend like Ceph.
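
Just as an illustration of the kind of manual cleanup I mean — removing a
half-created OSD by hand looks roughly like this (osd.3 is a made-up ID
here, and the exact sequence can differ between Ceph releases):

ceph osd out osd.3
systemctl stop ceph-osd@3
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm osd.3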

Y

On Thu, Jul 5, 2018 at 11:53 AM Alwin Antreich 
wrote:

> On Thu, Jul 05, 2018 at 11:05:52AM +0100, Mark Adams wrote:
> > On 5 July 2018 at 11:04, Alwin Antreich  wrote:
> >
> > > On Thu, Jul 05, 2018 at 10:26:34AM +0100, Mark Adams wrote:
> > > > Hi Anwin;
> > > >
> > > > Thanks for that - It's all working now! Just to confirm though,
> shouldn't
> > > > the destroy button handle some of these actions? or is it left out on
> > > > purpose?
> > > >
> > > > Regards,
> > > > Mark
> > > >
> > > I am not sure, what you mean exactly but the destroyosd (CLI/GUI) is
> > > doing more then those two steps.
> > >
> > >
> > Yes I realise it is, what I'm saying is should it also be doing those
> > steps?
> Well, it is doing those too. Just with the failed creation of the OSD
> not all entries are set and the destroy might fail on some (eg. no
> service, no mount).
>
> The osd create/destroy is up for a change anyway with the move from
> ceph-disk (deprecated in Mimic) to ceph-volume. Sure room for
> improvement. ;)
>
>
> --
> Cheers,
> Alwin
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Pass through usb eToken device on PX 5.2

2018-06-15 Thread Yannis Milios
>>Have plugged the token into the VM Host and used Add hardware to pass through 
>>the USB
>>token to the VM

I'm using something similar (a USB smart card reader/PKI card for user
authentication), but in my case I decided that perhaps it's better to
connect the USB token to the client machine rather than the VM host.
That way you can enable smart card redirection on the RDP client
machine, which can then pass the PKI card through to the VM guest
without the need to redirect the actual USB PKI device.
Alternatively, you can use SPICE remote-viewer and its built-in USB
device redirection option to pass the USB token through to the VM
guest. The difference in this case is that you cannot share that USB
device with both the client (host) machine and the VM guest at the same
time. Hence, the RDP option works better for me in this case.

It would be nice if smart card pass through was supported in
remote-viewer/SPICE server as well..

Y


Re: [PVE-User] Joining 5.2 node to 4.1 Cluster Not Working

2018-04-12 Thread Yannis Milios
According to the wiki [1], it is tested (and supported) to add pve5.x nodes
to a pve4.x cluster.
Perhaps you will have to upgrade the existing cluster first to the latest
4.x version (4.4), then proceed to adding the pve5.2 node(s). Moreover, it
is also stated that this should only be used as an intermediate step before
upgrading all cluster nodes to v5.2, and not as a permanent setup.

[1]
https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0#Caveats_to_know_before_you_start

On Thu, Apr 12, 2018 at 7:35 PM, JR Richardson 
wrote:

> Hi All,
>
>
>
> I'm in the middle of adding some compute nodes to an existing cluster. I
> was
> certain I could add a newer 5.x node to existing 4.1 cluster bit I get
> error
> with some script deprecations in perl.
>
>
>
> Should I build new compute nodes as 4.1, join to the cluster, then once all
> nodes are added, do the upgrade of all nodes to 5.2? Or is there a work
> around to adding a 5.2 node to a 4.1 cluster?
>
>
>
> Best practice? I have to add more compute nodes for migrating VMs around to
> release existing 4.1 compute nodes so I can do the upgrades to 5.2.
>
>
>
> Thanks.
>
>
> JR
>
>
>
> JR Richardson
>
> Engineering for the Masses
>
> Chasing the Azeotrope
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Move root ZFS to another machine

2018-02-06 Thread Yannis Milios
>> This is a hardware RAID controller, but disks are not configured as RAID.

ZFS doesn’t like RAID controllers, even when they are not configured in
RAID mode.

If you really want to use ZFS with it's power, get a proper HBA card (or a
RAID controller that can be flashed with IT - Initiator-target - firmware)
and you're good to go. Otherwise a simple SATA controller should do the job
in your case.

Things get trickier when you have to boot from a ZFS pool though; the
initramfs and other bits need to be modified to adapt to the new situation ...


Y







Re: [PVE-User] Windows Spice Client Usb drv

2018-01-03 Thread Yannis Milios
>> Seems to me that redirect USB in the same way Linux spice client does...
>> Is that correct???

Yes, that's the Windows implementation of SPICE USB redirection.


Re: [PVE-User] Post Install of Promox VE 5

2017-10-24 Thread Yannis Milios
>
>
>   I am having an issue as to when I add iscsi storage to the mix, all
> of the buttons are greyed out.


 Have you checked this first?  https://pve.proxmox.com/wiki/Storage:_iSCSI
 Are you planning to use direct mode (a separate iSCSI LUN for each VM) or
LVM (content mode)? I guess the second, since you are interested in disk
images.

I would like mine to do Disk IMage and Container but I do not not
> see anywhere that can I configure it for that.
>
>
Datacenter -> Storage -> <your storage> -> Edit -> Content
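
For reference, an LVM-over-iSCSI storage in /etc/pve/storage.cfg ends up
looking roughly like this (the names here are made up, and the iscsi: base
storage definition it sits on is left out):

lvm: iscsi-lvm
        vgname vg-iscsi
        content images,rootdir
        shared 1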

Yannis


Re: [PVE-User] problem with cluster configuration, please help

2017-10-17 Thread Yannis Milios
>> node pve already defined

To add additional nodes to the cluster, you need to run 'pvecm add
<IP-of-an-existing-node>' *on* the 2nd and 3rd node.
For example let's assume you have 3 nodes (pve1,pve2,pve3).

To create the cluster:
---

on pve1:
pvecm create pvecluster1

To add additional nodes:
---

on pve2:
pvecm add <pve1-IP>

on pve3:
pvecm add <pve1-IP>

More info here:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Create_the_cluster

and

https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node

Yannis

On Tue, Oct 17, 2017 at 9:05 AM, Жюль Верн  wrote:

> Good day ! I install three nodes of proxmox from image. And try to create a
> cluster. Command pvecm create proxtest on node1 was successful, type pvecm
> add on node2:
>
> root@pve:~# pvecm add 10.22.16.50
> The authenticity of host '10.22.16.50 (10.22.16.50)' can't be established.
> ECDSA key fingerprint is 1c:e8:06:20:76:4d:a0:89:f1:22:92:81:1f:af:b2:1b.
> Are you sure you want to continue connecting (yes/no)? yes
> root@10.22.16.50's password:
> node pve already defined
> copy corosync auth key
> stopping pve-cluster service
> Stopping pve cluster filesystem: pve-cluster.
> backup old database
> Starting pve cluster filesystem : pve-cluster.
> Starting cluster:
>Checking if cluster has been disabled at boot... [  OK  ]
>Checking Network Manager... [  OK  ]
>Global setup... [  OK  ]
>Loading kernel modules... [  OK  ]
>Mounting configfs... [  OK  ]
>Starting cman... [  OK  ]
>Waiting for quorum... [  OK  ]
>Starting fenced... [  OK  ]
>Starting dlm_controld... [  OK  ]
>Tuning DLM kernel config... [  OK  ]
>Unfencing self... [  OK  ]
> waiting for quorum...
>
> At this moment i type pvecm add ... command on node3 and get "unable to
> copy ssh ID" What i doing wrong ?
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Impossible to start VMs automatically if the proxmox hosts started without SAN volume groups at boot once

2017-09-26 Thread Yannis Milios
>
> But it's annoying that Proxmox blocks the VMs and that we can't start them
> after that, without disabling them and enable them manually.
>
>
Looks like it's designed to work like that, but there may still be some
room for fine-tuning:

https://pve.proxmox.com/wiki/High_Availability

and more specifically :


https://pve.proxmox.com/wiki/High_Availability#ha_manager_start_failure_policy


Re: [PVE-User] Openvswicth conflict with network manager on host boot [was Multiple bridge on single physical interface]

2017-09-25 Thread Yannis Milios
No, I haven't experienced this issue on my setup. Can you post your
/etc/network/interfaces file and package versions (pveversion -v) ?

On Mon, Sep 25, 2017 at 11:41 AM, Jean-mathieu CHANTREIN <
jean-mathieu.chantr...@univ-angers.fr> wrote:

> Hello.
>
> - Mail original -
> > De: "Yannis Milios" <yannis.mil...@gmail.com>
> > À: "pve-user" <pve-user@pve.proxmox.com>
> > Envoyé: Jeudi 21 Septembre 2017 12:24:48
> > Objet: Re: [PVE-User] Multiple bridge on single physical interface
>
> >>
> >> VM in subnetwork1(resp.2) on host1 must be communicate with VM in
> >> subnetwork1(resp.2) on host2 via just one single interface and my host
> must
> >> be not reacheable by subnetwork.
> >>
> >> How I can make this ?
> >>
> >>
> >>
> > Isolation at layer 2 can be achieved either by using 2 separate physical
> > network cards or by utilising VLANs.
> > I have done something similar by using openvswitch on pve. You can have a
> > look if you want:
> >
> > https://pve.proxmox.com/wiki/Open_vSwitch
>
> Thanks for you reply.
>
> I was a little afraid to put my hands in openvswitch... I have tried and
> it's totally answer to my problematic. However, I think that there is a
> conflict between network-manager and openvswitch at the boot of host:
> Network manager fails with a timeout (5 minutes) to active interfaces and
> virtual bridges.
>
> Once logged on host, I have to make this to active all interfaces:
> systemctl start networking.service # No time out anymore..., active all
> "regular" interface and "classic" bridge linux
> systemctl stop networking.service
> systemctl start networking.service # No time out again, active all
> interfaces!
>
> I have install net-tools package (apparently ifconfig is need by OVS...)
> and I have enable openvswitch-switch.service with systemctl:
> systemctl enable openvswitch-switch.service
>
> Is anyone ever encountering that?
>
> Best regards.
>
> Jean-Mathieu
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Snapshot size

2017-09-21 Thread Yannis Milios
Here is an example for vm102 on rbdpool1:

root@pve3:~# rbd du -p rbdpool1 vm-102-disk-1

warning: fast-diff map is not enabled for vm-102-disk-1. operation may be
slow.

NAME   PROVISIONED   USED
vm-102-disk-1@snap1  12288M   11852M
vm-102-disk-1@snap2  12288M204M
vm-102-disk-1@snap3  12288M536M
vm-102-disk-112288M   45056k
   12288M   12636M
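
By the way, that fast-diff warning can be fixed per image, which also makes
'rbd du' much faster. fast-diff depends on the exclusive-lock and
object-map features (enable only the ones still missing on your image),
and on an existing image the object map has to be rebuilt afterwards.
Roughly:

rbd feature enable rbdpool1/vm-102-disk-1 object-map fast-diff
rbd object-map rebuild rbdpool1/vm-102-disk-1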


On Thu, Sep 21, 2017 at 8:30 AM, Uwe Sauter  wrote:

> Hi,
>
> thanks, but I forgot to mention that all my VMs have Ceph as backend and
> thus snapshots can't be handled as "local".
>
> Regards,
>
> Uwe
>
>
> Am 21.09.2017 um 09:06 schrieb nmachk...@verdnatura.es:
> > El 2017-09-19 07:32, Uwe Sauter escribió:
> >> Hi,
> >>
> >> suppose I have several snapshots of a VM:
> >>
> >> Snap1
> >> └── Snap2
> >> └── Snap3
> >> └── Snap4
> >> └── Snap5
> >>
> >> Is there a way to determine the size of each snapshot?
> >>
> >> Regards,
> >>
> >> Uwe
> >> ___
> >> pve-user mailing list
> >> pve-user@pve.proxmox.com
> >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> > Try this at relevant node. The following is output for containers
> 300,301,302
> >
> > # zfs list -t snapshot
> > NAME  USED  AVAIL
> REFER  MOUNTPOINT
> > ctpool/subvol-300-disk-1@bind9   58.4M  -
> 453M  -
> > ctpool/subvol-300-disk-1@dhcp57.1M  -
> 455M  -
> > ctpool/subvol-301-disk-1@junto10y20  2.61M  -
> 505M  -
> > ctpool/subvol-301-disk-1@sharednetwork   3.86M  -
> 509M  -
> > ctpool/subvol-301-disk-1@junto10a30  4.63M  -
> 519M  -
> > ctpool/subvol-302-disk-1@junto10y20  1.79M  -
> 451M  -
> > ctpool/subvol-302-disk-1@sharednetwork   3.33M  -
> 454M  -
> >
> > deNada ;-)
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Multiple bridge on single physical interface

2017-09-21 Thread Yannis Milios
>
> VM in subnetwork1(resp.2) on host1 must be communicate with VM in
> subnetwork1(resp.2) on host2 via just one single interface and my host must
> be not reacheable by subnetwork.
>
> How I can make this ?
>
>
>
Isolation at layer 2 can be achieved either by using 2 separate physical
network cards or by utilising VLANs.
I have done something similar by using openvswitch on pve. You can have a
look if you want:

https://pve.proxmox.com/wiki/Open_vSwitch


Re: [PVE-User] PVE Cluster and /etc/hosts.conf

2017-09-19 Thread Yannis Milios
>
> /etc/hosts.conf has some effect over cluster and migration??
>

Yes, by default PVE will use the IP addresses specified there for cluster,
VM live migration and management traffic.


> I meant, if I have two nic, in two servers, one nic to administrative
> access and the second, working as a private network between the two nodes.
>

Ok, which subnet is configured as the management (administrative) network ?


> After I create the cluster using the pvecm create NAME, I go through second
> node and do this:
>
> pvecm add pve01
>
> I see the pvecm connect to pve01, using the internal IP, which is
> 10.1.1.10, as instate it before.
>
> So for now on, the cluster and migration traffic would ran between two
> nodes using CIDR 10.1.1.0/24.
>
> That's is correct??
>


Yes, but management traffic as well... I think you'll have to do the
opposite: configure pve01 and pve02 with the IP addresses of the management
network. Then, as the others previously mentioned, *separate* the cluster
ring and the live migration traffic by using the /etc/pve/datacenter.cfg
file. This is my understanding at least.
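
For what it's worth, the relevant datacenter.cfg line would look roughly
like this (syntax as per the PVE docs; 10.1.1.0/24 being your private
network):

# /etc/pve/datacenter.cfg
migration: secure,network=10.1.1.0/24

The corosync ring address, on the other hand, is chosen when the cluster
is created (see the -ring0_addr/-bindnet0_addr options of 'pvecm create').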

Yannis


Re: [PVE-User] Migration error!

2017-08-25 Thread Yannis Milios
My understanding is that with pvesr, live migration of a guest VM is not
supported:

"Virtual guest with active replication cannot currently use online
migration. Offline migration is supported in general"
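
So moving such a VM means doing it offline, roughly like this (VM 100 and
node prox02 as in your log):

qm shutdown 100
qm migrate 100 prox02
# then, on prox02:
qm start 100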

On Fri, 25 Aug 2017 at 16:48, Fábio Rabelo 
wrote:

> Sorry  my knowledge do not go beyond here ...
>
> I abandoned shared storage years ago for lack of trustworthy
>
>
> Fábio Rabelo
>
> 2017-08-25 10:22 GMT-03:00 Gilberto Nunes :
> > According to the design model of Proxmox Storage Replication, there is a
> > schedule to make the sync.
> > And of course, I set up the VM, I have scheduled the sync and for finish.
> > But still stuck!
> >
> >
> >
> >
> >
> > 2017-08-25 10:19 GMT-03:00 Fábio Rabelo :
> >
> >> I never used zfs on Linux .
> >>
> >> But, in the Solaris OS family, this replication must be set up
> beforehand
> >> ...
> >>
> >> Someone with some milestone with zfs on linux can confirm or deny that
> ??
> >>
> >>
> >> Fábio Rabelo
> >>
> >> 2017-08-25 10:11 GMT-03:00 Gilberto Nunes :
> >> > So.. One of the premise of the ZFS Replication volume, is to replicate
> >> > local volume to another node.
> >> > Or am I wrong?
> >> >
> >> >
> >> > Obrigado
> >> >
> >> > Cordialmente
> >> >
> >> >
> >> > Gilberto Ferreira
> >> >
> >> > Consultor TI Linux | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
> >> > Zimbra Mail Server
> >> >
> >> > (47) 3025-5907
> >> > (47) 99676-7530
> >> >
> >> > Skype: gilberto.nunes36
> >> >
> >> >
> >> > konnectati.com.br 
> >> >
> >> >
> >> > https://www.youtube.com/watch?v=dsiTPeNWcSE
> >> >
> >> >
> >> > 2017-08-25 10:07 GMT-03:00 Fábio Rabelo :
> >> >
> >> >> this entry :
> >> >>
> >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1':
> can't
> >> >> live migrate attached local disks without with-local-disks option
> >> >>
> >> >> Seems to be the responsable .
> >> >>
> >> >> Local disk ?
> >> >>
> >> >> where this image are stored ?
> >> >>
> >> >>
> >> >> Fábio Rabelo
> >> >>
> >> >> 2017-08-25 9:36 GMT-03:00 Gilberto Nunes  >:
> >> >> > If I turn off the VM, migrate goes on.
> >> >> > But make offline migration is out of the question!!!
> >> >> >
> >> >> >
> >> >> >
> >> >> > 2017-08-25 9:28 GMT-03:00 Gilberto Nunes <
> gilberto.nune...@gmail.com
> >> >:
> >> >> >
> >> >> >> Hi again
> >> >> >>
> >> >> >> I try remove all replication jobs and image files from target
> node...
> >> >> >> Still get critical error:
> >> >> >>
> >> >> >> qm migrate 100 prox02 --online
> >> >> >> 2017-08-25 09:24:43 starting migration of VM 100 to node 'prox02'
> >> >> >> (10.1.1.20)
> >> >> >> 2017-08-25 09:24:44 found local disk 'stg:vm-100-disk-1' (in
> current
> >> VM
> >> >> >> config)
> >> >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1':
> >> can't
> >> >> >> live migrate attached local disks without with-local-disks option
> >> >> >> 2017-08-25 09:24:44 ERROR: Failed to sync data - can't migrate VM
> -
> >> >> check
> >> >> >> log
> >> >> >> 2017-08-25 09:24:44 aborting phase 1 - cleanup resources
> >> >> >> 2017-08-25 09:24:44 ERROR: migration aborted (duration 00:00:02):
> >> Failed
> >> >> >> to sync data - can't migrate VM - check log
> >> >> >> migration aborted
> >> >> >> prox01:~# qm migrate 100 prox02 --online --with-local-disks
> >> >> >> 2017-08-25 09:24:58 starting migration of VM 100 to node 'prox02'
> >> >> >> (10.1.1.20)
> >> >> >> 2017-08-25 09:24:58 found local disk 'stg:vm-100-disk-1' (in
> current
> >> VM
> >> >> >> config)
> >> >> >> 2017-08-25 09:24:58 copying disk images
> >> >> >> 2017-08-25 09:24:58 ERROR: Failed to sync data - can't live
> migrate
> >> VM
> >> >> >> with replicated volumes
> >> >> >> 2017-08-25 09:24:58 aborting phase 1 - cleanup resources
> >> >> >> 2017-08-25 09:24:58 ERROR: migration aborted (duration 00:00:01):
> >> Failed
> >> >> >> to sync data - can't live migrate VM with replicated volumes
> >> >> >> migration aborted
> >> >> >> prox01:~# pvesr status
> >> >> >> JobID  EnabledTarget   LastSync
> >> >> >>   NextSync   Duration  FailCount State
> >> >> >> 100-0  Yeslocal/prox02  2017-08-25_09:25:01
> >> >> >>  2017-08-25_12:00:00  15.200315  0 OK
> >> >> >>
> >> >> >> Somebody help me!
> >> >> >>
> >> >> >> Cheers
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> 2017-08-24 9:55 GMT-03:00 Gilberto Nunes <
> gilberto.nune...@gmail.com
> >> >:
> >> >> >>
> >> >> >>> Well...
> >> >> >>> I will try it
> >> >> >>>
> >> >> >>> Thanks
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>> 2017-08-24 4:37 GMT-03:00 Dominik Csapak :
> >> >> >>>
> >> >>  On 08/23/2017 08:50 PM, Gilberto Nunes wrote:
> >> >> 
> >> >> > more info:
> >> >> >
> >> >> >
> >> >> > pvesr status
> >> >> > 

Re: [PVE-User] Random kernel panics of my KVM VMs

2017-08-15 Thread Yannis Milios
Have you tried changing the VM's SCSI controller to something different
from LSI? Does that help?
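
If you want to test that quickly, the controller type can also be switched
from the CLI with the VM shut down (VMID 100 is just an example; recent
Linux guests already ship the virtio drivers needed for virtio-scsi):

qm set 100 --scsihw virtio-scsi-pci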

Yannis

On Tue, 15 Aug 2017 at 08:02, Bill Arlofski  wrote:

>
> Hello everyone.
>
> I am not sure this is the right place to ask, but I am also not sure where
> to
> start, so this list seemed like a good place. I am happy for any direction
> as
> to the best place to turn to for a solution. :)
>
> For quite some time now I have been having random kernel panics on random
> VMs.
>
> I have a two-node cluster, currently running a pretty current PVE version:
>
> PVE Manager Version pve-manager/5.0-23/af4267bf
>
> Now, these kernel panics have continued through several VM kernel upgrades,
> and even continue after the 4.x to 5.x Proxmox upgrade several weeks ago.
> In
> addition, I have moved VMs from one Proxmox node to the other to no avail,
> ruling out hardware on one node or the other.
>
> Also, it does not matter if the VMs have their (QCOW2) disks on the Proxmox
> node's local hardware RAID storage or the Synology NFS-connected storage
>
> I am trying to verify this by moving a few VMs that seem to panic more
> often
> than others back to some local hardware RAID storage on one node as I write
> this email...
>
> Typically the kernel panics occur during the nightly backups of the VMs,
> but I
> cannot say that this is always when they occur. I _can_ say that the kernel
> panic always reports the sym53c8xx_2 module as the culprit though...
>
> I have set up remote kernel logging on one VM and here is the kernel panic
> reported:
>
> 8<
> [138539.201838] Kernel panic - not syncing: assertion "i &&
> sym_get_cam_status(cp->cmd) == DID_SOFT_ERROR" failed: file
> "drivers/scsi/sym53c8xx_2/sym_hipd.c", line 3399
> [138539.201838]
> [138539.201838] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.9.34-gentoo #5
> [138539.201838] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014
> [138539.201838]  88023fd03d90 813a2408 8800bb842700
> 81c51450
> [138539.201838]  88023fd03e10 8111ff3f 88020020
> 88023fd03e20
> [138539.201838]  88023fd03db8 813c70f3 81c517b0
> 81c51400
> [138539.201838] Call Trace:
> [138539.201838]   [138539.201838]  []
> dump_stack+0x4d/0x65
> [138539.201838]  [] panic+0xca/0x203
> [138539.201838]  [] ? swiotlb_unmap_sg_attrs+0x43/0x60
> [138539.201838]  [] sym_interrupt+0x1bff/0x1dd0
> [138539.201838]  [] ? e1000_clean+0x358/0x880
> [138539.201838]  [] sym53c8xx_intr+0x37/0x80
> [138539.201838]  [] __handle_irq_event_percpu+0x38/0x1a0
> [138539.201838]  [] handle_irq_event_percpu+0x1e/0x50
> [138539.201838]  [] handle_irq_event+0x27/0x50
> [138539.201838]  [] handle_fasteoi_irq+0x89/0x160
> [138539.201838]  [] handle_irq+0x6e/0x120
> [138539.201838]  [] ?
> atomic_notifier_call_chain+0x15/0x20
> [138539.201838]  [] do_IRQ+0x46/0xd0
> [138539.201838]  [] common_interrupt+0x7f/0x7f
> [138539.201838]   [138539.201838]  [] ?
> default_idle+0x1b/0xd0
> [138539.201838]  [] arch_cpu_idle+0xa/0x10
> [138539.201838]  [] default_idle_call+0x1e/0x30
> [138539.201838]  [] cpu_startup_entry+0xd5/0x1c0
> [138539.201838]  [] start_secondary+0xe8/0xf0
> [138539.201838] Shutting down cpus with NMI
> [138539.201838] Kernel Offset: disabled
> [138539.201838] ---[ end Kernel panic - not syncing: assertion "i &&
> sym_get_cam_status(cp->cmd) == DID_SOFT_ERROR" failed: file
> "drivers/scsi/sym53c8xx_2/sym_hipd.c", line 3399
> 8<
>
> The dmesg output on the Proxmox nodes' does not show any issues during the
> times of these VM kernel panics.
>
> I appreciate any comments, questions, or some direction on this.
>
> Thank you,
>
> Bill
>
>
> --
> Bill Arlofski
> Reverse Polarity, LLC
> http://www.revpol.com/blogs/waa
> ---
> He picks up scraps of information
> He's adept at adaptation
>
> --[ Not responsible for anything below this line ]--
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] Cluster won't reform so I can't restart VMs

2017-08-11 Thread Yannis Milios
Since there were no config changes, I would have a look at the cluster
communication, e.g. switch issues?
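
A few quick checks to run on each node (hostnames taken from your output;
omping needs to be installed separately and is the usual way to verify
multicast is working across the switch):

pvecm status
corosync-cfgtool -s
omping -c 10000 -i 0.001 -F -q ar1406 ar1407 ar1600 ar1601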



On Fri, Aug 11, 2017 at 11:02 AM, Chris Tomkins 
wrote:

> Hi Proxmox users,
>
> I have a 4 node cluster. It has been in production for a few months with
> few/no issues.
>
> This morning one of my admins reported that each node appeared isolated
> ("Total votes:  1"). All VMs were up and unaffected. Unfortunately I
> made the mistake of stopping VMs on 3 of the nodes to apply updates and
> reboot as I assumed this would clear the issue. Now the scenario remains
> the same but the VMs are down on 3 of the nodes and it won't allow me to
> start them as I have no quorum.
>
> No config changes were made and this cluster was fine and had quorum last
> time I looked (last week).
>
> I don't want to take the wrong action and make this worse - any advice
> would be greatly appreciated!
>
> hypervisors ar1406/ar1600/ar1601 are up to date and have been rebooted this
> morning. ar1407 has not been rebooted or updated (yet) as the VMs on it are
> critical.
>
> Thanks,
>
> Chris
>
> [LIVE]root@ar1406:~# for i in ar1406 ar1407 ar1600 ar1601; do ssh $i 'cat
> /etc/pve/.members'; done
> {
> "nodename": "ar1406",
> "version": 3,
> "cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate":
> 0 },
> "nodelist": {
>   "ar1407": { "id": 2, "online": 0},
>   "ar1601": { "id": 3, "online": 0},
>   "ar1600": { "id": 4, "online": 0},
>   "ar1406": { "id": 1, "online": 1, "ip": "10.0.6.201"}
>   }
> }
> {
> "nodename": "ar1407",
> "version": 3,
> "cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate":
> 0 },
> "nodelist": {
>   "ar1407": { "id": 2, "online": 1, "ip": "10.0.6.202"},
>   "ar1601": { "id": 3, "online": 0},
>   "ar1600": { "id": 4, "online": 0},
>   "ar1406": { "id": 1, "online": 0}
>   }
> }
> {
> "nodename": "ar1600",
> "version": 3,
> "cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate":
> 0 },
> "nodelist": {
>   "ar1407": { "id": 2, "online": 0},
>   "ar1601": { "id": 3, "online": 0},
>   "ar1600": { "id": 4, "online": 1, "ip": "10.0.6.203"},
>   "ar1406": { "id": 1, "online": 0}
>   }
> }
> {
> "nodename": "ar1601",
> "version": 3,
> "cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate":
> 0 },
> "nodelist": {
>   "ar1407": { "id": 2, "online": 0},
>   "ar1601": { "id": 3, "online": 1, "ip": "10.0.6.204"},
>   "ar1600": { "id": 4, "online": 0},
>   "ar1406": { "id": 1, "online": 0}
>   }
> }
>
> --
>
> Chris Tomkins
>
> Brandwatch | Senior Network Engineer (Linux/Network)
>
> chr...@brandwatch.com | (+44) 01273 448 949
>
> @Brandwatch
>
> New York  |  San Francisco  |  Brighton  |  Singapore  |  Berlin |
>  Stuttgart
>
>
> Discover how organizations are using Brandwatch to create their own success
> 
>
>
> Email disclaimer 
>
>
> [image: bw-signature logo.png]
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


Re: [PVE-User] ZFS Checksum Error in VM but not on Host

2017-07-24 Thread Yannis Milios
>
>
>> 2 reason for this, first having checksums^^, second snapshots.
> And I prefer ZFS over any other filesystem.
>
>
Whats the reason why ZFS is not good in a VM?


IMHO that's a waste of system resources. Since your VM disk already lies on
a ZFS filesystem, where it can leverage all the features you mentioned
(checksums, snapshots etc.), what's the point of having ZFS inside the VM
at the same time? ZFS is not just another f/s; it consumes a lot of
resources, particularly RAM. Of course I'm not saying it's not doable, but
I would use it in a VM just for testing stuff...

>


> I understand the error and the solution, but not really why it happen. In
> the meantime I got an answer from Wolfgang Link who thinks it could be a
> bit flip in memory...
>
>
Usually checksum errors are caused by RAM issues (other factors could be
damaged SATA cables, among others). Are you using ECC RAM on the server? I
would suggest you post this question on the ZoL mailing list; there you
can get much better feedback about the pros and cons.

Yannis


Re: [PVE-User] ZFS Checksum Error in VM but not on Host

2017-07-24 Thread Yannis Milios
Hello,

>> RAIDZ1 (2 Disks) -> qemu -> ZFS (1 Disk)


Is there any particular reason for having this kind of setup? In general,
using ZFS inside a VM is not recommended.


>>  NAME    STATE   READ WRITE CKSUM
>>  backup  ONLINE     0     0    13
>>    sdc   ONLINE     0     0    26

>> errors: Permanent errors have been detected in the following files:

>> /battlefield/backup/kunde/server/data/d7/d79c0feb29ef024ce0164253ee08e6daa986bd1d599f4640167de2c3d7828524

Apparently you had checksum errors which led to corruption of that file.
Since this pool is not redundant, you will have to delete the file, restore
it from a backup and then scrub the pool.
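The recovery steps above can be sketched as follows; this is a hedged outline for a non-redundant pool named `backup`, and the file path is a placeholder for the damaged file reported by `zpool status`:

```shell
# List the files flagged with permanent errors
zpool status -v backup

# Delete the corrupted file (placeholder path), then restore it
# from a known-good backup
rm /path/to/damaged/file

# Re-check the whole pool and, once it comes back clean,
# reset the error counters
zpool scrub backup
zpool status backup
zpool clear backup
```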

Yannis

On Mon, Jul 24, 2017 at 11:11 AM, Harald Leithner 
wrote:

> Hi,
>
> I'm not sure if this is Proxmox/Qemu related but I try it here.
>
> We have a VM on a ZFS Pool with Proxmox kernel for ZFS, so the result is
>
> RAIDZ1 (2 Disks) -> qemu -> ZFS (1 Disk)
>
> We got 2 Mails from ZFS inside the VM:
>
> ---
>
> ZFS has detected a checksum error:
>
>eid: 37
>  class: checksum
>   host: backup
>   time: 2017-07-21 15:07:59+0200
>  vtype: disk
>  vpath: /dev/sdc1
>  vguid: 0x003AC1491C2AC7D2
>  cksum: 1
>   read: 0
>  write: 0
>   pool: backup
>
> ---
>
> ZFS has detected a data error:
>
>eid: 36
>  class: data
>   host: backup
>   time: 2017-07-21 15:07:59+0200
>   pool: backup
>
> ---
>
> The status of the zfs pool inside the VM:
>
> ---
>
>  zpool status -v
>   pool: backup
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://zfsonlinux.org/msg/ZFS-8000-8A
>   scan: scrub repaired 0 in 0h42m with 0 errors on Sun Jul  9 01:06:26 2017
> config:
>
> NAME    STATE   READ WRITE CKSUM
> backup  ONLINE     0     0    13
>   sdc   ONLINE     0     0    26
>
> errors: Permanent errors have been detected in the following files:
>
>
> /battlefield/backup/kunde/server/data/d7/d79c0feb29ef024ce0164253ee08e6daa986bd1d599f4640167de2c3d7828524
>
> ---
>
> But on the host there is no error:
>
>  zpool status -v
>
>   pool: slow
>  state: ONLINE
>   scan: scrub in progress since Fri Jul 21 15:45:13 2017
> 54.0G scanned out of 486G at 60.8M/s, 2h1m to go
> 0 repaired, 11.11% done
> config:
>
> NAME        STATE   READ WRITE CKSUM
> slow        ONLINE     0     0     0
>   mirror-0  ONLINE     0     0     0
>     sda2    ONLINE     0     0     0
>     sdb2    ONLINE     0     0     0
>
> errors: No known data errors
>
>
> (Scrub is also finished with no errors)
>
> ---
>
> HOST:
>
>  pveversion --verbose
> proxmox-ve: 5.0-16 (running kernel: 4.10.15-1-pve)
> pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
> pve-kernel-4.10.15-1-pve: 4.10.15-15
> pve-kernel-4.4.35-1-pve: 4.4.35-77
> pve-kernel-4.10.8-1-pve: 4.10.8-7
> pve-kernel-4.4.59-1-pve: 4.4.59-87
> pve-kernel-4.10.11-1-pve: 4.10.11-9
> pve-kernel-4.10.17-1-pve: 4.10.17-16
> libpve-http-server-perl: 2.0-5
> lvm2: 2.02.168-pve2
> corosync: 2.4.2-pve3
> libqb0: 1.0.1-1
> pve-cluster: 5.0-12
> qemu-server: 5.0-14
> pve-firmware: 2.0-2
> libpve-common-perl: 5.0-16
> libpve-guest-common-perl: 2.0-11
> libpve-access-control: 5.0-5
> libpve-storage-perl: 5.0-12
> pve-libspice-server1: 0.12.8-3
> vncterm: 1.5-2
> pve-docs: 5.0-9
> pve-qemu-kvm: 2.9.0-2
> pve-container: 2.0-14
> pve-firewall: 3.0-2
> pve-ha-manager: 2.0-2
> ksm-control-daemon: 1.2-2
> glusterfs-client: 3.8.8-1
> lxc-pve: 2.0.8-3
> lxcfs: 2.0.7-pve2
> criu: 2.11.1-1~bpo90
> novnc-pve: 0.6-4
> smartmontools: 6.5+svn4324-1
> zfsutils-linux: 0.6.5.9-pve16~bpo90
>
> ---
>
> VM:
>
> Linux backup 4.10.15-1-pve #1 SMP PVE 4.10.15-12 (Mon, 12 Jun 2017
> 11:18:07 +0200) x86_64 GNU/Linux
>
> zfsutils-linux: 0.6.5.9-pve16~bpo90
>
>
> Some hints would be very appreciated!
>
> bye
> Harald
>
>
> --
> Harald Leithner
>
> ITronic
> Wiedner Hauptstraße 120/5.1, 1050 Wien, Austria
> Tel: +43-1-545 0 604
> Mobil: +43-699-123 78 4 78
> Mail: leith...@itronic.at | itronic.at
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Yannis Milios
>> (storage server has 4x4TB SAS
>> drives in RAID10 configured with MDADM)

Have you checked whether these drives are properly aligned? Sometimes
misalignment can cause low r/w performance.
Is there any particular reason you use mdadm instead of a hardware RAID
controller?
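A quick way to check alignment and the state of the mdadm set from the shell; a hedged sketch, the device names are examples for the setup described above:

```shell
# Report whether partition 1 on the member disk is optimally aligned
parted /dev/sda align-check optimal 1

# The I/O sizes the kernel derived for the array
cat /sys/block/md0/queue/optimal_io_size

# Chunk size, layout and health of the RAID10 set
mdadm --detail /dev/md0
```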

Yannis
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] btrfs in a guest

2017-07-17 Thread Yannis Milios
>
> >> Personally I'd go with zfs over btrf.
>
>> Interesting. I see that also with zfs, you can expose previous versions
via samba.

>> You prefer zfs, because..? (The "more mature" argument, or other reasons
as well..? perhaps specific to running on Qemu VM on ceph >> storage?)

I would go for ZFS in that scenario, but I definitely wouldn't try to use
it inside a VM. I would prefer a physical server running a Linux distro and
ZoL for ZFS, or maybe FreeNAS + Samba to expose the shares to clients. You
could also use a second server as a ZFS sync target for failover purposes.

Yannis

On Mon, Jul 17, 2017 at 12:20 PM, lists  wrote:

> Hi Lindsay,
>
> Thanks for your reply.
>
> On 17-7-2017 1:04, Lindsay Mathieson wrote:
>
>> The Samba server is a Qemu VM?
>>
> yes.
>
> The backing filesystem (Ceph) should be irrelevant to whatever filesystem
>> you use in the VM.
>>
> Yes, I realise that. I know it's possible, and btrfs and xfs also seem to
> perform (after some brief testing) similarly. But there is a lot of
> discussion about "CoW penalty".
>
> And that's why I'm asking.
>
> For what it's worth: Our ceph has xfs OSDs.
>
> So, should I worry about this CoW penalty or not really?
>
> Personally I'd go with zfs over btrf.
>>
> Interesting. I see that also with zfs, you can expose previous versions
> via samba.
>
> You prefer zfs, because..? (The "more mature" argument, or other reasons
> as well..? perhaps specific to running on Qemu VM on ceph storage?)
>
> MJ
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox DRBD9 primary/secondary setup?

2017-05-15 Thread Yannis Milios
>
>
> We can see that the vm's get replicated but we have no clue on which nodes
> is primary/secondary.
>

The resource is in Primary mode on the node where the actual VM is running.
The rest of the replicated resources are (should be) in Secondary mode.
The only time a resource is Primary on more than one node is during the
live migration process. After a successful live migration, one of the
resources switches back to Secondary mode. You can observe this behaviour
on the CLI by using the drbdadm or drbd-overview commands.
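To see the roles at a glance, something like the following works; a hedged sketch — the resource name is illustrative and the exact output format depends on your DRBD userland version:

```shell
# One-line summary per resource, e.g. "... Primary/Secondary ..."
drbd-overview

# Local/peer role for a single resource (name is a placeholder)
drbdadm role vm-100-disk-1

# Low-level connection and role state straight from the kernel module
cat /proc/drbd
```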

>
> And if we in the webgui press Edit on drbdstorage under the storage tab
> nothing happens - is this normal?
>

Yes, it's normal. PVE does not support DRBD management via the GUI. You
need to use the usual DRBD CLI commands to manage the cluster.


>
> Have we misunderstood anything during the setup of Proxmox with DRBD9, and
> should we (if we can) downgrade to drbd8, to get a setup like ganeti where
> we know which node contains what.
>

PVE dropped support for DRBD9 after the change of the licensing model for
drbdmanage. I think they are going to revert to DRBD 8 in the upcoming
version of PVE (5). More info about these topics here:

https://pve.proxmox.com/wiki/DRBD9

LINBIT has a dedicated DRBD9 repository for PVE users:

https://docs.linbit.com/doc/users-guide-90/s-proxmox-install/

Yannis
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Error - ovs wrong mtu

2017-04-19 Thread Yannis Milios
Hello

I found this in a previous post; I'm not sure if the issue has been solved
by now:

http://pve.proxmox.com/pipermail/pve-user/2016-November/167784.html

Yannis


On Tue, 18 Apr 2017 at 09:55, Tobias Kropf  wrote:

> Hi @ list
>
> We have a problem with our pve-ovs setup...
>
> The setup is based on:
>
> https://pve.proxmox.com/wiki/Open_vSwitch#Example_2:_Bond_.2B_Bridge_.2B_Internal_Ports
> - Example 2
>
> We use 7 pve nodes with ceph and 2x10Gbit/s ovs-bond uplink on all
> nodes. It works fine but suddenly... the ceph vlan interface update from
> mtu 9000 to mtu 1500 and ceph block request. The incoming packet size is
> than bigger as 1500 and ceph blocked requests...
>
> Have anyone the same errors with this setup?
>
> thanks
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Windows Server 2003 HP 6400 RAID

2017-03-24 Thread Yannis Milios
You just need to edit the config file of the VM and change the line(s)
related to the VM disk from scsiX to ideX.

Posting the VM config would help...
Generally speaking, Windows is a bit painful to virtualise.
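As a hedged illustration of that change (VMID 100 and the volume name are placeholders), the edit happens in /etc/pve/qemu-server/<vmid>.conf:

```shell
# Before: disk attached via the virtual SCSI controller
#   scsi0: local-lvm:vm-100-disk-1,size=32G
# After: the same volume attached as an IDE disk, which the stock
# Windows 2003 drivers can boot from
#   ide0: local-lvm:vm-100-disk-1,size=32G
sed -i 's/^scsi0:/ide0:/' /etc/pve/qemu-server/100.conf

# If the config has a "boot:"/"bootdisk:" line referencing scsi0,
# adjust it to ide0 as well
```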


On Fri, 24 Mar 2017 at 18:48,  wrote:

>   So, i originally set these up as SCSI drives in Proxmox. Since I am
> using LVM-Thin, I don't think I can change the hard drive type? Does
> this mean that I have to create another disk and dd it in? Do I need
> to set the bs=1024?
>
> I will look into the IDE drivers.
>
> Thanks,
>
> Quoting Alessandro Briosi :
>
> > Il 24/03/2017 13:57, Hexis ha scritto:
> >> I recently did a P2V onto LVM thin from a Windows 2003 server running
> >> on an old HP Proliant with a HP 6400 U-SCSI RAID. I used dd | ssh to
> >> accomplish it, and then of=the actual lvm volume. This completed
> >> successfully, however, I cannot seem to get SCSI drivers working, so
> >> it simply boots to the windows loading screen and crashes... I am
> >> hesitant for obvious reasons to do too much tampering with the host OS
> >> pre-migration.
> >>
> >> Does anyone have experience doing this?
> >
> > Personally this is how I proceed in such cases (if the original server
> > cannot boot).
> >
> > 1. Connect the HD as IDE drives in the VM
> > 2. Try to boot windows
> > 3. If it starts then go to point 7
> > 4. If it does not start, then I start Windows with an ISO of the OS and
> > run in recovery mode. This allows you to launch a registry editor and
> > load a registry hive.
> >   an alternative is to install a temporary windows into another disk,
> > boot from there and load the registry of the server.
> > 5. In the registry I go and enable the ide drivers to start at boot
> > (HKLM\System\... look if up in the internet)
> > 6. Then I boot windows (hopefully the change did it's magic)
> > 7 I install virtio drivers and then migrate the disks to use virtio.
> >
> > That's roughly the procedure.
> > Of course this could be avoided if the original server is working by
> > merging directly the mergeide.reg file before doing the p2v conversion.
> >
> > Hope this helps,
> > Alessandro
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.comhttp://
> pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Web GUI: connection reset by peer (596)

2017-02-24 Thread Yannis Milios
In my opinion this is related to difficulties in cluster communication.
Have a look at these notes:

https://pve.proxmox.com/wiki/Multicast_notes
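The linked notes suggest verifying multicast connectivity with omping. A hedged sketch, run simultaneously on every node, using the node names from your mail as examples:

```shell
# Short burst test: 10000 packets at 1 ms intervals between all nodes
omping -c 10000 -i 0.001 -F -q px-a px-b px-c px-d

# Sustained ~10 minute test, useful for catching IGMP snooping
# timeouts on the switch (multicast dying after a few minutes)
omping -c 600 -i 1 -q px-a px-b px-c px-d
```

Any packet loss reported for multicast (but not unicast) usually points at the switch configuration rather than the nodes themselves.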



On Fri, 24 Feb 2017 at 22:45, Uwe Sauter  wrote:

> Hi,
>
> no I didn't think about that.
>
> I now tried and restarted pveproxy afterwards but to no avail.
>
> Can you explain why you thought that this might help?
>
>
> Regards,
>
> Uwe
>
>
> Am 24.02.2017 um 21:28 schrieb Gilberto Nunes:
> > Hi
> >
> > Did you try to execute:
> >
> > pvecm updatecerts
> >
> > in every nodes???
> >
> > 2017-02-24 15:04 GMT-03:00 Uwe Sauter >:
> >
> > Hi,
> >
> > I have a GUI problem with a four node cluster that I installed
> recently. I was able
> > to follow this up to ext-all.js but I'm no web developer so this is
> where I got stuck.
> >
> > Background:
> > * four node cluster
> > * each node has two interfaces in use
> > ** eth0 is  1Gb used for management and some VM traffic
> > ** eth2 is 10Gb used for cluster synchronization, Ceph and more VM
> traffic
> > * host names are resolved via /etc/hosts
> > * let's call the nodes px-a, px-b, px-c, px-d
> > * Proxmox version 4.4-12/e71b7a74
> >
> >
> > Problem:
> > When I access the cluster via the web GUI on px-a I can view all
> info regarding px-a
> > without any problems. If I try to view infos regarding the other
> nodes I almost every
> > time I get "connection reset by peer (596)".
> > If I access the cluster GUI on px-b I can view this node's info but
> not the info of the
> > other nodes.
> >
> > I started to migrate VMs to the cluster today. Before that, when the
> cluster had no
> > VMs running, the access between nodes worked without problem.
> >
> >
> > Debugging:
> > I was able to trace this using Chrome's developer tools up to the
> point where
> > some method inside ext-all.js fails with said "connection reset by
> peer".
> >
> > Detail using pretty formatted version of ext-all.js:
> >
> > Object (?) Ext.cmd.derive("Ext.data.request.Ajax",
> Ext.data.request.Base begins at line 11370
> >
> > Method "start" begins at line 11394
> >
> > Error occurs at line 11409 "h.send(e);"
> >
> >
> > I don't know what causes h.send(e) to fail. Any suggestions what
> could cause this or how to
> > debug further is appreciated.
> >
> > Regards,
> >
> > Uwe
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com 
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user <
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user>
> >
> >
> >
> >
> > --
> >
> > Gilberto Ferreira
> > +55 (47) 99676-7530
> > Skype: gilberto.nunes36
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] TASK ERROR: VM quit/powerdown failed

2017-01-30 Thread Yannis Milios
Hello,

Maybe these two links can help?


https://pve.proxmox.com/wiki/Qemu-guest-agent

https://pve.proxmox.com/wiki/Acpi_kvm

Yannis



On Mon, 30 Jan 2017 at 23:47, Leonardo Dourado <
leonardo.dour...@itrace.com.br> wrote:

> Hello guys!
>
> I am trying to run a programmed backup (STOP Mode) and I believe it
> requires the shutdown of the VM (Windows Server 2008 R2)... For some reason
> it's not working, I get the message "TASK ERROR: VM quit/powerdown failed"
> when it tries to run...
>
> I have installed the VirtIO drivers on the guest (The QEMU is also
> enabled) . If the machine is off, the backup goes perfectly.
>
> Any help is very welcome!
>
> Leonardo D.
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Recovery Disaster - What is the best practice...?

2017-01-26 Thread Yannis Milios
Your question is quite generic, because the answer depends on how you have
configured your PVE, what the storage backends are, etc.
I will assume that you have one PVE server with VMs stored on local
storage like LVM or ZFS. You also have a NAS where you store the (full)
backups of these VMs, and the NAS is configured in PVE as an NFS target.
If your PVE server dies, the only thing needed is to mount the NAS as an
NFS target on the new PVE server and restore the backups there. That could
take some time depending on the size of the VMs.
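On the replacement server, that recovery boils down to two commands. A hedged sketch — the storage name, NAS address, export path and archive filename are all examples:

```shell
# Attach the NAS to the new PVE host as an NFS-backed backup storage
pvesm add nfs backups-nas --server 192.168.1.50 \
    --export /volume1/backups --content backup

# Confirm the vzdump archives are visible
pvesm list backups-nas

# Restore one archive into a fresh VMID on local storage
qmrestore /mnt/pve/backups-nas/dump/vzdump-qemu-100-2017_01_25.vma.lzo \
    100 --storage local-lvm
```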

Yannis


On Thu, 26 Jan 2017 at 00:09, Leonardo Dourado <
leonardo.dour...@itrace.com.br> wrote:

> Hi All!
>
> Can someone please advise what is the best procedure in case of recovery
> disaster?
>
> My scenario is:
> I have a PVE Server running a few machines, I also have a NAS with some
> disks... I wanna point to that NAS the backup of these machines (in a way I
> can recover them in another server in case of hardware failure).
>
> I see on PVE the Backup service, I am not sure if I have to recover a
> whole VM to another server that is the proper service, it looks too simple
> (mostly snapshots).
> My plan is have Proxmox as a main server for VMs so, I have to think about
> "if my hardware fails", what can I do to move the machines to another
> server...
>
> Much appreciated,
> Leonardo D.
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] win2008 with uefi

2017-01-17 Thread Yannis Milios
Just tested it by doing a P2V of a Win10 laptop with UEFI+GPT+Secure Boot
enabled using VMware Converter; it worked fine. However, I didn't test it
with a Dynamic Disks/RAID1 setup.


On Tue, 17 Jan 2017 at 16:06, Yannis Milios <yannis.mil...@gmail.com> wrote:

> How about using VMware Converter for P2V to vmdk file(s) and then attach
> the vmdk(s) to PVE ?
>
> or by using 'SSH Migration of a Windows physical machine to a VM raw file
> directly' described in the WiKi ?
>
> (
> https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#SSH_Migration_of_a_Windows_physical_machine_to_a_VM_raw_file_directly
> )
>
>
>
> On Tue, Jan 17, 2017 at 3:43 PM, lists <li...@merit.unu.edu> wrote:
>
>> On 17-1-2017 15:15, Alessandro Briosi wrote:
>>
>>> Can't you simply restore from windows image backup not using uefi boot?
>>> Eventually you would have to start in restore mode and edit the registry
>>> to enable the IDE device for boot (I have done this from a dead machine
>>> and it worked).
>>
>> Well, with me the Windows Installation Restore tool complains something
>> like "this machine boots using a different boot technology, i cannot
>> proceed"
>>
>> Perhaps you did it with a newer windows version? This is Windows 2008.
>>
>>> Don't think thay UEFI is required for booting windows 2008 anyway.
>>
>> Of course. I know that windows can boot without UEFI. That's also what my
>> goal is.
>>
>> The problem is that I'm stuck with this bare metal UEFI & Dynamic Disks
>> RAID1 windows 2008 machine that I would like to virtualise...
>>
>> MJ
>
>
> ___
>
>
> pve-user mailing list
>
>
> pve-user@pve.proxmox.com
>
>
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
>
>
>
> --
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] win2008 with uefi

2017-01-17 Thread Yannis Milios
How about using VMware Converter for P2V to vmdk file(s) and then attach
the vmdk(s) to PVE ?

or by using 'SSH Migration of a Windows physical machine to a VM raw file
directly' described in the WiKi ?

(
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#SSH_Migration_of_a_Windows_physical_machine_to_a_VM_raw_file_directly
)



On Tue, Jan 17, 2017 at 3:43 PM, lists  wrote:

>
>
> On 17-1-2017 15:15, Alessandro Briosi wrote:
>
>> Can't you simply restore from windows image backup not using uefi boot?
>> Eventually you would have to start in restore mode and edit the registry
>> to enable the IDE device for boot (I have done this from a dead machine
>> and it worked).
>>
>
> Well, with me the Windows Installation Restore tool complains something
> like "this machine boots using a different boot technology, i cannot
> proceed"
>
> Perhaps you did it with a newer windows version? This is Windows 2008.
>
> Don't think thay UEFI is required for booting windows 2008 anyway.
>>
> Of course. I know that windows can boot without UEFI. That's also what my
> goal is.
>
> The problem is that I'm stuck with this bare metal UEFI & Dynamic Disks
> RAID1 windows 2008 machine that I would like to virtualise...
>
>
> MJ
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE+SPICE smartcard protocol redirection support?

2016-12-15 Thread Yannis Milios
Hello,

I'm sorry if this has been asked before, but may I ask whether the PVE
SPICE implementation supports smartcard passthrough and, if so, how it can
be enabled?

For USB card readers USB redirection works, but for built-in ones
(laptops) smartcard protocol redirection is needed.

I'm currently evaluating PVE + SPICE as a possible VDI solution, and
smartcard redirection is mandatory for the setup.
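For context: upstream QEMU exposes smartcard support through a CCID device, so one avenue to experiment with — an untested assumption on my side, since PVE builds the KVM command line itself — is passing extra arguments via the VM's `args:` option and using a spice-gtk based client:

```shell
# /etc/pve/qemu-server/<vmid>.conf -- extra QEMU arguments (assumed,
# untested with PVE's SPICE proxy; device names per QEMU's CCID docs):
#   args: -device usb-ccid,id=ccid0 -device ccid-card-passthru,id=smartcard0

# Client side: remote-viewer built with smartcard support can forward
# the local reader over the SPICE channel
remote-viewer --spice-smartcard spice://pve-host:3128
```

Whether the PVE SPICE proxy passes the smartcard channel through is exactly the open question here, so treat this only as a starting point for testing.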

Thanks,

Yannis
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] drbdmanage License change

2016-11-20 Thread Yannis Milios
Regarding DRBD, is it possible to include the drbd8 kernel module +
userland utilities instead, which are not affected by the license change?

On Mon, 21 Nov 2016 at 06:20, Alexandre DERUMIER 
wrote:

> >>Is this an existing feature in qemu or still under development? (or
> >>planning)
>
> qemu already support block migration to remote nbd (network block device)
> server.
>
> qemu 2.8 have a new feature, COLO, which will allow HA without vm
> interruption. (continuous memory + block replication on remote node).
> I'll would like to implemented this, but first, we need to finish to
> implement live migration + live local storage migration.
>
>
> - Mail original -
> De: "Lindsay Mathieson" 
> À: "proxmoxve" 
> Envoyé: Dimanche 20 Novembre 2016 22:22:37
> Objet: Re: [PVE-User] drbdmanage License change
>
> On 21/11/2016 2:54 AM, Alexandre DERUMIER wrote:
> > I think we could manage this with qemu block replication.
>
> Very nice.
>
>
> Is this an existing feature in qemu or still under development? (or
> planning)
>
> --
> Lindsay Mathieson
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] P2V and UEFI...

2016-11-16 Thread Yannis Milios
I would use plain dd or Clonezilla to back up. Then restore to the VM and
adjust partitions/vdisks as needed using GParted.
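The dd route can be done over SSH in one step. A hedged sketch — device names, VMID and paths are illustrative, and the source should be booted from a live CD so the filesystems are quiescent:

```shell
# On the PVE host: stream a raw image of the source disk over SSH
ssh root@source-server "dd if=/dev/sda bs=4M" \
    | dd of=/var/lib/vz/images/100/vm-100-disk-1.raw bs=4M

# Then attach the raw image to the VM, boot a GParted live ISO inside
# the VM and move/resize the partitions, keeping the EFI System
# Partition and GPT layout intact so the guest still boots via UEFI.
```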



On Wednesday, 16 November 2016, Marco Gaiarin  wrote:

>
> I need to P2V a debian 8 server, installed on UEFI/GPT.
>
> A little complication born by the fact that i need to P2V in the same
> server (eg, image the server, reinstall it with proxmox, then create
> the VM), but i can move data elsewhere (to keep OS image minimal) and
> test the image with other PVE installation.
>
> Normally, i use 'mondobackup' for that, but mondo does not support UEFI
> (at least in debian).
>
>
> Also, i prefere to keep data in a second (virtual) disk, and backup
> that by other mean (bacula) so i need to ''repartition'' (better:
> reorganize data) in disks.
>
>
> So, summarizing: what tool it is better to use to do a (preferibly
> offline) image of some partition of a phisical server, respecting UEFI
> partitioning schema?
>
>
> I hope i was clear. Thanks.
>
> --
> dott. Marco Gaiarin GNUPG Key ID:
> 240A3D66
>   Associazione ``La Nostra Famiglia''
> http://www.lanostrafamiglia.it/
>   Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento
> (PN)
>   marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f
> +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
> http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] online migration broken in latest updates - "unknown command 'mtunnel'"

2016-11-11 Thread Yannis Milios
Just tested it with pve-qemu-kvm 2.7.0-6 and it works fine, thanks!

On Fri, Nov 11, 2016 at 12:28 PM, Wolfgang Bumiller <w.bumil...@proxmox.com>
wrote:

> Any chance you could compare pve-qemu-kvm 2.7.0-5 and this test build:
> <http://download2.proxmox.com/temp/pve/pve-qemu-kvm_2.7.0-6_amd64.deb> ?
>
> On Fri, Nov 11, 2016 at 12:11:27PM +, Yannis Milios wrote:
> > Not sure if it's related, but after upgrading yesterday to the latest
> > updates, Ceph snapshots take a very long time to complete and finally
> they
> > fail.
> > This happens only if the VM is running and if I check the 'include RAM'
> box
> > in snapshot window. All 3 pve/ceph nodes are upgraded to the latest
> updates.
> >
> > I have 3 pve nodes with ceph storage role on them. Below follows some
> more
> > info:
> >
> > proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
> > pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
> > pve-kernel-4.4.21-1-pve: 4.4.21-71
> > pve-kernel-4.4.19-1-pve: 4.4.19-66
> > lvm2: 2.02.116-pve3
> > corosync-pve: 2.4.0-1
> > libqb0: 1.0-1
> > pve-cluster: 4.0-47
> > qemu-server: 4.0-94
> > pve-firmware: 1.1-10
> > libpve-common-perl: 4.0-80
> > libpve-access-control: 4.0-19
> > libpve-storage-perl: 4.0-68
> > pve-libspice-server1: 0.12.8-1
> > vncterm: 1.2-1
> > pve-docs: 4.3-14
> > pve-qemu-kvm: 2.7.0-6
> > pve-container: 1.0-81
> > pve-firewall: 2.0-31
> > pve-ha-manager: 1.0-35
> > ksm-control-daemon: 1.2-1
> > glusterfs-client: 3.5.2-2+deb8u2
> > lxc-pve: 2.0.5-1
> > lxcfs: 2.0.4-pve2
> > criu: 1.6.0-1
> > novnc-pve: 0.5-8
> > smartmontools: 6.5+svn4324-1~pve80
> > zfsutils: 0.6.5.8-pve13~bpo80
> > openvswitch-switch: 2.5.0-1
> > ceph: 0.94.9-1~bpo80+1
> >
> > ceph status
> > cluster 32d19f44-fcef-4863-ad94-cb8d738fe179
> >  health HEALTH_OK
> >  monmap e3: 3 mons at {0=
> > 192.168.148.65:6789/0,1=192.168.149.95:6789/0,2=192.168.149.115:6789/0}
> > election epoch 260, quorum 0,1,2 0,1,2
> >  osdmap e740: 6 osds: 6 up, 6 in
> >   pgmap v2319446: 120 pgs, 1 pools, 198 GB data, 51642 objects
> > 393 GB used, 2183 GB / 2576 GB avail
> >  120 active+clean
> >   client io 4973 B/s rd, 115 kB/s wr, 35 op/s
> >
> >
> >
> > On Fri, Nov 11, 2016 at 7:05 AM, Thomas Lamprecht <
> t.lampre...@proxmox.com>
> > wrote:
> >
> > > On 11/10/2016 10:35 PM, Lindsay Mathieson wrote:
> > >
> > >> On 11/11/2016 7:11 AM, Thomas Lamprecht wrote:
> > >>
> > >>> Are you sure you upgraded all, i.e. used:
> > >>> apt update
> > >>> apt full-upgrade
> > >>>
> > >>
> > >> Resolved it thanks Thomas - I hadn't updated the *destination* server.
> > >>
> > >>
> > >
> > > makes sense, should have been made sense a few days ago this, would
> not be
> > > too hard to catch :/
> > >
> > > anyway, for anyone reading this:
> > > When upgrading qemu-server to version 4.0.93 or newer you should
> upgrade
> > > all other nodes pve-cluster package to version 4.0-47 or newer, else
> > > migrations to those nodes will not work - as we use a new command to
> detect
> > > if we should send the traffic over a separate migration network.
> > >
> > > cheers,
> > > Thomas
> > >
> > >
> > >
> > >
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > >
> > ___
> > pve-devel mailing list
> > pve-de...@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] online migration broken in latest updates - "unknown command 'mtunnel'"

2016-11-11 Thread Yannis Milios
Not sure if it's related, but after upgrading yesterday to the latest
updates, Ceph snapshots take a very long time to complete and finally they
fail.
This happens only if the VM is running and if I check the 'include RAM' box
in snapshot window. All 3 pve/ceph nodes are upgraded to the latest updates.

I have 3 pve nodes with ceph storage role on them. Below follows some more
info:

proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-47
qemu-server: 4.0-94
pve-firmware: 1.1-10
libpve-common-perl: 4.0-80
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-14
pve-qemu-kvm: 2.7.0-6
pve-container: 1.0-81
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
openvswitch-switch: 2.5.0-1
ceph: 0.94.9-1~bpo80+1

ceph status
cluster 32d19f44-fcef-4863-ad94-cb8d738fe179
 health HEALTH_OK
 monmap e3: 3 mons at {0=
192.168.148.65:6789/0,1=192.168.149.95:6789/0,2=192.168.149.115:6789/0}
election epoch 260, quorum 0,1,2 0,1,2
 osdmap e740: 6 osds: 6 up, 6 in
  pgmap v2319446: 120 pgs, 1 pools, 198 GB data, 51642 objects
393 GB used, 2183 GB / 2576 GB avail
 120 active+clean
  client io 4973 B/s rd, 115 kB/s wr, 35 op/s



On Fri, Nov 11, 2016 at 7:05 AM, Thomas Lamprecht 
wrote:

> On 11/10/2016 10:35 PM, Lindsay Mathieson wrote:
>
>> On 11/11/2016 7:11 AM, Thomas Lamprecht wrote:
>>
>>> Are you sure you upgraded all, i.e. used:
>>> apt update
>>> apt full-upgrade
>>>
>>
>> Resolved it thanks Thomas - I hadn't updated the *destination* server.
>>
>>
>
> makes sense, should have been made sense a few days ago this, would not be
> too hard to catch :/
>
> anyway, for anyone reading this:
> When upgrading qemu-server to version 4.0.93 or newer you should upgrade
> all other nodes pve-cluster package to version 4.0-47 or newer, else
> migrations to those nodes will not work - as we use a new command to detect
> if we should send the traffic over a separate migration network.
>
> cheers,
> Thomas
>
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Cache Tiering

2016-10-10 Thread Yannis Milios
>>...but there is one deal breaker for us and thats snapshots - they are
incredibly >> slow to restore.

You can try to clone the image instead of rolling it back to the snapshot.
It's much faster and is the method recommended by the official Ceph
documentation.

http://docs.ceph.com/docs/jewel/rbd/rbd-snapshot/#rollback-snapshot
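Following that documentation, the clone workflow looks roughly like this; a hedged sketch with illustrative pool/image/snapshot names:

```shell
# Clones require a protected snapshot
rbd snap protect rbd/vm-100-disk-1@snap1

# Create a writable clone (a fast copy-on-write child of the snapshot)
rbd clone rbd/vm-100-disk-1@snap1 rbd/vm-100-restored

# Optionally detach the clone from its parent so the snapshot can be
# unprotected and removed later
rbd flatten rbd/vm-100-restored
```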
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Missing offline node Proxmox 4

2016-09-21 Thread Yannis Milios
Sorry forgot the wiki link :)

https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node




On Wed, Sep 21, 2016 at 1:11 PM, Yannis Milios <yannis.mil...@gmail.com>
wrote:

> This process is described in detail here:
>
>
> On Wed, Sep 21, 2016 at 12:29 PM, Thomas Lamprecht <
> t.lampre...@proxmox.com> wrote:
>
>>
>> On 09/21/2016 12:35 PM, Bart Lageweg | Bizway wrote:
>>
>>> Thanks. It's working!
>>> Only it is still existing in the webinterface (already restart
>>> pve-cluster and moved /etc/pve/nodes/nodename and reload browser etc)
>>>
>>
>> Hmm, does
>>
>> cat /etc/pve/.members
>>
>> also still lists it?
>>
>> If not try restarting the pveproxy service also :)
>>
>> cheers,
>> Thomas
>>
>>
>>
>>
>>>
>>> -Oorspronkelijk bericht-
>>> Van: pve-user [mailto:pve-user-boun...@pve.proxmox.com] Namens Thomas
>>> Lamprecht
>>> Verzonden: woensdag 21 september 2016 12:03
>>> Aan: pve-user@pve.proxmox.com
>>> Onderwerp: Re: [PVE-User] Missing offline node Proxmox 4
>>>
>>> On 09/21/2016 11:37 AM, Bart Lageweg | Bizway wrote:
>>>
>>>> Hi,
>>>>
>>>> I want to delete an offline node from a Proxmox 4 cluster.
>>>> Node is not listed in pvecm nodes since it is offline.
>>>>
>>>> How to delete?
>>>>
>>> You can delete the node just fine with:
>>> pvecm delnode 
>>>
>>> Use the name of the offline node to delete it.
>>>
>>> If you cannot remember the offline node's name, see the output from:
>>>
>>> cat /etc/pve/.members
>>>
>>> All configured nodes should be listed there.
>>>
>>>
>>> After deleting the node from the cluster, ensure that the deleted node
>>> does not come up again in its old state, where it still thinks it belongs
>>> to the cluster.
>>> Easiest way to solve that is reinstalling it - or starting it up while
>>> not connected to the network (where the other nodes are) and remove
>>> corosync config.
>>>
>>> cheers,
>>> Thomas
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Missing offline node Proxmox 4

2016-09-21 Thread Yannis Milios
This process is described in detail here:


On Wed, Sep 21, 2016 at 12:29 PM, Thomas Lamprecht 
wrote:

>
> On 09/21/2016 12:35 PM, Bart Lageweg | Bizway wrote:
>
>> Thanks. It's working!
>> Only it is still existing in the webinterface (already restart
>> pve-cluster and moved /etc/pve/nodes/nodename and reload browser etc)
>>
>
> Hmm, does
>
> cat /etc/pve/.members
>
> also still lists it?
>
> If not try restarting the pveproxy service also :)
>
> cheers,
> Thomas
>
>
>
>
>>
>> -Oorspronkelijk bericht-
>> Van: pve-user [mailto:pve-user-boun...@pve.proxmox.com] Namens Thomas
>> Lamprecht
>> Verzonden: woensdag 21 september 2016 12:03
>> Aan: pve-user@pve.proxmox.com
>> Onderwerp: Re: [PVE-User] Missing offline node Proxmox 4
>>
>> On 09/21/2016 11:37 AM, Bart Lageweg | Bizway wrote:
>>
>>> Hi,
>>>
>>> I want to delete an offline node from a Proxmox 4 cluster.
>>> Node is not listed in pvecm nodes since it is offline.
>>>
>>> How to delete?
>>>
>> You can delete the node just fine with:
>> pvecm delnode 
>>
>> Use the name of the offline node to delete it.
>>
>> If you cannot remember the offline node's name, see the output from:
>>
>> cat /etc/pve/.members
>>
>> All configured nodes should be listed there.
>>
>>
>> After deleting the node from the cluster, ensure that the deleted node
>> does not come up again in its old state, where it still thinks it belongs
>> to the cluster.
>> Easiest way to solve that is reinstalling it - or starting it up while
>> not connected to the network (where the other nodes are) and remove
>> corosync config.
>>
>> cheers,
>> Thomas
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Yannis Milios
Do you use a single disk or a RAID array as backup storage?

What's the output of: 'cat /proc/mounts | grep ext4'   ?



On Tue, Sep 13, 2016 at 9:35 AM, miguel gonzalez  wrote:

> Sorry i forgot. I use local storage and ext4 for the vms and backups.
>
> Before in 3.4 I had ext3.
>
> Many thanks
>
> Dietmar Maurer  wrote:
>
> >>   I have realized backups are taking three times than before. I used to
> >> get 30 Mb/s as average for a backup and now I get around 10 Mb/s. The
> >> performance seems to drop after start.
> >
> >Also, what kind of storage/fs do you use for VM images and backup storage?
> >Maybe you simply use other mount option now?
> >
> >NOTE: newer kernels have other default mount options ...
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Restoring VM on ZFS-thin storage

2016-09-13 Thread Yannis Milios
I can't answer your question directly since I'm not familiar with the PVE
backup internals; however, as a workaround you could try the following:

If your ZFS volumes (where the VMs reside) have compression enabled, you can
reclaim unused space by:
- running sdelete on Windows VMs
- creating a zero-filled file via dd on Linux and then deleting it

ZFS compression should do its job after that.
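On the Linux side, the zero-fill step is just a few commands; the path and
size below are only illustrative (in a real guest you would fill most of the
free space):

```shell
# Fill free space with zeros so a compressed ZFS backend can reclaim it,
# then delete the file. Path and count are illustrative.
dd if=/dev/zero of=/tmp/zerofill bs=1M count=64 status=none
sync                # flush the zeros to the (virtual) disk
rm /tmp/zerofill    # free the blocks; compression shrinks them away
```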



On Monday, 12 September 2016, Mikhail  wrote:

> Hello,
>
> Right now I'm moving virtual machines from old 3.x node (storage model:
> LVM) to new 4.x node. The new node has ZFS RAID10 setup over 4x4TB local
> disks. My storage for virtual machines has ZFS Thin Provisioning enabled
> and it actually works for newly created virtual machines - in other
> words I can give 100GB disk to machine, and only actually used space
> will be taken from available ZFS pool.
>
> This, however, does not work for restored from backups virtual machines.
> I'm doing simple backup-restore procedure (backup on 3.x node and
> restore on 4.x node to ZFS), and what I see is that if there's 100GB
> disk in backup then it takes 100GB on restore (even thought actually
> there's only 5GB used). Does that mean I won't be able to take advantage
> of ZFS Thin Provisioning when restoring vms from backups?
>
> regards,
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Yannis Milios
If your backup target is an NFS server, try mounting it with vers=3 instead
of vers=4.
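For a PVE-managed NFS storage, one way to do this (the storage id, server,
and paths below are examples) is to add an options line to the entry in
/etc/pve/storage.cfg:

```
nfs: backup
        server 192.168.1.10
        export /export/backup
        path /mnt/pve/backup
        options vers=3
        content backup
```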




On Monday, 12 September 2016, Miguel González 
wrote:

> Hi,
>
>   I have a software RAID of 2 Tb HDs. I have upgraded from 3.4 to 4.2
> and migrated the VMs that I had.
>
>   I have realized backups are taking three times than before. I used to
> get 30 Mb/s as average for a backup and now I get around 10 Mb/s. The
> performance seems to drop after start.
>
>   102: Sep 11 15:17:50 INFO: status: 0% (98828288/274877906944), sparse
> 0% (3125248), duration 3, 32/31 MB/s
> 102: Sep 11 15:20:45 INFO: status: 1% (2767192064/274877906944), sparse
> 0% (425738240), duration 178, 15/12 MB/s
> 102: Sep 11 15:25:39 INFO: status: 2% (5500174336/274877906944), sparse
> 0% (691355648), duration 472, 9/8 MB/s
> 102: Sep 11 15:33:58 INFO: status: 3% (8252162048/274877906944), sparse
> 0% (1068974080), duration 971, 5/4 MB/s
> 102: Sep 11 15:42:18 INFO: status: 4% (11327242240/274877906944), sparse
> 0% (2668666880), duration 1471, 6/2 MB/s
> 102: Sep 11 15:43:29 INFO: status: 5% (13744734208/274877906944), sparse
> 1% (4710879232), duration 1542, 34/5 MB/s
> 102: Sep 11 15:52:00 INFO: status: 6% (16493248512/274877906944), sparse
> 2% (5582757888), duration 2053, 5/3 MB/s
> 102: Sep 11 16:00:07 INFO: status: 7% (19305791488/274877906944), sparse
> 2% (6777946112), duration 2540, 5/3 MB/s
> 102: Sep 11 16:01:16 INFO: status: 8% (21993553920/274877906944), sparse
> 3% (9289428992), duration 2609, 38/2 MB/s
> 102: Sep 11 16:13:49 INFO: status: 9% (24741281792/274877906944), sparse
> 3% (9826746368), duration 3362, 3/2 MB/s
> 102: Sep 11 16:15:50 INFO: status: 10% (27600945152/274877906944),
> sparse 4% (12201222144), duration 3483, 23/4 MB/s
> 102: Sep 11 16:16:06 INFO: status: 11% (30475550720/274877906944),
> sparse 5% (15024570368), duration 3499, 179/3 MB/s
> 102: Sep 11 16:16:24 INFO: status: 12% (33080868864/274877906944),
> sparse 6% (17548828672), duration 3517, 144/4 MB/s
> 102: Sep 11 16:16:39 INFO: status: 13% (35794845696/274877906944),
> sparse 7% (20204036096), duration 3532, 180/3 MB/s
> 102: Sep 11 16:18:12 INFO: status: 14% (38648938496/274877906944),
> sparse 8% (22802001920), duration 3625, 30/2 MB/s
> 102: Sep 11 16:19:03 INFO: status: 15% (41313632256/274877906944),
> sparse 9% (25259786240), duration 3676, 52/4 MB/s
> 102: Sep 11 16:19:19 INFO: status: 16% (44355354624/274877906944),
> sparse 10% (28259188736), duration 3692, 190/2 MB/s
> 102: Sep 11 16:19:27 INFO: status: 17% (46843101184/274877906944),
> sparse 11% (30695702528), duration 3700, 310/6 MB/s
> 102: Sep 11 16:19:31 INFO: status: 18% (49512448000/274877906944),
> sparse 12% (8658816), duration 3704, 667/6 MB/s
> 102: Sep 11 16:20:07 INFO: status: 19% (52250279936/274877906944),
> sparse 13% (35924496384), duration 3740, 76/4 MB/s
> 102: Sep 11 16:27:03 INFO: status: 20% (54986539008/274877906944),
> sparse 13% (36837548032), duration 4156, 6/4 MB/s
> 102: Sep 11 16:38:20 INFO: status: 21% (57730924544/274877906944),
> sparse 13% (37141852160), duration 4833, 4/3 MB/s
> 102: Sep 11 16:54:05 INFO: status: 22% (60478914560/274877906944),
> sparse 13% (37384355840), duration 5778, 2/2 MB/s
> 102: Sep 11 17:06:38 INFO: status: 23% (63270092800/274877906944),
> sparse 13% (37755158528), duration 6531, 3/3 MB/s
> 102: Sep 11 17:18:38 INFO: status: 24% (65971683328/274877906944),
> sparse 13% (38161530880), duration 7251, 3/3 MB/s
> 102: Sep 11 17:27:59 INFO: status: 25% (68727472128/274877906944),
> sparse 14% (38863835136), duration 7812, 4/3 MB/s
> 102: Sep 11 17:34:45 INFO: status: 26% (71475658752/274877906944),
> sparse 14% (39950573568), duration 8218, 6/4 MB/s
> 102: Sep 11 17:43:49 INFO: status: 27% (74227646464/274877906944),
> sparse 14% (40587501568), duration 8762, 5/3 MB/s
> 102: Sep 11 17:47:03 INFO: status: 28% (76968230912/274877906944),
> sparse 15% (42354982912), duration 8956, 14/5 MB/s
> 102: Sep 11 17:50:04 INFO: status: 29% (79811444736/274877906944),
> sparse 16% (44452319232), duration 9137, 15/4 MB/s
> 102: Sep 11 17:51:48 INFO: status: 30% (82502615040/274877906944),
> sparse 16% (46625193984), duration 9241, 25/4 MB/s
> 102: Sep 11 17:54:04 INFO: status: 31% (85239201792/274877906944),
> sparse 17% (48626528256), duration 9377, 20/5 MB/s
> 102: Sep 11 17:55:56 INFO: status: 32% (87972380672/274877906944),
> sparse 18% (50729697280), duration 9489, 24/5 MB/s
> 102: Sep 11 17:57:52 INFO: status: 33% (90723909632/274877906944),
> sparse 19% (52973953024), duration 9605, 23/4 MB/s
> 102: Sep 11 18:00:44 INFO: status: 34% (93464952832/274877906944),
> sparse 20% (54998253568), duration 9777, 15/4 MB/s
> 102: Sep 11 18:04:31 INFO: status: 35% (96289161216/274877906944),
> sparse 20% (56889286656), duration 10004, 12/4 MB/s
> 102: Sep 11 18:06:28 INFO: status: 36% (98957787136/274877906944),
> sparse 21% (58669006848), duration 10121, 22/7 MB/s
> 102: Sep 11 18:12:24 INFO: status: 37% 

Re: [PVE-User] Proxmox + ZFS: Performance issues

2016-04-25 Thread Yannis Milios

Hi Ralf,

Are both hard drives exactly the same model?

I've noticed that your drive uses a 512-byte sector size (instead of the 4K
sector size, which is the current trend).


In that case, is your pool properly aligned with ashift=9?

More info here: 
http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
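To check the alignment of an existing pool, something like this should work
(the pool name here is an example; ashift=9 means 2^9 = 512-byte sectors,
ashift=12 means 4K sectors):

```shell
# Print the cached pool configuration and pick out the ashift values:
zdb -C rpool | grep ashift
```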


Regards,

Yannis

On 04/25/2016 09:38 PM, Ralf wrote:

On 04/25/2016 10:14 PM, Lindsay Mathieson wrote:

On 25/04/2016 11:43 PM, Ralf wrote:

some more random analysis:
I used atop to debug the problem.

Turned out, that disks are 100% busy and the average io time is beyond
all hope. ZFS seems to read/write/arrange things in a random and not in
a sequential way.


What does "iotop -P --only" show?

That dd is using 100% of io. Ram usage is 5/10 GiB at the moment, so
more than enough free ram.

I suspect lack of ram is your problem.

Why do I need so much RAM for mirror raid? All ram-intensive features
are turned off.

   Ralf


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] stmfadm on Linux ZFS storage

2016-03-08 Thread Yannis Milios
What about running a virtualized OmniOS on that storage node, with the HDDs
passed through to the OmniOS VM? Could that be an option?

I presume that this will overcome your nic driver issue.

Regards,

On Tuesday, 8 March 2016, Mikhail  wrote:

> Answering to myself - it looks like I need IET iscsi provider to use ZFS
> over iSCSI from Linux storage (according to
> https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI).
>
> Now the problem is with building iscsitarget on Linux from
> iscsitarget-dkms package - it looks like it won't build on my current
> kernel which is Linux 4.3.0-0.bpo.1-amd64 x86_64.
>
> That's some bad news for me..
>
> On 03/08/2016 09:18 PM, Mikhail wrote:
> > Hello,
> >
> > I recently deployed new cluster with Storage being set on Debian Jessie
> > (8.0) with ZFS. Storage is shared between nodes using ZFS iSCSI feature.
> > However, I cannot create new VMs, creation fails with the following task
> > error:
> >
> >
> >
> > bash: /usr/sbin/stmfadm: No such file or directory
> > TASK ERROR: create failed - command '/usr/bin/ssh -o 'BatchMode=yes' -i
> > /etc/pve/priv/zfs/192.168.4.1_id_rsa root@192.168.4.1 
> /usr/sbin/stmfadm
> > create-lu -p 'wcd=true' -p 'guid=600144fed7ca7a4428f848f71594a1bc'
> > /dev/zvol/rdsk/rpool/vm-1101-disk-1' failed: exit code 127
> >
> > There's no "stmfadm" command available on my Linux storage. It looks
> > like this command is available only on Solaris based OSes.
> >
> > Initially I wanted to run OmniOS as an OS on my storage, but apparently
> > there's no support for X550 10GigE network cards in OmniOS-stable, so
> > this forced me to use Linux instead - it has support X550 Intel nics in
> > 4.x kernel.
> >
> > Any suggestions?
> >
> > Mikhail.
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com 
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZVOL question / confusion.

2015-12-07 Thread Yannis Milios

Hi Muhammad,

This should clear up things a bit (taken from Oracle ZFS Admin guide):

"A clone is a writable volume or file system whose initial contents are 
the same as the dataset from which it was created. As with snapshots, 
creating a clone is nearly instantaneous and initially consumes no 
additional disk space. In addition, you can snapshot a clone.


Clones can only be created from a snapshot. When a snapshot is cloned, 
an implicit dependency is created between the clone and snapshot. Even 
though the clone is created somewhere else in the dataset hierarchy, the 
original snapshot cannot be destroyed as long as the clone exists."


http://docs.oracle.com/cd/E18752_01/html/819-5461/gbcxz.html
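For illustration, the snapshot/clone dependency described above can be seen
with plain ZFS commands (the dataset names are made up):

```shell
zfs snapshot tank/vm-101-disk-1@base           # near-instant, no extra space
zfs clone tank/vm-101-disk-1@base tank/vm-110-disk-1
# The snapshot cannot be destroyed while the clone depends on it.
# To break the dependency, promote the clone first:
zfs promote tank/vm-110-disk-1
```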

Regards,
Yannis

On 12/07/2015 11:59 AM, Muhammad Yousuf Khan wrote:
I have been playing with Proxmox for testing. I have created a snapshot of a
VM with a ZFS command. I have also cloned that snapshot successfully. The
original on-disk size of the zvol was 28GB and the snapshot size was a few
MBs.



king-tank/local-vm/vm-101-disk-1   131G   893G 28.1G  -
king-tank/local-vm/vm-101-disk-1@data-1  20.4M  -  28.1G

now i have create a clone of a snapshot. vm-110-disk-1

king-tank/local-vm/vm-110-disk-18K   789G  28.1G -

Now my question: when I ran the clone command, it cloned the snapshot in a
few seconds. However, I was expecting the whole VM data (28GB) to be cloned,
and that process should have taken some time.


Does that mean a clone is just a copy of the snapshot (20MB) and not the
whole machine's data (the actual 28GB)?


If I want to use this clone and add it to the Proxmox API for further use,
how could I do it?


your help will be highly appreciated

Thanks,
Yousuf




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] virtio net limiter issue

2015-11-09 Thread Yannis Milios

Hi,

I remember that I had issues with virtio on previous pfSense versions.

Specifically, the traffic shaping was not working correctly, but in the
latest versions it has been fixed.


The best place to ask, though, is the pfSense forum or mailing list.



On 11/09/2015 07:27 PM, Luis G. Coralle wrote:

Hello all.

I have a kvm virtualized pfsense 2.2.4 amd64 on Proxmox 3.3-1 with 
virtio bus disk and virtio network devices.
In pfsense settings, I have two limiters 1 MB each, to limit up and 
down LAN respectively.
My speed tests not work properly. After changing network devices 
viritio to Intel e1000, the limiters are working properly.

Someone had this problem?

Thank you

--
Luis G. Coralle


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Installation question

2015-02-22 Thread Yannis Milios
Dietmar is right. The purpose of the installer is just to install the OS.
Particularly in the ZFS case, there are so many parameters to consider before
creating your first data pool that the installer simply cannot include them
all.
Install Proxmox on your first SSD disk and then create your data pool
carefully. There are a lot of articles on the internet about fine-tuning ZFS
that you can follow.
ZFS commands are simple to follow (you should get familiar with them anyway)
but powerful enough to make a costly mistake. The most important thing is to
plan before doing anything, because many of the choices you make are
irreversible.

Cheers!
On Feb 22, 2015 8:59 AM, Dietmar Maurer diet...@proxmox.com wrote:

  We have not been able to use a standard Proxmox install to select to
  install the OS onto SSD when the RAID controller is connected at
  installation time. Our workaround is to have the controller disconnected
  and then later to format the RAID array manually and then to link this
  as pve_data. A bit ugly I find.

 The purpose of the installer is to install the OS.

 After installation, you can add and configure other storage locations.

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Is it possible to use qemu Livebackup feature withProxmox?

2015-01-17 Thread Yannis Milios

 Now I am able to create live snapshots as many as I want, as frequently
 as I want and I can send incremental backups over the network. In this
 case the on-site backup is free (in time and in storage), remote backup
 is quick (just send the difference between snapshots over the network).

I use zfs on linux for years in several servers, even desktops, but one
 can try btrfs, too, which seems also has a lot of good features.


I agree with Istvan. ZFS is feature-rich (fast snapshots, compression,
deduplication, zfs send/recv) and works very well for both on-site and
off-site backups. I have been using it for some months now with great
success.
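A minimal sketch of the incremental send/receive flow mentioned above (the
dataset names and backup host are hypothetical):

```shell
# First, a full send of the initial snapshot:
zfs snapshot tank/vm-100-disk-1@monday
zfs send tank/vm-100-disk-1@monday | ssh backuphost zfs recv backup/vm-100-disk-1
# Later runs only transfer the delta between the two snapshots:
zfs snapshot tank/vm-100-disk-1@tuesday
zfs send -i @monday tank/vm-100-disk-1@tuesday | ssh backuphost zfs recv backup/vm-100-disk-1
```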
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] net type NAT DHCP

2014-11-26 Thread Yannis Milios
Hello

I can't answer how the DHCP server on NAT works, but if you don't need
internet access, why not just create a dummy vmbr interface in
/etc/network/interfaces and connect your VMs to that?
It's the same as host-only networking in VMware. That way you can set up an
internal DHCP server in one VM, which will provide IPs to the other VMs on
the same dummy network.
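A minimal sketch of such a dummy bridge in /etc/network/interfaces (the
bridge name is an example; it is not tied to any physical NIC):

```
auto vmbr2
iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
```

VMs attached to vmbr2 can then only talk to each other, much like a VMware
host-only network.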



On Wed, Nov 26, 2014 at 1:59 AM, Tonči Stipičević to...@suma-informatika.hr
 wrote:

  Hello to all,

 So, I find pretty usefull using NAT eth type for my test lab.
 But how is this dhcp service and pool being manged ?
 I bought both proxmox ebooks yesterday and found nothing about :-)

 When my VM is connected to the NAT network it gets 10.0.2.x IP (or
 something like that ) and appropriate gateway and even dns server that
 makes internet browsing working

 I would like to change IP pool and network and even shutdown dhcp service
 (vm is dhcp server) but being not able to do that

 in /etc/network/interfaces  there is no such definition and even no
 firewall commands for nat ...

 thank you very much in advance

 and




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Uncompressed backups are compressed?

2014-11-13 Thread Yannis Milios
VMA is the new backup format since PVE 2.3. It is uncompressed, and its size
should reflect the actual data occupied inside your VM (not the raw file
size). gzip and LZO are used for compressing this file.

http://pve.proxmox.com/wiki/VMA




On Thu, Nov 13, 2014 at 5:57 PM, Chris Murray chrismurra...@gmail.com
wrote:

 Hi,



 I may be missing the point here, but when choosing to backup a VM with
 compression = none, instead of LZO or GZIP, I’d expect an uncompressed file
 to be produced?



 I’m trying to backup to a device which will perform block-level dedupe,
 but I guess that won’t work too well when e.g. a 32GB RAW file turns into a
 15.9GB .vma file. Or is alignment preserved despite the difference in file
 size?



 Thank you,

 Chris

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NTFS/Windows Server corruption after successful live storage migration

2014-11-05 Thread Yannis Milios
hello,

I've never tried online storage migration, but what happens if you do the
following:

1. Create a Windows VM in raw format, on local disk storage (not the NFS
mount).

2. Start installing updates on Windows and initiate an online storage
migration to the NFS mount in qcow2 format.

Do you experience the same issue?

Reference: https://pve.proxmox.com/wiki/Storage_Migration
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox forum hijacked?

2014-09-09 Thread Yannis Milios
To Proxmox devs: please look at this forum section, because it looks like
someone hijacked it:

forum.proxmox.com/forums/13-What-Virtual-Appliances-do-you-want-to-see

Sent by mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox forum hijacked?

2014-09-09 Thread Yannis Milios
it was before...but now it seems that somebody fixed it.
On Sep 9, 2014 9:01 PM, sebast...@debianfan.de sebast...@debianfan.de
wrote:

 Which thread in the forum do you mean?



 ***
 http://www.lkg-nw.de
 Evangelische Gemeinschaft Neu Wulmstorf

 Am 09.09.2014 15:28, schrieb Yannis Milios:


 To Proxmox devs: please look at this forum section, because it looks like
 someone hijacked it:

 forum.proxmox.com/forums/13-What-Virtual-Appliances-do-you-want-to-see
 http://forum.proxmox.com/forums/13-What-Virtual-Appliances-do-you-want-to-see

 Sent by mobile



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox is lying to me??

2014-08-15 Thread Yannis Milios
Your reported disk size is 319G
and the virtual size is 160G.
So the total disk size is 319G, of which 160G is now occupied. Seems normal:
du reports the actual size and ls reports the max disk size.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Selfimage

2014-08-14 Thread Yannis Milios
maybe this can help? http://goo.gl/9vRy0Y
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] P2V with SelfImage

2014-08-12 Thread Yannis Milios
You can use IDE for your main disk when you first boot the Win2008R2 VM, and
then add a small secondary VirtIO disk (1GB). Windows will then detect the
new hardware and prompt you to install the VirtIO driver, which you will have
mounted as a CD-ROM. Install the VirtIO drivers, shut down the VM and change
the main disk to VirtIO. Finally, remove the small secondary VirtIO disk.
This way it should work.

Yannis Milios

Systems Administrator
Mob. +30 6932-657-029
Tel.   +30 211-800-1230
E-mail. yannis.mil...@gmail.com





On Tue, Aug 12, 2014 at 3:41 PM, Gilberto Nunes gilberto.nune...@gmail.com
wrote:

 Hi

 I'm performe a P2V with SelfImage...
 I'm try to do this with a MS Windows 2008 R2...
 Everything is gone ok, but I realize that I wanna use VirtIO as a Hard
 Disk,but the original VM doesn't have such drivers installed... I meant, do
 not have Virtio Windows Drivers
 I try install the sys packages but without success...
 Is there a way to install VirtIO with some installer? EXE or MSI or
 whatever...

 Thank you



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Selfimage

2014-08-12 Thread Yannis Milios
Why don't you try booting from the Win2k8 DVD to fix the MBR, or at least to
see if the data is there?
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox and IBM StorWize V3700

2014-07-20 Thread Yannis Milios
As long as it supports only iSCSI and not NFS, I think snapshots through
Proxmox would not be possible.
If it had NFS support, you could take live snapshots by using qcow2-format
VMs.
On Jul 20, 2014 5:37 PM, Gilberto Nunes gilberto.nune...@gmail.com
wrote:

 Thanks for all answers, but I meant make snapshot through Proxmox Web
 Interface...




 2014-07-20 10:53 GMT-03:00 THe_ZiPMaN flavio-...@zipman.it:

 On 07/20/2014 01:32 AM, Gilberto Nunes wrote:
  Hi
 
  I'm about to install Proxmox lastest version in a customer and this
  customer has a IBM Storwize V3700...
 
  So I am collecting some data about this storage... Mainly if it has
  hability to make snapshots on ISCSI protocol...
 
  Somebody knows something about it??

 It can make flash copies which are snapshots or clones of LUNS. Using
 them is really simple and you can join multiple flash copies into
 consistency groups, which is really helpful if you want to keep parts of
 the same application on multiple LUNS. Putting all them in the same
 consistency group allows to keep a synchronous snapshot of all the luns
 at the same time, giving you sure application consistency (I think this
 is the most important missing feature of LVM).

 Then you can map the snapshot targets to the backup server and make
 backup from them.

 The only problem is how to take the snapshots... you can schedule them
 on the Storewize software, but if you need to take them in a software
 controlled manner then you have to buy FlashCopy Manager (a bit
 expensive) or prepare scripts and launch them via ssh (not too difficult
 but prone to errors if you don't know exactly how to deal with SVC
 internals and CLI).

 See this video for a presentation.
 http://www.youtube.com/watch?v=MXWgGWjBzG4
 It's for the V7000 but it applies seamlessy to V3700 for this specific
 feature.


 --
 Flavio Visentin

 A computer is like an air conditioner,
 it stops working when you open Windows
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




 --
 Gilberto Ferreira

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox and IBM StorWize V3700

2014-07-19 Thread Yannis Milios
It looks like it has a technology called IBM FlashCopy, which I think is
similar to snapshots.
It supports up to 64 targets with the built-in license, which can be upgraded
to up to 2040 targets.
Check the link below:

http://www-03.ibm.com/systems/storage/disk/storwize_v3700/features.html
On Jul 20, 2014 2:33 AM, Gilberto Nunes gilberto.nune...@gmail.com
wrote:

 Hi

 I'm about to install the latest Proxmox version for a customer, and this
 customer has an IBM Storwize V3700...

 So I am collecting some data about this storage... mainly whether it is
 able to take snapshots over the iSCSI protocol...

 Does somebody know something about it?

 Thanks

 --
 Gilberto Ferreira

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Fwd: Re: Fwd: Snapshot

2014-07-10 Thread Yannis Milios
  I cannot change disk format...

That is normal. On an LVM volume you can only use the raw format.
However, this does not prevent you from doing LVM-based snapshots.
Did you create that VM and it still does not show you the Snapshot option?
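For completeness, a minimal dry-run sketch of what an LVM-based snapshot
backup looks like from the shell. The VG/LV names and backup path are
hypothetical; clear RUN to actually execute, as root, with enough free
extents in the VG:

```shell
#!/bin/sh
# Hypothetical VG/LV names; adjust to your setup.
RUN=${RUN:-echo}    # dry-run by default; set RUN= (empty) to execute

snapshot_backup() {
    # Create a copy-on-write snapshot sized for the writes expected
    # during the backup window.
    $RUN lvcreate --snapshot --size 2G --name vm-102-snap /dev/pve/vm-102-disk-1
    # Stream the frozen image to a compressed backup file.
    $RUN sh -c 'dd if=/dev/pve/vm-102-snap bs=1M | gzip > /backup/vm-102.raw.gz'
    # Drop the snapshot so it stops accumulating COW data.
    $RUN lvremove -f /dev/pve/vm-102-snap
}

snapshot_backup
```

The snapshot must be removed afterwards: a full COW area invalidates the
snapshot, so size it generously for busy volumes.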
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fwd: Snapshot

2014-07-10 Thread Yannis Milios
So we are confusing LVM snapshots with live snapshots, don't you think?

*Yes, of course that is the case. :)*

My doubt is whether, if I deploy my storage software under Ubuntu or Debian
or even CentOS, whatever the OS, I will get the same behavior as with a real
storage array.

*No, an iSCSI target on Ubuntu or any other Linux distro cannot provide
live snapshots the way real storage arrays do. But you can do something
similar if you use technologies like Ceph or ZFS on the storage side. Both
can provide live snapshots.*
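To illustrate the ZFS side: a storage-level snapshot of an exported zvol is
a single atomic command. A dry-run sketch (pool/dataset names are
hypothetical; clear RUN to execute on a host that actually runs ZFS):

```shell
#!/bin/sh
# Hypothetical pool/dataset names; adjust to your layout.
RUN=${RUN:-echo}    # dry-run by default; set RUN= (empty) to execute

take_snapshot() {
    # Atomic point-in-time snapshot of the exported volume ($1) named $2.
    $RUN zfs snapshot "$1@$2"
    # List snapshots under the dataset to confirm.
    $RUN zfs list -t snapshot -r "$1"
}

take_snapshot tank/iscsi/vm-100-disk-1 pre-backup
```

The snapshot is crash-consistent from the guest's point of view, like
pulling the power plug; for application consistency you still need to
quiesce inside the guest first.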
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Windows 7 Frozen after Migration

2014-07-04 Thread Yannis Milios
hello

Yes, I had the same problem and asked on the forum. It is a known issue with
SPICE. As a workaround, try closing remote-viewer before live-migrating the
VM, or try using a different display adapter (vmware or default) and see what
happens.
On Jul 4, 2014 9:16 PM, Gilberto Nunes gilberto.nune...@gmail.com wrote:

 I tried it now, and I saw that the problem occurs when I install the Spice
 Space Guest Tools...

 Here the message:

 Jul 04 15:11:48 ERROR: VM 100 not running
 Jul 04 15:11:48 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes'
 root@10.90.90.96 qm resume 100 --skiplock' failed: exit code 2
 Jul 04 15:11:48 Waiting for spice server migration
 Jul 04 15:11:50 ERROR: migration finished with problems (duration 00:00:15)
 TASK ERROR: migration problems




 2014-07-04 15:11 GMT-03:00 Gilberto Nunes gilberto.nune...@gmail.com:

 Hi...

 I have deploy a scenario here with 3 nodes...
 All machine is regular PC Core i5 2310...
 I install Windows 7 with Virtio Driver to HD and Network interface...
 I am using Conroe as CPU to VM...

 I am also have installed Spice Drivers.

 When I attempt to perform a migration from node01 to node02, everything is
 OK...
 But when I try to migrate from node2 to node3 and again to node10, I
 experience a system freeze.
 Has anyone seen the same behavior, in this or other scenarios?

 Thanks

 --
 Gilberto Ferreira




 --
 Gilberto Ferreira

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Taking Snapshots of the Proxmox host

2014-03-28 Thread Yannis Milios
http://pve.proxmox.com/wiki/Live_Snapshots

http://pve.proxmox.com/wiki/Storage_Model
On Mar 28, 2014 2:43 PM, Ikenna Okpala m...@ikennaokpala.com wrote:

 Hi,
 Is it possible to take snapshots of the Proxmox host?
 Can you please share the techniques that exist for this.

 Regards

 --
 Ikenna

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] openiscsi with Proxmox

2014-01-02 Thread Yannis Milios
Hello

I have followed these guides to build a two-node HA DRBD cluster + iSCSI,
but not for production use.
Link1 http://wiki.skytech.dk/images/4/44/Ha-iscsi.pdf Link2
Openfiler is discontinued.

Yannis Milios
 --
Systems Administrator
Mob. 0030 6932-657-029
Tel.   0030 211-800-1230
E-mail. yannis.mil...@gmail.com





On Thu, Jan 2, 2014 at 10:10 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:

 Is there anyone who has used open-iscsi with Proxmox in production?
 For example, Openfiler or a manual open-iscsi setup?
 What is your recommendation for the backend SAN/NAS box, other than FreeBSD
 or FreeNAS, as both lack DRBD and I want to set up HA with DRBD?
 Any suggestion would be highly appreciated.

 Thanks
 MYK

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Configuration Only Backups

2013-12-04 Thread Yannis Milios
maybe this can help you:

http://forum.proxmox.com/threads/7674-Restore-Proxmox-KVM-image-to-another-Proxmox-server

Yannis Milios
 --
Systems Administrator
Mob. 0030 6932-657-029
Tel.   0030 211-800-1230
E-mail. yannis.mil...@gmail.com





On Wed, Dec 4, 2013 at 12:43 AM, Richard Laager rlaa...@wiktel.com wrote:

 Is it possible to backup just the configuration files for VMs? The disk
 images are already on shared storage.

 Then how do I restore such backups to another cluster, which may have
 those VMIDs already in use. In other words, cluster A has VMIDs 101 and
 102. Cluster B has different VMs with the same VMIDs 101 and 102. I
 backup cluster A. When something happens to it and I need to restore on
 cluster B, how do I do that?

 --
 Richard

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Backup failure: 'query-backup' failed on a single node

2013-09-17 Thread Yannis Milios
I did some tests on 2 different NFS and SMB shares and I had the same very
slow backup performance.
Then I reverted from *pve-kernel-2.6.32-23-pve* to *pve-kernel-2.6.32-20-pve*
and the backup performance got back to normal speed.

Yannis Milios
 --
Systems Administrator
Mob. 0030 6932-657-029
Tel.   0030 211-800-1230
E-mail. yannis.mil...@gmail.com





On Mon, Sep 16, 2013 at 10:45 AM, Yannis Milios yannis.mil...@gmail.com wrote:

 Hello list,

 I have a single node running a single vm (win2k3).
 It was working fine on Proxmox 2.3.I upgraded this box to 3.1 last week by
 following:
 http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0.
 All went smoothly, except that backups do not complete for some reason.
 I have a local nfs mount on a NAS device which I use as a backup target:

 root@proxmox1:~# df -h
 Filesystem Size  Used Avail Use% Mounted on
 udev10M 0   10M   0% /dev
 tmpfs  194M  1.5M  192M   1% /run
 /dev/mapper/pve-root28G   16G   11G  60% /
 tmpfs  5.0M 0  5.0M   0% /run/lock
 tmpfs  387M   22M  366M   6% /run/shm
 /dev/mapper/pve-data65G  9.8G   55G  16% /var/lib/vz
 /dev/sda1  495M  123M  348M  27% /boot
 /dev/fuse   30M   16K   30M   1% /etc/pve
 *192.168.0.203:/VOLUME1/PUBLIC  442G  198G  244G  45% /mnt/pve/nfs1*
 *
 *
 The error I am receiving is:

 *VM 101 qmp command 'query-backup' failed - client closed connection*
 *
 *
 What I've noticed is that even if I invoke a manual backup job, the
 process starts but takes forever and then stops with the above message.
 During this time, if I ping the node I get very high response times and
 sometimes it is nearly inaccessible.
 I did a test to write a 100mb file at nfs mount:

 *root@proxmox1:~# dd if=/dev/zero of=/mnt/pve/nfs1/test.raw bs=1M
 count=100*
 *100+0 records in*
 *100+0 records out*
 *104857600 bytes (105 MB) copied, 340.931 s, 308 kB/s*
 *
 *
 It seems that for some reason the machine has a hard time communicating with
 the NAS device. The strange thing is that before the upgrade there was no
 problem. Could it be a NIC driver issue? I'm providing some more info in
 case anyone can help:


 *root@proxmox1:~# pveversion -v*
 *proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)*
 *pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)*
 *pve-kernel-2.6.32-20-pve: 2.6.32-100*
 *pve-kernel-2.6.32-16-pve: 2.6.32-82*
 *pve-kernel-2.6.32-17-pve: 2.6.32-83*
 *pve-kernel-2.6.32-18-pve: 2.6.32-88*
 *pve-kernel-2.6.32-23-pve: 2.6.32-109*
 *lvm2: 2.02.98-pve4*
 *clvm: 2.02.98-pve4*
 *corosync-pve: 1.4.5-1*
 *openais-pve: 1.1.4-3*
 *libqb0: 0.11.1-2*
 *redhat-cluster-pve: 3.2.0-2*
 *resource-agents-pve: 3.9.2-4*
 *fence-agents-pve: 4.0.0-1*
 *pve-cluster: 3.0-7*
 *qemu-server: 3.1-1*
 *pve-firmware: 1.0-23*
 *libpve-common-perl: 3.0-6*
 *libpve-access-control: 3.0-6*
 *libpve-storage-perl: 3.0-10*
 *pve-libspice-server1: 0.12.4-1*
 *vncterm: 1.1-4*
 *vzctl: 4.0-1pve3*
 *vzprocps: 2.0.11-2*
 *vzquota: 3.1-2*
 *pve-qemu-kvm: 1.4-17*
 *ksm-control-daemon: 1.1-1*
 *glusterfs-client: 3.4.0-2*


 *root@proxmox1:~# tail /var/log/kern.log*
 *Sep 16 02:35:11 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:18:47 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:19:35 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:20:05 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:20:35 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:20:59 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:21:41 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:22:05 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:22:41 proxmox1 kernel: r8169 :02:00.0: eth0: link up*
 *Sep 16 10:23:11 proxmox1 kernel: r8169 :02:00.0: eth0: link up*












___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Port Forwarding

2013-08-27 Thread Yannis Milios
Have you defined the IP of your router as the default gateway in the VM's
network configuration?
Is there a chance that a firewall is enabled on this VM? Can you ping the
IP of the VM from your router?
On Aug 27, 2013 7:45 PM, Keith Clark keithcl...@waterloosubstop.com
wrote:

 On 13-08-27 12:37 PM, Marco Gabriel - inett GmbH wrote:

 from the outside means a port forward from your router to the
 destination machine?

 That means you need to adapt your port forwarding rule to point to the
 virtual machine, or your packets will not reach it.

 Marco


 -----Original Message-----
 From: pve-user-boun...@pve.proxmox.com On Behalf Of Keith Clark
 Sent: Tuesday, 27 August 2013 18:34
 To: pve-user@pve.proxmox.com
 Subject: Re: [PVE-User] Port Forwarding

 On 13-08-26 07:41 PM, Paul Gray wrote:

 On 08/26/2013 06:38 PM, Keith Clark wrote:

 I've just installed an Ubuntu server machine under Proxmox and need
 to have access to port 25565.  Do I need to set that up in Proxmox,
 or will my port forwarding function in my router do that?

 Are you using NAT or bridge?

 If you're using a bridge, your router will do the magic...if you're
 doing NAT, then ... you might want to consider the bridge instead.


 I'm using a bridge and it still does not seem to be working for me. I
 can access the port within my local network, but the port remains closed to
 the outside. If I use that port on another standard desktop running
 Ubuntu, the port opens to the outside just fine.

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

  I mean that from a remote location I can reach the desired port on a
 standard desktop computer running Ubuntu, after I've forwarded that port
 through my modem.

 When I try the same thing to a virtual machine running on my proxmox
 server, I cannot get through, after I've forwarded that port to the proxmox
 server.

 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] LVM Group and VMDK

2013-02-08 Thread Yannis Milios
Hello,

I have tried migrating successfully in the past from .vmdk to lvm by using
this guide:

http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#VMware_to_Proxmox_VE_.28KVM.29

What I actually did was run the following command to migrate the vmdk
directly to LVM:

dd if=vm.vmdk of=/dev/VG/vm-102-disk-1
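One caveat worth adding: plain dd only yields a correct result when the vmdk
is a flat (raw-style) image; for sparse vmdk variants, qemu-img understands
the container format and can write the raw data straight onto the LV. A
dry-run sketch using the same paths as above (clear RUN to execute):

```shell
#!/bin/sh
RUN=${RUN:-echo}    # dry-run by default; set RUN= (empty) to execute

convert_vmdk() {
    # qemu-img parses the vmdk container ($1) and writes raw data to the LV ($2).
    $RUN qemu-img convert -f vmdk -O raw "$1" "$2"
}

convert_vmdk vm.vmdk /dev/VG/vm-102-disk-1
```

The target LV must already exist and be at least as large as the virtual
size of the vmdk.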





On Thu, Feb 7, 2013 at 5:14 PM, Fernando Sierra
fernando.sie...@ciemat.es wrote:

 Sorry,
 I thought the forum and the list were different.

 When I create a new VM in the LVM group, it's raw format only. I searched
 the Proxmox system and found the VM disk at /dev/LVMGroup/vm-100-disk-1.

 If I run #ls -la /dev/LVMGroup/ it shows a symbolic link to /dev/dm-6.
 Then I copy vm.raw to /dev/dm-6 and it works. But is that the correct
 way?

 What do you think?

 Thanks

 El 07/02/13 16:07, Martin Maurer escribió:

 Pls do not ask the same question in the forum and on the mailing list. This
 just doubles the work for everyone reading it and does not lead to a faster
 answer.
 

 Martin

 *From:* pve-user-boun...@pve.proxmox.com *On Behalf Of* Fernando Sierra
 *Sent:* Thursday, 07 February 2013 14:52
 *To:* pve-user@pve.proxmox.com
 *Subject:* [PVE-User] LVM Group and VMDK

 Hi,

 I have an LVM group in a Proxmox cluster and I want to import from VMware.
 I converted from .vmdk to qcow2 (or raw) with qemu-img.
 But when I create a new VM on the LVM storage I can only use the raw format,
 and I don't know where to find the disk so I can replace it with the
 imported disk.

 Does somebody know how I could import a vmdk into Proxmox on an LVM group?

 Thanks!

 --
 *Fernando Sierra Pajuelo**
 **System Administrator / Researcher**
 **at CETA-Ciemat, TRUJILLO, SPAIN*

 Disclaimer: This message and its attached files are intended exclusively for
 their recipients and may contain confidential information. If you received
 this e-mail in error you are hereby notified that any dissemination, copy
 or disclosure of this communication is strictly prohibited and may be
 unlawful. In this case, please notify us by a reply and delete this email
 and its contents immediately.


 ___
 pve-user mailing 
 listpve-user@pve.proxmox.comhttp://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] win2000 sp4 guest does not start with kvm enabled option?

2012-12-27 Thread Yannis Milios
Hi all,

I have a standalone machine with proxmox 2.2 installed for testing purposes.
I successfully P2V'd 3 Win2000 servers which I have in production by
following the steps on the wiki.
However, I see the following strange behavior:

All 3 guest machines stop loading at the first splash screen, which says:
Windows 2000 Starting up...
They don't give any BSOD, they just stay there forever.
If I disable the KVM option from the web interface and start the guests
again, they load normally, though very slowly.

Is this the expected behavior with Windows 2000? I also tried disabling
ACPI, with no results...
I have to mention that I don't have this problem with XP or Win2003 guests.

Any suggestion is appreciated.

Thank you
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Only one lun shows up on a ha iscsi storage.

2012-11-19 Thread Yannis Milios
Hello all,

I have followed this guide (
http://www.linbit.com/fileadmin/tech-guides/ha-iscsi.pdf) to build a two-node
active/passive storage cluster using DRBD, Pacemaker and the IET iSCSI
target. The cluster works correctly and I can connect to the iSCSI target
and the two LUNs (lun0, lun1) inside it from a Win7 PC with the iSCSI
initiator.
Now, I also have a two-node Proxmox cluster which I want to connect to this
storage via the iSCSI target.
I am using version 2.2-30. The problem is that when I add the iSCSI target
from the Datacenter - Storage menu, I can see only one of the two LUNs
(lun0) inside the target.
What could be the problem? I didn't have such an issue with an Openfiler SAN
that I had tried before.

Thank you
Yannis
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user