Discards will only be issued if
both the storage and the kernel provide support.
issue_discards = 0
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
mir datanom net
https://pgp.key-serve
Are you sure it is an SSD? I don't recall that WD has produced the WD Blue
as an SSD.
Try searching for issue_discards in /etc/lvm/lvm.conf
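A quick way to check and flip that setting, sketched here against a sample
copy so nothing real is touched (the real file is /etc/lvm/lvm.conf; back it
up before editing):

```shell
# Hypothetical sketch: locate and enable issue_discards the way you would
# in /etc/lvm/lvm.conf, demonstrated on a sample copy.
conf=/tmp/lvm-sample.conf
printf 'devices {\n    issue_discards = 0\n}\n' > "$conf"

grep -n 'issue_discards' "$conf"    # locate the current setting
sed -i 's/issue_discards = 0/issue_discards = 1/' "$conf"
grep -n 'issue_discards' "$conf"    # now enabled
```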
a test
configuration.
On Mon, 1 Apr 2019 13:06:27 -0400
David Lawley wrote:
> are backups made with 3.4 (via backup at the Datacenter options)
> compatible with 5.3 PVE, to restore from?
>
It depends on whether the backups are of VMs or containers.
spice
bind *:3128
option tcpka
balance source
server esx1 :3128 check
server esx2 :3128 check
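For reference, a fuller version of such a SPICE proxy section might look like
this (hostnames and addresses are placeholders, not taken from the thread):

```
listen spice
    bind *:3128
    mode tcp
    option tcpka
    balance source
    server pve1 192.168.0.11:3128 check
    server pve2 192.168.0.12:3128 check
```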
On Tue, 4 Sep 2018 12:50:49 -0300
Gilberto Nunes wrote:
> Yes. I use the default lzo.
>
To be able to compare you need to use gzip.
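For example (VMID and storage name are placeholders, not from the thread):

```
# Back up VM 100 with gzip instead of the default lzo
vzdump 100 --compress gzip --storage local
```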
On Tue, 4 Sep 2018 12:24:18 -0300
Gilberto Nunes wrote:
>
> With pigz and bwlimit it's not suppose to increase the performance or
> I miss something?
>
Without pigz, do you then use gzip or the default lzo? Otherwise you are
comparing apples to pears.
and would be
> better to have a native integration, at least at the API level.
>
The MAC address is configurable for a VM, so you can match an IP <-> MAC
relation with fixed addresses.
The distribution which Proxmox is built upon is
not designed to be used from a USB stick.
See more of M.2 here: https://en.wikipedia.org/wiki/M.2
to upgrade to latest kernel release
r151022i if you have HBA's based on LSI SAS >= 2300 (using mr_sas
driver)
bringing vms down - this
> simplifies storage migration a lot in live environment!
>
Remember to get it here: http://www.omniosce.org/
away things like MDADM and LVM
> this time and replacing them with ZFS for storage purposes.
>
You mentioned before that you hoped to use OmniOS. The latest stable
release now supports your NICs.
017
> State : active, checking
> Active Devices : 4
> Working Devices : 4
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : near=2
> Chunk Size : 512K
>
Try reading here:
http://dennisfleurbaaij.blogspot.dk/2013/01/setting-up-linux-mdad
The former is for HA VMs, the latter for non-HA VMs.
On June 20, 2017 6:19:36 PM GMT+02:00, Uwe Sauter
wrote:
>Hi all,
>
>usually when I update my PVE cluster I do it in a rolling fashion:
>1) empty one node from running VMs
>2) update & reboot that node
>3) go to next
> root@pve:/etc/apt/sources.list.d# apt-get update
> Hit:1 http://ftp.us.debian.org/debian stretch InRelease
Proxmox 5 is not released yet, so you won't find it in the enterprise repo.
It is only available in the stretch pvetest repo:
deb http://download.proxmox.com/debian/pve stretch pvetest
00,hwaddr=3A:66:32:39:38:65,ip=dhcp,type=veth
at your proxmox nodes are connected
through a trunk port the rest can wait.
On Tue, 4 Apr 2017 16:58:37 +0200
Guillaume <prox...@shadowprojects.org> wrote:
> They support vlan between servers in the vrack.
>
Then you need to have a trunk port instead of an access port in the
switch where the servers are connected.
nt proxmox nodes the switch must
support vlan so you should ask OVH support whether vlans between
servers are supported.
you manually disable some CPU flags
in the config file for your VMs. As I recall, this was also the case
somewhere in 2.x or maybe early 3.x.
8307.91
> Virtualization:AMD-V
> L1d cache: 16K
> L1i cache: 64K
> L2 cache: 2048K
> L3 cache: 8192K
> NUMA node0 CPU(s): 0-7
>
I need to see the CPU flags, so could you run this on both Proxmox hosts:
cat /proc/cpuinfo |gr
from the VM (if it is Linux):
lscpu
If VM is *BSD: dmesg -a |grep -i features
If VM is Windows ask somebody else.
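To compare the two hosts, one approach is to diff the sorted flag lists; a
hypothetical sketch with sample data standing in for the real /proc/cpuinfo
output of each host:

```shell
# Sample flag lines stand in for `grep -m1 '^flags' /proc/cpuinfo` run on
# each host; one flag per line, sorted, so they can be compared.
printf 'fpu vmx sse sse2 aes\n' | tr ' ' '\n' | sort > /tmp/host1.flags
printf 'fpu sse sse2\n'         | tr ' ' '\n' | sort > /tmp/host2.flags

# Flags present on host1 but missing on host2 - these can break live
# migration if the VM's CPU model exposes them.
comm -23 /tmp/host1.flags /tmp/host2.flags > /tmp/only-host1.flags
cat /tmp/only-host1.flags
```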
On Fri, 10 Feb 2017 12:16:07 -0600
Gerald Brandt <g...@majentis.com> wrote:
>
> No errors on migration and processor type is KVM 64
>
Try qemu64 instead.
Latest pve kernel has a fix for a serious oom killer bug. I would try upgrading
your kernel before anything else.
On February 4, 2017 3:16:05 PM GMT+01:00, Michele Bonera
wrote:
>On 04/02/2017 12:35, Alwin Antreich wrote:
>
>> Hi Michele,
>>
>> On 02/04/2017 10:44 AM,
datacenters.
s disabled on
ext3 by default so I guess this means you manually add barrier=1 to
all your ext3 mount points?
too aggressive. My
> impression is that with ext3 I got better performance.
>
You can achieve the same performance and security level with ext4 as
you did with ext3 by using this mount option: barrier=0
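In /etc/fstab that would look something like this (device and mount point are
placeholders):

```
# ext4 with write barriers disabled, matching ext3's old default
/dev/pve/data  /var/lib/vz  ext4  defaults,noatime,barrier=0  0  1
```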
hang if the slave or the master is down
which effectively means you have a single point of failure. Adding an
extra slave makes you resilient to one node down.
new implementation will support multipath. As this
is developed in my spare time, progress is not as high as it could be.
Alternatively you could look at this:
http://www.napp-it.org/doc/downloads/z-raid.pdf
On Tue, 22 Nov 2016 18:04:39 +
Dhaussy Alexandre <adhau...@voyages-sncf.com> wrote:
> Le 22/11/2016 à 18:48, Michael Rasmussen a écrit :
> > Have you tested your filter rules?
> Yes, i set this filter at install :
>
> global_filter = [ "r|sd[b-z].*|", "
Have you tested your filter rules?
On November 22, 2016 6:12:27 PM GMT+01:00, Dhaussy Alexandre
<adhau...@voyages-sncf.com> wrote:
>
>Le 22/11/2016 à 17:56, Michael Rasmussen a écrit :
>> On Tue, 22 Nov 2016 16:35:08 +
>> Dhaussy Alexandre <adhau...@voyages-sncf
target from Qnap NAS.
# Block scanning from all other block devices.
filter = [ "a|ata-OCZ-AGILITY3_OCZ-QMZN8K4967DA9NGO.*|",
"a|scsi-36001405e38e9f02ddef9d4573db7a0d0|", "r|.*|" ]
A long shot. Do you have a hardware watchdog enabled in bios?
On November 11, 2016 4:28:09 PM GMT+01:00, Dhaussy Alexandre
wrote:
>> Do you have a hint why there is no messages in the logs when watchdog
>> actually seems to trigger fencing ?
>> Because when a node
but I need migrate the running VM from PVE 4.3 to PVE
> 4.2.
>
AFAIK this is not possible since pve 4.2 uses an older version of qemu.
acpi-support-base is sufficient.
On November 3, 2016 3:53:33 PM GMT+01:00, Karsten Becker
wrote:
>Hi,
>
>I also got this prob when setting up our new Jessie based VMs... the
>solution is simple.
>
>The package acpid on Debian just installs the daemon. But not the
On Sun, 30 Oct 2016 08:21:34 +1000
Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote:
> Any particular reason why the webui only supports active-backup, balance-slb
> and balance-tcp bonds?
>
>
AFAIK these are the bonding modes available in OVS bonds.
the use of a
RAID controller, then yes. If you care for your data a BBU is
absolutely vital and will give you a noticeable performance boost.
Is it possible to switch to 802.3ad bond mode?
On October 26, 2016 11:12:06 AM GMT+02:00, "Szabolcs F."
wrote:
>Hi Lutz,
>
>my bondXX files look like this: http://pastebin.com/GX8x3ZaN
>and my corosync.conf : http://pastebin.com/2ss0AAEr
>
>Mutlicast is enabled on my
On Sat, 22 Oct 2016 13:28:06 +0200
"sebast...@debianfan.de" <sebast...@debianfan.de> wrote:
>
> What's up now ?
>
Read this from CheckPoint support:
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails==sk43772
using network backed storage you could still have lots of free
CPU and RAM but still see VM's running in slow motion due to network
congestion.
On Mon, 17 Oct 2016 15:58:35 +0200
Dominik Csapak <d.csa...@proxmox.com> wrote:
> so cpu is the node average
Why average and not total? If this is supposed to be a cluster-wide
dashboard, average gives no meaning.
On Mon, 17 Oct 2016 16:17:48 +0200
Michael Rasmussen <m...@miras.org> wrote:
> I would say 5 out of 6 cores is in use so 83,33 % CPU usage in the
> cluster.
>
Forgot to mention: For people coming from VmWare this makes sense since
that is how vSphere cluster client displays
On Mon, 17 Oct 2016 16:14:37 +0200
Dominik Csapak <d.csa...@proxmox.com> wrote:
> On 10/17/2016 04:04 PM, Michael Rasmussen wrote:
> > On Mon, 17 Oct 2016 15:58:35 +0200
> > Dominik Csapak <d.csa...@proxmox.com> wrote:
> >
> >> so cpu is the n
On Wed, 5 Oct 2016 17:35:28 +0200
Michael Rasmussen <m...@miras.org> wrote:
> My biggest issue with the current functionality is that I sometimes
> forgets a VM is HA enabled so pressing 'Shutdown' to be able to do
> maintenance has the unwanted side effect of a reboot. Maybe
wanted side effect of a reboot. Maybe the
shutdown and stop button could be configurable in cluster.cfg defaulting
to current functionality with the option of having:
shutdown: HA-stop -> shutdown
stop: HA-stop -> stop
The difference is that 'Stop' and 'Shutdown' behave
differently depending on whether they are used on an HA-enabled VM or not.
For a non-HA:
Shutdown: Clean shutdown
Stop: Unclean shutdown
For a HA:
Shutdown: Reboot
Stop: Clean shutdown
So either non-HA misses the reboot option or the HA misses the unclean
shutdown option.
t
> exist. Turns out it does,
> for some reason it's just hidden.
>
The reason it is hidden is that it is not stable yet.
. You can simply add ver=4 under your storage configuration in
storage.cfg
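A sketch of such a storage.cfg entry (server, export, and storage name are
placeholders; the NFS mount option is usually spelled vers=4):

```
nfs: nas1
    server 192.168.0.20
    export /tank/backup
    path /mnt/pve/nas1
    content backup
    options vers=4
```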
Omniti.com offers enterprise support for Omnios.
On September 15, 2016 12:24:23 PM GMT+02:00, Dimitris Beletsiotis
wrote:
>I am aware of a cloud provider that is using storpool and has
>encountered a lot of problems with storage availability resulting in
On Thu, 8 Sep 2016 10:10:47 -0500
Hexis <lists@hexis.consulting> wrote:
> Apparently it does! Thanks. What exactly does that do?
>
https://en.wikipedia.org/wiki/Non-uniform_memory_access
Does unchecking NUMA support in the VM help?
On September 8, 2016 4:41:03 PM GMT+02:00, Hexis wrote:
>I have had a Linux Mint VM running on ProxMox VE for about 4 months,
>for
>the past 2 months it has been powered down and not in use. A couple of
>updates later I tried
On Tue, 26 Jul 2016 23:42:10 +0200
Florent B <flor...@coppint.com> wrote:
>
> Is it expected ?
>
For Windows it is.
On Wed, 6 Jul 2016 05:12:58 -0300
Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
> hum... I miss this part too... Why create a bridge over a bond???
>
That is the preferred way in Proxmox.
255.255.255.0
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 200
bond-updelay 200
PS. it is a bad design to assign an address directly to the bond.
Instead create bridges over this bond and assign addresses to the
bridges.
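A sketch of that layout in /etc/network/interfaces (interface names and the
address are placeholders):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```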
Your test is completely useless; it does not prove anything.
On June 17, 2016 9:06:45 AM GMT+02:00, haoyun wrote:
>First of all,thank you for answering.
>the result is : it is not vm's problem,it is raid card problem,raid
>card
Have you tried changing the MAC?
On May 27, 2016 9:03:48 AM GMT+02:00, Lindsay Mathieson
wrote:
>On 27 May 2016 at 00:56, Alwin Antreich
>wrote:
>> may you please post your CT config, as I
Split brain. Do you use the two_node option in corosync?
On May 23, 2016 5:24:45 AM GMT+02:00, haoyun wrote:
>hello everyone~
>my pve cluster with shared storage,
>I run a vm in this physical machine,and I active this volumes in
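The two_node option mentioned above lives in the votequorum section of
corosync.conf; a minimal sketch:

```
quorum {
    provider: corosync_votequorum
    # two_node: 1 lets a 2-node cluster keep quorum with one vote,
    # but it implies wait_for_all and does not remove split-brain risk.
    two_node: 1
}
```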
st=0, time=0.020ms
>
Is /etc/hosts identical on all nodes?
odel of the switch?
On Sun, 22 May 2016 15:47:59 +0200
Daniel Eschner <dan...@linux-nerd.de> wrote:
> Mhh
>
> have corosync problem with bonding maybe?
>
Looks more like a multicast problem to me.
On Tue, 17 May 2016 17:55:25 +0200
Eneko Lacunza <elacu...@binovo.es> wrote:
>
> We're having trouble connecting a SATA to USB3 dock with kernel 4.4.6-1 .
> Details:
>
Is it a USB 3.1 device?
USB 3.1 is first supported from kernel 4.6.
IGMP
> querier.)
>
What kind of switch do you have?
storage (the storage will be treated as a local disk on the proxmox
host).
I would try setting the disk controller in Proxmox to SATA.
On April 1, 2016 11:06:30 AM GMT+02:00, Edgardo Ghibaudo
wrote:
>Il 31/03/2016 13:43, Emmanuel Kasper ha scritto:
>> On 03/31/2016 12:39 PM, Edgardo Ghibaudo wrote:
>>> Hi Emmanuel,
>>> In VMware5, the
ietd.
n recommend omnios.
his has been
found and should come to a repo near you soon;-)
MB/s. The rest is simple math ;-)
ailed: exit code 1
>
You are stumbling into an old unresolved missing feature in qemu-img.
See this thread:
http://pve.proxmox.com/pipermail/pve-devel/2014-March/010517.html
Bottom line: You cannot use host group since qemu-img does not submit
initiator-name to libiscsi.
On Sat, 20 Feb 2016 17:18:06 +0100
Christian Kivalo <ml+pve-u...@valo.at> wrote:
> Isn't there a way in napp-it to manage snapshots? i haven't used it so can't
> say.
>
'Snapshots' tab -> 'Create Datasnap'
Use standard ZFS commands.
http://www.datadisk.co.uk/html_docs/sun/sun_zfs_cs.htm
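For example (pool and dataset names are placeholders; shown for illustration
only):

```
# Create, list, and roll back a snapshot of a VM disk volume
zfs snapshot tank/vm-100-disk-1@before-upgrade
zfs list -t snapshot
zfs rollback tank/vm-100-disk-1@before-upgrade
```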
SI storage?
Use the VM's snapshot tab
> Is there some CLI for Proxmox or some command?
All the standard CLI commands in Proxmox work with your new storage.
is also maintained in Core OS. Complete cli available so no fiddling around
with config files. (my add-on :-)
omnios here: http://lists.omniti.com/mailman/listinfo/omnios-discuss
e for Omnios (and napp-it):
http://www.napp-it.org/doc/downloads/napp-it.pdf
, don't you agreed?
>
I have not tried Ubuntu, or any Linux for that matter, as ZFS storage
server, so I cannot give any advice. Why not try OmniOS? (the cost is the
same as with Ubuntu - only your own precious time ;-)
ive you:
Random read I/O: ca. 4000 IOPS
Random write I/O: ca. 1200-1500 IOPS
Add to this: live migration, live snapshots, linked clones.
dy?
_Tweaks
Default mount options for ext4 in debian 8.2: type ext4
(rw,noatime,data=ordered)
You could try: rw,noatime,data=ordered,barrier=1
: Sub-process /usr/bin/dpkg returned an error code (1)
>
>
Try from CLI:
sudo dpkg --configure -a
sudo apt-get dist-upgrade (Just to be sure)
and if
this backup succeeds it will delete the oldest backup, ensuring there are
only 2 backups at any given time.
will start backups concurrently if the
VMs to be backed up are running on different Proxmox hosts. To
overcome this problem you should make a scheduled backup for each
Proxmox host at a different time, so that at any given time only
one scheduled backup is running.
On Wed, 4 Nov 2015 18:29:52 +
Ikenna Okpala <m...@ikennaokpala.com> wrote:
> I already tried disabling firewalld
>
> systemctl disable firewalld
>
> Problem still there..
>
Could it by any chance be related to a duplicated MAC address?
On Wed, 4 Nov 2015 17:40:30 +
Ikenna Okpala <m...@ikennaokpala.com> wrote:
>
> Can anyone give me Idea what I may be doing wrong?
>
Local firewall active in CentOS 7?
On Mon, 2 Nov 2015 20:16:14 -0200
Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
> Mergeide.reg perhaps???
>
See:
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Physical_server_to_Proxmox_VE_.28KVM.29_using_Clonezilla_Live_CDs
esults and how to fix it?!?!
>
> Thanks for any help
>
Aren't you supposed to run a script which resets the hardware settings?
What cache settings do you have for the disks?
On October 29, 2015 2:09:33 PM CET, Gilberto Nunes
wrote:
>Disk is virtio, alright!
>Because I need live migration and HA.
>With IDE/SATA there is no way to do that, AFAIK!
>
>2015-10-29 11:07 GMT-02:00 Luis G. Coralle
kernel modules which are supposed to hook into
the network stack at the same place. What you discover is that when
migrating between a Linux bridge and an Open vSwitch node, the running VM
talks either language but not both at the same time (so to speak).
Remember that soft locks are dangerous and can cause data loss.
On October 27, 2015 2:58:25 PM CET, Gilberto Nunes
wrote:
>Now the VM seems to doing well...
>
>I put some limits on the Virtual HD ( Thanks Dmitry), mount NFS with
>soft
>and proto=udp, and right now stress
This is only a problem with 32-bit VMs.
On September 29, 2015 1:16:54 PM CEST, Dmitry Petuhov
wrote:
>Hello, all.
>
>There's question arouse. Does KVM in PVE have some limitation on disk
>size? I've seen reports that with qcow2 there are problems with >2TB.
>Are there
esses
> there, no hostnames (this still needs changing...).
>
And IPv6?
I would personally never virtualize my edge routers, but I do use
virtualization for the networks inside Proxmox. I have good experience
using pfSense as a virtualized router for those virtualized networks.
all new VMs, while a cache setting at the VM level affects only
that specific VM. What you can see in napp-it is the cache setting at the
iSCSI level, while the cache setting at the VM level is only used as an
option when starting the VM in Proxmox.
I hope the above explanation makes sense.