like iSCSI,
and thus should not be mounted by two nodes at the same time).
I agree. Should I add another parameter for this? If so, should it
default to auto-import or not?
Best regards,
Adrian Costin
___
pve-devel mailing list
pve-devel
- Moved the zpool_import method out of zfs_request() into its own
pool_request function
- activate_storage() now uses zfs list to check whether the zpool is imported
- Import only the configured pool, not all accessible pools
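The activation logic described above could be sketched roughly like this (the function name, pool name, and exact flags are illustrative assumptions, not the patch itself):

```shell
# Illustrative sketch only: use `zfs list` to test whether the configured
# pool is already imported, and import just that one pool if it is not.
activate_pool() {
    pool="$1"   # the single configured pool (name is an assumption)
    if ! zfs list -H -o name "$pool" >/dev/null 2>&1; then
        zpool import "$pool"   # import only this pool, never all accessible pools
    fi
}
```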
Signed-off-by: Adrian Costin adr...@goat.fish
---
PVE/Storage
Please remove the private key here!
I guess it wasn't necessary. I've removed it and everything seems to work.
Best regards,
Adrian Costin
-ssl.pem contains the following:
-----BEGIN CERTIFICATE-----
[My Cert]
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
[Intermediate cert]
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
[Private key]
-----END RSA PRIVATE KEY-----
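To illustrate the fix being asked for, here is a minimal sketch (not PVE code; function name and approach are my own) that separates the certificate blocks from the private-key block in a combined PEM bundle, so the key can be kept in its own file:

```python
def split_pem(bundle: str):
    """Split a combined PEM bundle into certificate and private-key blocks."""
    blocks, current = [], []
    for line in bundle.splitlines():
        current.append(line)
        if line.startswith("-----END"):
            # an -----END ...----- marker closes the current PEM block
            blocks.append("\n".join(current))
            current = []
    certs = [b for b in blocks if "CERTIFICATE" in b]
    keys = [b for b in blocks if "PRIVATE KEY" in b]
    return certs, keys
```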
Best regards,
Adrian Costin
an extra get to
find out the node name.
Is this a bug or is it the intended behaviour?
I'm running the latest packages from pvetest, but I've tested the same
with the latest from pve-no-subscription as well.
Best regards,
Adrian Costin
In the git version:
- If libiscsi1 was renamed to libiscsi2, then it needs a Replaces:
libiscsi1 field or it won't install correctly
- pve-qemu-kvm needs to depend on libiscsi2 rather than libiscsi1
Just my findings when trying to compile qemu 2.0 with the latest libiscsi.
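In debian/control terms, the two points above would look roughly like this (a sketch under the assumption that the soname bump renamed the binary package; the real packaging may also need Breaks or different version constraints):

```
Package: libiscsi2
Replaces: libiscsi1
Conflicts: libiscsi1

Package: pve-qemu-kvm
Depends: libiscsi2
```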
Best regards,
Adrian Costin
remains:
scsihw: lsi
scsihw: lsi53c810
scsihw: pvscsi
With scsihw: virtio-scsi-pci or megasas, SeaBIOS detects the drive and
can start grub from it.
Maybe we need an updated SeaBIOS as well?
Best regards,
Adrian Costin
Infiniband
which degrades performance. I've tested SRP on our network and it gives
at least a 100% improvement over the current solution.
Best regards,
Adrian Costin
Already in git:
commit 082e79f35b2f7b75862dc3014fb7de8e65fa76c6
Sorry, I didn't see it. It's not visible here:
https://git.proxmox.com/?p=pve-storage.git;a=summary
Best regards,
Adrian Costin
monster though;-)
It's not as fast as VirtIO, but it's definitely better than IDE.
Maybe you can try with qemu 2.0?
(I can build it for you if you want.)
I can definitely test with qemu 2.0. Are there packages available?
Best regards,
Adrian Costin
,romfile=,mac=82:D3:92:5F:29:5A,netdev=net0,bus=pci.0,addr=0x12,id=net0
Best regards,
Adrian Costin
2.0.873-3
amd64  High performance, transport independent iSCSI implementation
Best regards,
Adrian Costin
local SCSI drive works.
Best regards,
Adrian Costin
--
# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
on an
already installed VM (using VirtIO or IDE which is then switched to
SCSI).
Best regards,
Adrian Costin
I've just tried MegaRAID and it doesn't work either. Basically I've
tried all the SCSI HW types and the same thing happens.
Best regards,
Adrian Costin
want to take the performance penalty of running with IDE.
Best regards,
Adrian Costin
to keep all the images inside a secondary
zfs.
Best regards,
Adrian Costin
this.
This is the only problem. I've been using this config in production
with no other issues for a while. VMs start, disk creation / deletion
works fine (if only one disk).
Best regards,
Adrian Costin
I was using version 3.0-19. I've manually applied the diff from git
and indeed it fixes the problem.
Best regards,
Adrian Costin
Also no crashes here:
Xeon 5500 and 5600 series, and Xeon E3-12XX.
Best regards,
Adrian Costin
SSH stack and running the zfs/zpool commands directly.
Best regards,
Adrian Costin
can install zfs on Proxmox by simply adding
the Ubuntu PPA and doing apt-get install ubuntu-zfs.
Best regards,
Adrian Costin
I need to find a bug in the ZFSPlugin.
The plugin should be named something like RemoteZFSPlugin, as otherwise
it could be confused with a local zfs plugin.
I've already made one and I'm thinking of submitting it after a bit
more testing (if people are interested that is).
Best regards,
Adrian Costin
the config file.
Best regards,
Adrian Costin
Best regards,
Adrian Costin
On Wed, Apr 17, 2013 at 3:29 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 17.04.2013 14:17, Dietmar Maurer wrote:
Nowhere ;-) How about just returning the counter values for the correct
tap device through the API?
So it is basically:
1.) a wrapper
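A wrapper along the lines suggested above could be sketched like this (a hypothetical helper, not PVE code, reading the per-interface counters from /proc/net/dev using the kernel's field layout):

```python
def tap_counters(ifname, proc_path="/proc/net/dev"):
    """Return rx/tx byte counters for one tap interface from /proc/net/dev."""
    with open(proc_path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            name, rest = line.split(":", 1)
            if name.strip() == ifname:
                fields = rest.split()
                # /proc/net/dev layout: fields 0-7 are RX, 8-15 are TX;
                # the byte counter comes first in each half
                return {"rx_bytes": int(fields[0]), "tx_bytes": int(fields[8])}
    return None  # interface not found
```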