Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs

2015-02-12 Thread Adrian Costin
 
 AFAIK having a setting to control whether auto-import of the pool is
 desirable would be a plus. In some situations the import/export of the pool
 is controlled by other means, and an accidental import of the pool may be a
 destructive action (i.e. when the pool comes from a shared medium like
 iSCSI, and thus should not be mounted by two nodes at the same time).

I agree. Should I add another parameter for this? If yes, should it
default to auto-import, or not?
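
Something along these lines is what I have in mind -- an untested sketch, with
the option name "autoimport" and the properties()/options() wiring being my
assumption of how it could look:

    # Hypothetical 'autoimport' flag for storage.cfg; the idea is that the
    # default keeps today's behaviour (import on activation).
    sub properties {
        return {
            autoimport => {
                description => "Automatically import the zpool on storage activation.",
                type => 'boolean',
            },
        };
    }

    # in options():  autoimport => { optional => 1 },
    #
    # and in activate_storage(), before attempting the import:
    # return 1 if defined($scfg->{autoimport}) && !$scfg->{autoimport};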


Best regards,
Adrian Costin



[pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs

2015-02-12 Thread Adrian Costin
- Moved the zpool import handling from zfs_request() into its own
zpool_request() function
- activate_storage() now uses "zfs list" to check whether the zpool is imported
- Import only the configured pool, not all accessible pools

Signed-off-by: Adrian Costin adr...@goat.fish
---
 PVE/Storage/ZFSPoolPlugin.pm | 39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..3fc2978 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -149,16 +149,27 @@ sub zfs_request {
 
     $timeout = 5 if !$timeout;
 
-    my $cmd = [];
+    my $cmd = ['zfs', $method, @params];
 
-    if ($method eq 'zpool_list') {
-        push @$cmd, 'zpool', 'list';
-    } else {
-        push @$cmd, 'zfs', $method;
-    }
+    my $msg = '';
+
+    my $output = sub {
+        my $line = shift;
+        $msg .= "$line\n";
+    };
+
+    run_command($cmd, outfunc => $output, timeout => $timeout);
+
+    return $msg;
+}
+
+sub zpool_request {
+    my ($class, $scfg, $timeout, $method, @params) = @_;
+
+    $timeout = 5 if !$timeout;
+
+    my $cmd = ['zpool', $method, @params];
 
-    push @$cmd, @params;
- 
     my $msg = '';
 
     my $output = sub {
@@ -428,12 +439,16 @@ sub volume_rollback_is_possible {
 sub activate_storage {
     my ($class, $storeid, $scfg, $cache) = @_;
 
-    my @param = ('-o', 'name', '-H');
+    my @param = ('-o', 'name', '-H', $scfg->{'pool'});
+
+    my $text = zfs_request($class, $scfg, undef, 'list', @param);
 
-    my $text = zfs_request($class, $scfg, undef, 'zpool_list', @param);
- 
     if ($text !~ $scfg->{pool}) {
-        run_command("zpool import -d /dev/disk/by-id/ -a");
+        my ($pool_name) = $scfg->{pool} =~ /([^\/]+)/;
+
+        my @import_params = ('-d', '/dev/disk/by-id/', $pool_name);
+
+        zpool_request($class, $scfg, undef, 'import', @import_params);
     }
     return 1;
 }
-- 
1.9.3 (Apple Git-50)
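
For anyone skimming the import hunk: the ([^\/]+) capture keeps only the
top-level pool name of a possibly nested dataset, so a config like
"pool storage/kvm" imports just "storage". A quick illustration:

    my $pool = 'storage/kvm';              # nested dataset from storage.cfg
    my ($pool_name) = $pool =~ /([^\/]+)/; # everything up to the first '/'
    print "$pool_name\n";                  # prints "storage" -- the zpool to import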



Re: [pve-devel] Spice problems with intermediate certificates

2014-08-26 Thread Adrian Costin
 Please remove the private key here!

I guess it wasn't necessary. I've removed it and everything seems to work. 

Best regards,
Adrian Costin


Re: [pve-devel] Spice problems with intermediate certificates

2014-08-25 Thread Adrian Costin
 Anybody successfully using StartCom Certification Authority with
 StartCom Class 2 Primary Intermediate Server CA?

Yes. I've done the following:

1. Private key in: /etc/pve/local/pve-ssl.key and /etc/pve/pve-www.key
2. Intermediate cert file in: /etc/pve/pve-root-ca.pem
3. /etc/pve/local/pve-ssl.pem contains the following:

-BEGIN CERTIFICATE-
[My Cert]
-END CERTIFICATE-
-BEGIN CERTIFICATE-
[Intermediate cert]
-END CERTIFICATE-
-BEGIN RSA PRIVATE KEY-
[Private key]
-END RSA PRIVATE KEY-


Best regards,
Adrian Costin


[pve-devel] strange API behaviour for vncproxy

2014-08-15 Thread Adrian Costin
Hi,

I'm trying to create a VNC ticket using the API and have encountered
strange behaviour regarding the 'localhost' alias for the node name.

# pvesh create /nodes/localhost/qemu/924/vncproxy
no such VM ('924')
# pvesh create /nodes/ro4/qemu/924/vncproxy
200 OK
{
   "cert" : "...",
   "port" : 5904,
   "ticket" : "...",
   "upid" : "UPID:ro4:...",
   "user" : "root@pam"
}



This is happening for both openvz and qemu. All other commands are
accessible by using localhost.

I'm running Proxmox as a stand-alone server and it's much more convenient
to access all the data using 'localhost' rather than doing an extra GET to
find out the node name.

Is this a bug, or is it the intended behaviour?

I'm running the latest packages from pvetest, but I've tested the same
with the latest from pve-no-subscription as well.

Best regards,
Adrian Costin




[pve-devel] libiscsi / pve-qemu-kvm conflict

2014-05-08 Thread Adrian Costin
In the git version:

- If libiscsi1 was renamed to libiscsi2, then the package needs a "Replaces:
libiscsi1" or it won't install correctly
- pve-qemu-kvm needs to depend on libiscsi2 rather than libiscsi1

Just my findings when trying to compile qemu 2.0 with the latest libiscsi.

Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-05 Thread Adrian Costin
 I have sent 2 patches, to update libiscsi and afterwards qemu-kvm.

 can you test them ?

I can confirm that using the 2 packages (libiscsi and qemu-kvm) solves
the problem of the guest kernel hanging when loading modules.

The SeaBIOS problem of not recognizing the disks with the
following drivers still remains:

scsihw: lsi
scsihw: lsi53c810
scsihw: pvscsi

Using scsihw: virtio-scsi-pci or megasas, SeaBIOS detects the drive and
can start GRUB from it.

Maybe we need an updated SeaBIOS as well?

Best regards,
Adrian Costin


[pve-devel] zfs plugin improvements

2014-05-05 Thread Adrian Costin
Hi,

I would like to make the following improvements to the zfs plugin. I
would appreciate any comments:

1. add a parameter to storage.cfg called sparse which would create
sparse zvols if set to true. This can default to false for
compatibility with the current version (rough sketch below)

2. add SRP support.

I was thinking SRP could be added as a separate iscsiprovider even
though the protocol is not actually iSCSI.

SRP is the SCSI RDMA Protocol, which is supported by Infiniband hardware
(and some 10G Ethernet adapters). We're currently running an
Infiniband network for storage and using iSCSI over IP over Infiniband,
which degrades performance. I've tested SRP on our network and it gives
at least a 100% improvement over the current solution.
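
For the sparse option, something like this is what I'm thinking of -- an
untested sketch against the alloc path, assuming the existing zfs_request()
helper and a new $scfg->{sparse} flag:

    # Prepend -s (sparse, i.e. no reservation) only when the storage
    # definition asks for it; everything else stays as it is today.
    my @params = ('-b', $scfg->{blocksize}, '-V', "${size}k",
                  "$scfg->{pool}/$volname");
    unshift @params, '-s' if $scfg->{sparse};
    $class->zfs_request($scfg, undef, 'create', @params);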

Best regards,
Adrian Costin


Re: [pve-devel] zfs plugin improvements

2014-05-05 Thread Adrian Costin
 Already in git:
 commit 082e79f35b2f7b75862dc3014fb7de8e65fa76c6

Sorry, I didn't see it. It's not visible here:

https://git.proxmox.com/?p=pve-storage.git;a=summary

Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-05 Thread Adrian Costin
 But I think you should go to virtio-scsi for best performance anyway.

That's exactly what I intend to do.

In the meantime I've successfully tested Windows 7 and Windows 2008,
which both work fine.

 Have tested with CentOS-6.5 and I can confirm that it works with virtio
 and megaraid. No speed monster though;-)

It's not as fast as VirtIO, but it's definitely better than IDE.

 maybe can you try with qemu 2.0 ?
 (I can built it for you if you want).

I can definitely test with qemu 2.0. Are there packages available?

Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-04 Thread Adrian Costin
I did in the first email:

# qm showcmd 1476
/usr/bin/kvm -id 1476 -chardev
socket,id=qmp,path=/var/run/
qemu-server/1476.qmp,server,nowait -mon
chardev=qmp,mode=control -vnc
unix:/var/run/qemu-server/1476.vnc,x509,password -pidfile
/var/run/qemu-server/1476.pid -daemonize -name X -smp
sockets=1,cores=2 -nodefaults -boot menu=on -vga cirrus -cpu
kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 1024 -device
piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device
usb-tablet,id=tablet,bus=uhci.0,port=1 -device
virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive
file=iscsi://10.37.64.2/iqn.2014-04.net.XXX.storage2/8,if=none,id=drive-scsi0,cache=writeback,aio=native
-device 
scsi-generic,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100
-netdev 
type=tap,id=net0,ifname=tap1476i0,script=/var/lib/qemu-server/pve-bridge,vhost=on
-device 
virtio-net-pci,romfile=,mac=82:D3:92:5F:29:5A,netdev=net0,bus=pci.0,addr=0x12,id=net0

Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-04 Thread Adrian Costin
 this is strange, I have never had any problem with scsi in the guest (virtio or
 lsi) and the zfs plugin.

Can you tell me what versions you use? On my server it's:

# dpkg -l | grep iscsi
ii  libiscsi1    1.9.0-1      amd64    iSCSI client shared library
ii  open-iscsi   2.0.873-3    amd64    High performance, transport independent iSCSI implementation

Best regards,
Adrian Costin


[pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-03 Thread Adrian Costin
Hi guys,

I'm trying to install a KVM guest using a SCSI drive and the installer
always hangs around the point where it loads the SCSI modules. I've tried
CentOS, Debian and Ubuntu with no luck.

I've also tried to install the guest using VirtIO or IDE and switch to
SCSI after the fact with the same result.

I'm running the latest packages with the 2.6.32-29-pve kernel and
pve-qemu-kvm 1.7-8, and using the ZFS plugin, which passes an iSCSI URL
to the KVM command. I've tried both the default and VirtIO SCSI HW.

Running the same guest with VirtIO, SATA or IDE works without any
problems, and a local qcow2 SCSI drive also works.


Best regards,
Adrian Costin

--

# pveversion  -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

# cat /etc/pve/qemu-server/1476.conf
balloon: 0
boot: cd
bootdisk: scsi0
cores: 2
hotplug: 1
memory: 1024
name: X
net0: virtio=82:D3:92:5F:29:5A,bridge=vmbr0
ostype: l26
scsi0: kvm:vm-1476-disk-2,cache=writeback,size=60G
scsihw: virtio-scsi-pci
sockets: 1

# qm showcmd 1476
/usr/bin/kvm -id 1476 -chardev
socket,id=qmp,path=/var/run/qemu-server/1476.qmp,server,nowait -mon
chardev=qmp,mode=control -vnc
unix:/var/run/qemu-server/1476.vnc,x509,password -pidfile
/var/run/qemu-server/1476.pid -daemonize -name X -smp
sockets=1,cores=2 -nodefaults -boot menu=on -vga cirrus -cpu
kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 1024 -device
piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device
usb-tablet,id=tablet,bus=uhci.0,port=1 -device
virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive
file=iscsi://10.37.64.2/iqn.2014-04.net.XXX.storage2/8,if=none,id=drive-scsi0,cache=writeback,aio=native
-device 
scsi-generic,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100
-netdev 
type=tap,id=net0,ifname=tap1476i0,script=/var/lib/qemu-server/pve-bridge,vhost=on
-device 
virtio-net-pci,romfile=,mac=82:D3:92:5F:29:5A,netdev=net0,bus=pci.0,addr=0x12,id=net0


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-03 Thread Adrian Costin
 What is the contents of your storage.cfg
 What kind of ZFS storage do you use? Linux, FreeBSD or Solaris based?

OmniOS with Comstar

..
zfs: kvm
blocksize 64k
target iqn.2014-04.net..storage2
pool storage/kvm
iscsiprovider comstar
portal 10.37.64.2
content images
..

 Will the VM boot if you use VirtIO or IDE?

Yes

 Is the SCSI drive available in the Seabios prompt? (Hit F12 when you
 see the Seabios logo in the console)

Yes. GRUB starts and loads the kernel and initrd. The VM hangs when
loading the SCSI drivers. This happens in the installer and also on an
already installed VM (using VirtIO or IDE, which is then switched to
SCSI).


Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-03 Thread Adrian Costin
 Are your proxmox host using test repo? Mine is using the enterprise
 repo.

Yes, I'm using pve-no-subscription, but I was using 2.6.32-28-pve
previously with the same problem. I just upgraded thinking it would
solve the problem. I'll try to downgrade pve-qemu-kvm to see if that
fixes the problem.

I've just tried MegaRAID and it doesn't work either. Basically I've
tried all the SCSI HW and the same thing happens.

Best regards,
Adrian Costin


Re: [pve-devel] KVM guest hangs with SCSI drive (ZFS)

2014-05-03 Thread Adrian Costin
 Well, we are facing two kinds of bugs:
 1) Seabios does not detect scsi disks if hw-controller is anything but
 MegaRAID or VirtIO.
 2) Using scsi disks and any hw-controller installation stops when the
 installer tries to initiate hardware. Eventually Proxmox stops the VM.

I've tested all the 2.6.32 pve-kernels back to 2.6.32-23-pve, the
3.10.0 kernel and the standard Debian 3.2.0-4 kernel with pve-qemu-kvm
1.7-8, 1.7-6, 1.7-4 and 1.4-17 (all combinations), with the same
result.

Also, using qcow2 on local ext4 (probably LVM too), neither problem is
present. SeaBIOS does recognize the drive with LSI, and the VM boots
and works fine regardless of the SCSI controller.


My goal is to run the VMs using the VirtIO SCSI controller to take
advantage of the TRIM/DISCARD feature with sparse/compressed ZVOLs.
I've tested the discard=on feature using IDE and it works just fine, but
I don't want to take the performance penalty of running with IDE.


Best regards,
Adrian Costin


[pve-devel] zfs plugin: creating 2 disks

2014-05-03 Thread Adrian Costin
Small bug when creating a second disk for the same VM on the same repo:

command '/usr/bin/ssh -o 'BatchMode=yes' -i
/etc/pve/priv/zfs/10.37.64.2_id_rsa root@10.37.64.2 zfs create -s -b
64k -V 33554432k storage/kvm/vm-1476-disk-1' failed: exit code 1 (500)

Basically it doesn't find that vm-1476-disk-1 already exists. Running
the command manually gives:

cannot create 'storage/kvm/vm-1476-disk-1': dataset already exists

This seems to be related to the fact that I'm using:

pool storage/kvm

in storage.cfg. If I use just the pool name it correctly increments the ID;
however, I need to keep all the images inside a secondary ZFS dataset.
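
If it helps, the fix probably boils down to searching for the next free name
under the full configured dataset instead of just the pool root; a rough,
self-contained sketch of that search (the helper name is made up):

    # $existing would come from "zfs list -H -o name" run against the
    # configured dataset (e.g. storage/kvm), not against the pool root.
    sub next_free_diskname {
        my ($existing, $vmid) = @_;
        my %used = map { $_ => 1 } @$existing;
        for my $i (1 .. 99) {
            my $name = "vm-$vmid-disk-$i";
            return $name if !$used{$name};
        }
        die "unable to find a free disk name for VM $vmid\n";
    }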

Best regards,
Adrian Costin


Re: [pve-devel] zfs plugin: creating 2 disks

2014-05-03 Thread Adrian Costin
 Yes, seems to remember that nested pools are not supported. I will see
 if I can find some time to look into this. I should think you could
 overcome this missing feature by creating a target which points
 directly to the nested pool. Purely speculating since I haven't tested
 this.

This is the only problem. I've been using this config in production
with no other issues for a while. VMs start, and disk creation/deletion
works fine (as long as there is only one disk).

Best regards,
Adrian Costin


Re: [pve-devel] zfs plugin: creating 2 disks

2014-05-03 Thread Adrian Costin
I was using version 3.0-19. I've manually applied the diff from git
and indeed it fixes the problem.

Best regards,
Adrian Costin


Re: [pve-devel] pve-kernel 2.6.32-28 crashes..?

2014-04-23 Thread Adrian Costin
Also no crashes here:

Xeon 5500 and 5600 series, and Xeon E3-12XX.

Best regards,
Adrian Costin


On Thu, Apr 24, 2014 at 12:28 AM, Adrian Costin adrian.cos...@gmail.com wrote:
 Also no crashes here:

 Xeon 5500 and 5600 series, and Xeon E3-12XX.

 Best regards,
 Adrian Costin


 On Thu, Apr 24, 2014 at 12:21 AM, Lindsay Mathieson
 lindsay.mathie...@gmail.com wrote:
 On Wed, 23 Apr 2014 04:44:06 PM Alexandre DERUMIER wrote:
 Do you have notice something on your side ?


 No crashes here, two servers, CPU Xeon E5-2620
 --
 Lindsay



[pve-devel] ZFSOnLinux Support

2013-10-29 Thread Adrian Costin
Hi,

Since the new ZFS plugin is out, I was thinking of supporting local
ZFS via ZFSOnLinux.

ZoL works fine on Debian and we already use it in production.

This could be implemented in the same ZFS plugin, by checking whether this is
a local ZFS (maybe check if the host is 'localhost') and, if so, skipping the
entire SSH stack and running the zfs/zpool commands directly.
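
Roughly, the dispatch could look like this -- just a sketch, with the helper
name and the reuse of the 'portal' field being my own assumptions:

    use PVE::Tools qw(run_command);

    # Run zfs/zpool locally when the storage points at this node,
    # otherwise keep the existing SSH wrapper.
    sub zfs_exec {
        my ($scfg, @cmd) = @_;
        my $host = $scfg->{portal} || 'localhost';
        if ($host eq 'localhost' || $host eq '127.0.0.1') {
            return run_command([@cmd]);    # ZFSOnLinux on the PVE host itself
        }
        my @ssh = ('/usr/bin/ssh', '-o', 'BatchMode=yes',
                   '-i', "/etc/pve/priv/zfs/${host}_id_rsa", "root\@$host");
        return run_command([@ssh, @cmd]);
    }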

Best regards,
Adrian Costin


Re: [pve-devel] Fw: Generic ZFS rest API

2013-05-21 Thread Adrian Costin
 The problem with zfs on linux is that it is fuse based, meaning big cpu overhead
 and bad io performance.

zfsonlinux is not fuse based!

The licensing problem only restricts distributing the ZFS code with
the GPL Linux kernel. If the user installs the ZFS code himself,
there's no conflict. You can install ZFS on Proxmox by simply adding
the Ubuntu PPA and doing apt-get install ubuntu-zfs.

Best regards,
Adrian Costin


Re: [pve-devel] Debug in gui

2013-05-21 Thread Adrian Costin
 I need to find a bug in the ZFSPlugin.

The plugin should be named something like RemoteZFSPlugin, as otherwise it
could be confused with a local ZFS plugin.

I've already made one and I'm thinking of submitting it after a bit
more testing (if people are interested, that is).

Best regards,
Adrian Costin


Re: [pve-devel] Disk removed from config but not named unused

2013-04-20 Thread Adrian Costin
 Do you have a re-producible test case?
 Ah i got it. That VM / Disk had a parent snapshot...
 mhm but even than shouldn't it be listed as unused0?

I've noticed this as well when I was playing with snapshots. If the
disk has a snapshot, it won't be placed as unusedX and is completely
removed from the config file.

Best regards,
Adrian Costin


Re: [pve-devel] Count monthly traffic

2013-04-17 Thread Adrian Costin
I'm sorry for dropping in, but isn't Proxmox already counting traffic in
its internal RRD? Couldn't the rrddata API call be used to retrieve the
data in the RRD externally and process it to count the total bandwidth used
in one month?

Maybe I've misunderstood what the issue is here...
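
To make the idea concrete, the external side could be as simple as this -- a
sketch only, assuming the caller has already fetched and JSON-decoded the VM's
rrddata for the monthly timeframe into $rrddata, and that the sample interval
is known:

    # rrddata entries carry averaged bytes/s rates, so a monthly total is
    # roughly rate * sample interval, summed over all samples.
    my $step = 3600;    # assumed sample interval of the monthly timeframe
    my ($in, $out) = (0, 0);
    for my $sample (@$rrddata) {    # $rrddata: decoded result of the rrddata call
        $in  += ($sample->{netin}  || 0) * $step;
        $out += ($sample->{netout} || 0) * $step;
    }
    printf "netin: %.1f GiB, netout: %.1f GiB\n", $in / 2**30, $out / 2**30;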



Best regards,
Adrian Costin


On Wed, Apr 17, 2013 at 3:29 PM, Stefan Priebe - Profihost AG 
s.pri...@profihost.ag wrote:

 On 17.04.2013 at 14:17, Dietmar Maurer wrote:
  Nowhere ;-) how about just returning the counter values for the correct
  tap device through the API?
 
  So it is basically:
  1.) a wrapper from netX to the correct tap
  2.) query tap counter input / output values
  3.) allow querying this through the API
 
  So it is at least possible to implement traffic accounting in external
  software. You just have to query the API every X seconds and detect resets
  yourself. It then acts basically like SNMP traffic counters in switches.
 
  sounds reasonable.

 Then let's go this way. It's much simpler than adding RRD.

 So the question is: should this be a completely new call, or do you want
 to add a new hash key to sub vmstatus?

 this could be

 my $netdev = PVE::ProcFSTools::read_proc_net_dev();
 foreach my $dev (keys %$netdev) {
     next if $dev !~ m/^tap([1-9]\d*)i/;
     my $vmid = $1;
     my $d = $res->{$vmid};
     next if !$d;

     $d->{netout} += $netdev->{$dev}->{receive};
     $d->{netin} += $netdev->{$dev}->{transmit};
 }

 converted to:

 my $netdev = PVE::ProcFSTools::read_proc_net_dev();
 foreach my $dev (keys %$netdev) {
     next if $dev !~ m/^tap([1-9]\d*)i(\d+)/;
     my $vmid = $1;
     my $netid = $2;
     my $d = $res->{$vmid};
     next if !$d;

     $d->{netout} += $netdev->{$dev}->{receive};
     $d->{netin} += $netdev->{$dev}->{transmit};
     $d->{traffic}{'net'.$netid}{netout} = $netdev->{$dev}{receive};
     $d->{traffic}{'net'.$netid}{netin} = $netdev->{$dev}{transmit};
 }


 Stefan
