Nice save. And thanks for the detailed info.
On Thursday, June 18, 2020, Lindsay Mathieson
wrote:
> Clean Nautilus install I set up last week
>
> * 5 Proxmox nodes
>   o All on latest updates via the no-subscription channel
> * 18 OSDs
> * 3 Managers
> * 3 Monitors
> * Cluster health good
>
Hi Marco
The physical NIC type is irrelevant. If the guest doesn't see the Realtek NICs
then most likely it's a driver issue in the guest.
I'd try to get the issue with using Realtek cards addressed on the OPNsense side.
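If switching the emulated NIC model is an option, something like this changes
net0 to virtio (the VMID and bridge are just placeholders, and it will generate
a new MAC unless you pass one):

qm set 101 -net0 virtio,bridge=vmbr0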
Brian
On Thursday, January 2, 2020, Bertorello, Marco
wrote:
> Dear PVE Users,
Hello
You would need to be a bit more verbose if you expect help.
Version of Proxmox?
What panics? Host, guest?
Guest OS?
Server hardware?
Disks?
Any logs?
As much relevant info as you can provide...
On Tuesday, October 29, 2019, Humberto Jose De Sousa via pve-user <
Have a mon that runs somewhere that isn't either of those rooms.
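That way mon quorum can survive losing either room. Assuming the third location
has a node joined to the same PVE cluster, creating the extra mon there is just:

pveceph mon create

(or pveceph createmon on older releases).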
On Friday, September 13, 2019, Fabrizio Cuseo wrote:
> Hello.
> I am planning a 6 hosts cluster.
>
> 3 hosts are located in the CedA room
> 3 hosts are located in the CedB room
>
> the two rooms are connected with a 2 x 10Gbit
You will have to go to the latest 3.x, then to 4.x, then to 5.x - the upgrade
should be fine. But if it's quicker to back up the VMs and restore, and some
downtime is acceptable (there's downtime with an in-place upgrade on a single
box anyway), then that may be the easier option for you.
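As a rough sketch of the backup/restore route (VMID, storage names and the dump
filename are placeholders):

vzdump 100 --mode stop --compress lzo --storage local
# copy the resulting vzdump-qemu-100-*.vma.lzo to the new install, then:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage local-lvm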
On Thu, Dec 27, 2018 at 3:04 PM Gerald Brandt
Thanks for the responses to an essentially off-topic post.
Best Regards,
---
Brian Sidebotham
On Fri, 14 Sep 2018 at 19:07, Josh Knight wrote:
> This looks to be expected. The operational state is provided by the
> kernel/driver for the interface. For these virtual interfaces
a-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
---
Brian Sidebotham
Wanless Systems Limited
e: brian@wanless.systems
It's really not a great idea, because the larger drives will tend to
get more writes, so your performance won't be as good as with drives that are
all the same size, where the writes are distributed more evenly.
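Back-of-the-envelope: CRUSH weight defaults to roughly the disk size in TiB, so
a 6TB OSD gets about twice the weight (and therefore roughly twice the PGs and
writes) of a 3TB one. You can see how uneven it ends up with:

ceph osd df tree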
On Wed, Aug 22, 2018 at 8:05 PM Gilberto Nunes
wrote:
>
> Hi there
>
>
> It's possible create a Ceph
Crikey, that's an old version - ubcleand seems to be something to do with OpenVZ.
On Sat, Mar 31, 2018 at 8:00 PM, F00b 4rch wrote:
> Hi all,
>
> Does someone know what the ubcleand process does?
> I see it running on old Proxmox 3 and I can't find any info on it in the
> man pages or on the net.
>
Hi Gregor,
I think you will need to edit the VM config under /etc/pve/qemu-server to do this.
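As a sketch, assuming the rescued image ends up at
/var/lib/vz/images/102/vm-102-disk-1.raw and the target VMID is 102 (both
placeholders): add a line like this to /etc/pve/qemu-server/102.conf

scsi1: local:102/vm-102-disk-1.raw

then run qm rescan --vmid 102 so the size gets picked up.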
On Mon, Feb 26, 2018 at 4:50 AM, Gregor Burck wrote:
> Hi,
>
> I was able to rescue an image from a damaged PVE host with dd (see other
> thread)
>
> I've defined VMs A and B. Because B
Hi Mike,
I haven't installed Luminous yet, but if they are doing what they did
with previous packages then they're just using the standard Ceph repo. The
source code will be https://github.com/ceph/ceph/tree/v12.0.0 -
replace 12.0.0 with the exact version of Luminous currently in
the repo and you should have the matching source tree.
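To get the exact version string to plug into that URL (assuming the ceph package
is already in your apt sources):

apt-cache policy ceph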
Thanks,
Brian
I was able to use the GUI to add a USB device and subsequently mount it on a
guest QEMU Virtual Machine, but those same options are not present in the web
UI for containers, so I found some related documentation here:
https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines
I followed
you could probably cat /etc/network/interfaces.d/* >
/etc/network/interfaces as a horrible hack.
On Fri, May 19, 2017 at 1:03 PM, Eugen Mayer wrote:
> Hello,
>
> due to the nature of deploying with chef and configuring my network,
> interfaces, bridges there,
It would make sense to have a TASK WARN maybe - that would certainly make
you check the backup output. Or perhaps try to delete the disk first;
if that fails, then do nothing else in the task.
Are you using KRBD? It's usually the RBD being mounted with the kernel
module on a different box than the one
This is probably by design..
On Thu, Dec 15, 2016 at 11:48 AM, Marco Gaiarin wrote:
>
> Sorry, I came back to this topic because I've done some more tests.
>
> Seems that the 'pveceph' tool has some trouble creating an OSD with the journal
> on a ''nonstandard'' partition, for example on a
Don't install the licence until you're fully comfortable that you have
everything working the way you want it and you won't have any issues!
You can use the no-subscription repo for as long as you need.
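For reference, on PVE 4.x / jessie the no-subscription repo is just this apt
entry (newer releases use a slightly different path, see the
Package_Repositories wiki page):

deb http://download.proxmox.com/debian jessie pve-no-subscription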
On Sat, Nov 19, 2016 at 2:38 PM, Marcel van Leeuwen
wrote:
> Hmmm, also true.
Hi Marcel,
It's all explained here: https://pve.proxmox.com/wiki/Package_Repositories
Cheers
On Sat, Nov 19, 2016 at 11:14 AM, Marcel van Leeuwen
wrote:
> Yeah, I agree it’s normally not necessary to do re-installs. The reason I did
> I was messing with remote NFS
Hi Mikhail
The guest that is running - what type of controller / cache?
Thanks
On Tue, Nov 15, 2016 at 10:05 PM, Mikhail <m...@plus-plus.su> wrote:
> On 11/16/2016 12:43 AM, Brian :: wrote:
>> What type of disk controller and what caching mode are you using?
>
> The st
What type of disk controller and what caching mode are you using?
On Tue, Nov 15, 2016 at 9:36 PM, Mikhail <m...@plus-plus.su> wrote:
> On 11/16/2016 12:33 AM, Brian :: wrote:
>> 90.4 MB/s isn't that far off.
>
> Hello,
>
> Yes, but I'm only able to get these results
Ignore my reply - just reread the thread fully :)
NFS should work just fine.. no idea why you are seeing those lousy speeds.
On Tue, Nov 15, 2016 at 9:33 PM, Brian :: <b...@iptel.co> wrote:
> 90.4 MB/s isn't that far off.
>
>
> On Tue, Nov 15, 2016 at 5:25 PM, Mikhail <m..
90.4 MB/s isn't that far off.
On Tue, Nov 15, 2016 at 5:25 PM, Mikhail wrote:
> On 11/15/2016 06:09 PM, Gerald Brandt wrote:
>> I don't know if it helps, but I always switch to NFSv4.
>
> Thanks for the tip. This did not help. I also tried with various caching
> options
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/
24 hours down so far. Can't wait to read the RFO
On Wed, Oct 12, 2016 at 4:21 PM, Karsten Becker
wrote:
> Hi,
>
> I can confirm that I was not able to call the documentation
Hi Lindsay
I think with clusters with a VM-type workload, at the scale that
Proxmox users tend to build (< 20 OSD servers), a cache tier is adding a
layer of complexity that isn't going to pay back. If you want decent
IOPS / throughput at this scale with Ceph, no spinning rust allowed
anywhere :)
Jesus - someone got out of bed on the wrong side today!
I've just been working on something that had had me stuck in the 4.3
UI for the past 48 hours on and off.
Personally I like it, but that's just my opinion - and I did give the
guys feedback after I upgraded.
I'm sure some things can be
Hi Alexandre,
If the guests are Linux you could try using the SCSI driver with discard enabled;
running fstrim -v / may then free the unused space on the underlying FS.
I don't use LVM but this certainly works with other types of storage.
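Rough sketch, with placeholder VMID/storage names: in /etc/pve/qemu-server/100.conf
the disk line would look something like

scsi0: local-lvm:vm-100-disk-1,discard=on

and then fstrim -v / inside the guest pushes the freed blocks back down to the storage.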
On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
Thanks Fabian.
On Fri, Sep 30, 2016 at 11:51 AM, Fabian Grünbichler
<f.gruenbich...@proxmox.com> wrote:
> On Fri, Sep 30, 2016 at 11:36:56AM +0100, Brian :: wrote:
>> Hi guys
>>
>> This doesn't seem to work for me..
>>
>> I get blank screen in disks se
Hi guys
This doesn't seem to work for me..
I get a blank screen in the disks section of the GUI.
The command you use in Diskmanage.pm translates to:
/usr/sbin/smartctl -a -f brief /dev/device
If I run that as /usr/sbin/smartctl -a -f brief /dev/sda I get tons
of info about the device, so that works.
Anyone looked at it or considered using it with Proxmox?
https://storpool.com/
It's pretty straightforward. If you have 6TB and you are using size =
3 in Ceph, you have 2TB (give or take) of usable storage.
On Fri, May 27, 2016 at 9:27 PM, Daniel Eschner wrote:
> Hi all,
>
> I am playing with Ceph and there are some things I don't understand.
> The Proxmox
forget a thing:
> The guests' drives live in a ZFS store on the RAID systems.
> Maybe it's important to know this.
>
> I'm thinking of switching to the Ubuntu virtual kernel, but I'm not quite sure
> if it will help or break even more stuff.
>
> greetings,
> chris
>
>
> A
Hi Mohamed
10Gbps or faster at a minimum or you will have pain. Even with 4
nodes and 4 spinner disks in each node you will be maxing out a
1Gbps network. For any backfills or when adding new OSDs you don't want to
be waiting on 1Gbps ethernet speeds.
Dedicated 10Gbps network for ceph
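Rough numbers, assuming ~120 MB/s per spinner: 4 disks x 120 MB/s is ~480 MB/s
per node, while 1Gbps tops out around 125 MB/s, so a single node can fill the
link several times over during a backfill.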
Seemingly it's not an error; it says that the CPU doesn't support
performance counters.
QEMU doesn't support them - if you don't need them, don't worry about it.
On Sun, Apr 17, 2016 at 6:04 PM, sebast...@debianfan.de
wrote:
> Hello,
>
> while starting a debian 8 KVM,
get this patch into the pve 4 kernel?
https://forum.proxmox.com/threads/kernel-ipmi_si-ipmi_si-0-could-not-set-the-global-enables-0xcc.24920/
Thanks,
Brian
so causes rsync
--one-file-system to treat it as a separate filesystem.
So today I have my KVM images run off an ext4 disk but my lxc containers
are on my main btrfs raid10.
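For example (paths are made up), -x / --one-file-system makes a backup like

rsync -aHx /var/lib/lxc/ /mnt/backup/lxc/

stop at each btrfs subvolume boundary, since subvolumes show up with their own
device IDs.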
Others may have better insight.
Brian
Has something changed with the USB support in the newer versions? Is that
documentation obsolete and is there maybe a new method of doing this? Or is this a
bug that was introduced in a newer version?
Thanks,
Brian
CentOS 6 isn't EOL until 2020 and there is some reluctance in our group
around some of the changes in RHEL/Cent 7.
Thanks,
Brian
On Wed, Apr 15, 2015 at 1:27 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Yes, I do have the udev rules in place and when I increase the memory in
the web gui
memory but it doesn't seem to take it with the way Proxmox handles it at
boot.
Thanks!
Brian
On Sun, Apr 12, 2015 at 11:18 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
do you have the udev rules
/lib/udev/rules.d/80-hotplug-cpu-mem.rules (not sure about centos path)
SUBSYSTEM
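For anyone else looking, the rules in that file look roughly like this (this is
from memory, check the Hotplug wiki page for the current version):

SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"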
Thanks everyone for the replies. I've played with the LVM Filters before
but it didn't occur to me to do that in this scenario. I'll give that a
shot!
Thanks,
Brian
On Sun, Apr 12, 2015 at 11:26 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
I have had a customer with same
for the follow up!
Brian
On Sun, Mar 8, 2015 at 12:16 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
I updated the wiki last month about memory/CPU hotplug
https://pve.proxmox.com/wiki/Hotplug_%28qemu_disk,nic,cpu,memory%29
- Original Mail -
From: Brian Hart brianh
to it at startup.
Has anybody else seen this? Is there something special about the hotplug
such that this is expected and I'm just not aware of it?
Thanks,
Brian Hart
with OVS and the other without
and they were able to talk for me without issue. My problem came down to
an MTU mismatch between the nodes.
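For anyone hitting the same thing, comparing the MTU on each node is a quick
check (the interface name is just an example):

ip link show vmbr0 | grep mtu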
Brian
On Tue, Jan 27, 2015 at 2:40 AM, Sten Aus sten@eenet.ee wrote:
Hi
I have a problem when implementing Open vSwitch. When I configure my one
node
ones so I won't be leaving it as a two node setup for long. Thanks for the
advice!
Brian
On Mon, Jan 19, 2015 at 1:01 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
maybe it's multicast filtering related.
OVS doesn't support IGMP snooping (multicast filtering), and also doesn't
find a better opportunity to upgrade to 2.1?
Thanks,
Brian Hart