I'm running a pair of Debian wheezy machines A and B with Xen 4.1 and a
wheezy-backport kernel 3.14.7 to have drbd 8.4.3.
Both machines are interconnected with a dedicated 1GBit Intel 82574L
(e1000e) drbd link, mtu=9216; each machine has storage behind a fast
battery-backed caching RAID controller.
There are 3 drbd
On 04.07.14 14:27, Lars Ellenberg wrote:
> On Mon, Jun 30, 2014 at 04:25:01PM +0200, Andreas Pflug wrote:
>> I'm running a pair of Debian wheezy machines A and B with Xen 4.1 and a
>> wheezy-backport kernel 3.14.7 to have drbd 8.4.3.
>> Both machines are interconnected
On 17.07.14 15:00, Lars Ellenberg wrote:
> On Wed, Jul 16, 2014 at 01:02:11PM -0400, Ian MacDonald wrote:
>> Lars,
>>
>> thank you for the useful insight, I made some other observations and have a
>> few questions below.
>>
>> cheers,
>> Ian
>>
>> On my list of test scenarios is to test with some
Sounds like you want to run RAID1 (drbd) over RAID0 (ssd+md?). This is
more fragile than RAID0 over RAID1, so you might consider implementing
RAID0 using DM over multiple drbd devices, each mirroring a single ssd.
Regards,
Andreas
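The RAID0-over-DRBD layout suggested above can be sketched with a device-mapper "striped" table. This is a minimal sketch, not from the thread: the device names, the target name and the 128KiB chunk size are all assumptions.

```shell
# Stripe two DRBD devices (each mirroring a single ssd) into one
# RAID0-like volume via device-mapper. Device names are examples.
# Table format: <start> <length-in-sectors> striped <#devices> <chunk-in-sectors> <dev> <offset> ...
SECTORS=$(( $(blockdev --getsz /dev/drbd0) + $(blockdev --getsz /dev/drbd1) ))
# 256 sectors = 128KiB chunk
dmsetup create drbd-stripe --table \
  "0 $SECTORS striped 2 256 /dev/drbd0 0 /dev/drbd1 0"
```

In practice the total length must be a multiple of chunk size times device count, so it may need rounding down; a striped LVM LV over the drbd devices achieves the same with less manual bookkeeping.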
On 07/23/14 17:45, Robinson, Eric wrote:
We have many DRBD cl
On 16.04.15 at 04:42, Digimer wrote:
> I ran into an odd problem today that turned out to be a network issue. I
> couldn't find much when I was googling the error, so here is a post for
> the archives.
>
> ...
>
> Turns out, I had set the MTU=9000 on the network interfaces (bonded
> pairs) but had
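The MTU pitfall quoted above is easy to check end-to-end. A quick sanity test, assuming a 9000-byte MTU and IPv4 (the interface name and peer address are examples):

```shell
# Verify the interface really carries the intended MTU
ip link show bond0 | grep -o 'mtu [0-9]*'
# Send an unfragmentable ICMP payload sized exactly for MTU 9000:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes
ping -c 3 -M do -s 8972 192.168.10.2
```

If any hop (switch, bond slave, peer NIC) is still at MTU 1500, the ping fails with "message too long" instead of silently fragmenting, exposing exactly the kind of mismatch that disturbs DRBD replication traffic.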
Running two Debian 9.3 machines, directly connected via on-board Intel
X540 10GBit NICs, with 15 drbd devices.
When running a 4.14.2 kernel (from sid) or a 4.13.13 kernel (from
stretch-backports), I see several "Wrong magic value 0x4c414245 in
protocol version 101" per day issued by the secondary, wit
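One quick sanity check (my addition, not from the thread): the unexpected magic value is printable ASCII, which hints that the receiver saw plain text bytes where it expected a DRBD protocol header.

```shell
# Decode the "wrong magic" 0x4c414245 byte by byte to ASCII
# (0x4c='L', 0x41='A', 0x42='B', 0x45='E')
printf '\x4c\x41\x42\x45\n'
```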
On 09.01.18 at 16:24, Lars Ellenberg wrote:
> On Tue, Jan 09, 2018 at 03:36:34PM +0100, Lars Ellenberg wrote:
>> On Mon, Dec 25, 2017 at 03:19:42PM +0100, Andreas Pflug wrote:
>>> Running two Debian 9.3 machines, directly connected via 10GBit on-board
>>>
>>>
On 15.01.18 at 16:37, Andreas Pflug wrote:
> On 09.01.18 at 16:24, Lars Ellenberg wrote:
>> On Tue, Jan 09, 2018 at 03:36:34PM +0100, Lars Ellenberg wrote:
>>> On Mon, Dec 25, 2017 at 03:19:42PM +0100, Andreas Pflug wrote:
>>>> Running two Debian 9.3 machines, di
On 01.02.18 at 13:34, Lars Ellenberg wrote:
> On Tue, Jan 23, 2018 at 07:14:13PM +0100, Andreas Pflug wrote:
>> On 15.01.18 at 16:37, Andreas Pflug wrote:
>>> On 09.01.18 at 16:24, Lars Ellenberg wrote:
>>>> On Tue, Jan 09, 2018 at 03:36:34PM +0100, Lars Ellenbe
On 08.02.18 at 11:30, Andreas Pflug wrote:
> On 01.02.18 at 13:34, Lars Ellenberg wrote:
>> On Tue, Jan 23, 2018 at 07:14:13PM +0100, Andreas Pflug wrote:
>>> On 15.01.18 at 16:37, Andreas Pflug wrote:
>>>> On 09.01.18 at 16:24, Lars Ellenberg wrote:
>>>
Using DRBD 9.0.17-1: How can I advise a diskless node to use the second
interface?
I have two networks (no routing between them): the first has admin
traffic, and is used to register the nodes with the controller, which is
located on that network only. The second network is used for drbd
traffic,
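The reply further down in the archive points at the PrefNic property on the hidden DfltDisklessStorPool. A sketch of the commands involved (node name, interface name and address are examples, and the exact CLI syntax may differ between LINSTOR versions):

```shell
# Register the drbd-traffic interface on the node, then prefer it
# for the hidden storage pool used by diskless resources:
linstor node interface create nodeA drbdnet 10.1.0.11
linstor storage-pool set-property nodeA DfltDisklessStorPool PrefNic drbdnet
```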
On 10.05.19 at 15:35, Andreas Pflug wrote:
> Using DRBD 9.0.17-1: How can I advise a diskless node to use the second
> interface?
>
> I have two networks (no routing between them): the first has admin
> traffic, and is used to register the nodes with the controller, which is
&g
efNic to the DfltDisklessStorPool; it is a hidden
> pool used by default for creating diskless resources.
>
> Cheers!
>
> - kvaps
>
> On Fri, May 10, 2019 at 3:54 PM Andreas Pflug
> wrote:
>>
>>
>> On 10.05.19 at 15:35, Andreas Pflug wrote:
>>> Using DR
On 14.06.19 at 11:58, Robert Altnoeder wrote:
> On 6/14/19 1:13 AM, Jagdish kumarDaram wrote:
>> Hi,
>>
>> What are the selinux parameters that should be used for drbd_t on
>> centOS 7.x?
>
> Parameters? Do you mean the type enforcement rules involving the drbd_t
> type/domain?
> That would proba
Hi!
I run a 4-way drbd9 installation, with some 12 resources defined. Three
hosts have disks, the fourth is diskless.
For hardware replacement, one of the disk-equipped hosts was down for 15
hours or so. When switching it back on, there are multiple messages on the
console:
drbd vm-100-disk-1/0 drbd1000:
There are some posts to this list from the 18th and 19th which don't have
any reply yet.
On 18.06.19 at 12:56, Andreas Pflug wrote:
> Hi!
>
> I run a 4-way drbd9 installation, with some 12 resources defined. Three
> hosts have disks, the fourth is diskless.
>
> For hardware repl
Reading the docs, it is not clear to me when on-io-error is triggered,
i.e. whether it depends on disk-timeout or not.
If it does depend on disk-timeout, the warning about post-near-mortem
DMA transfers corrupting buffers will probably apply, leaving
call-local-io-error forcing a reboot as the only s
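For reference, the options in question live in the disk and handlers sections of a resource definition. A hedged example (the resource name and values are illustrative; disk-timeout is given in tenths of a second and defaults to 0, i.e. disabled):

```
resource r0 {
  disk {
    on-io-error  call-local-io-error;   # alternatives: detach, pass_on
    disk-timeout 100;                   # 10 seconds, in units of 0.1s
  }
  handlers {
    local-io-error "echo b > /proc/sysrq-trigger";   # example: immediate reboot
  }
}
```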
Just wondering about the version number:
Why not name this linstor-proxmox release with a major number of 6, to
indicate its compatibility with PVE6?
Regards,
Andreas
On 07.08.19 at 16:15, dridders-drbdu...@dridders.de wrote:
> Same for me on PVE6, no issues found, on/offline migration of disk
On 25.10.19 at 15:39, kvaps wrote:
> Hi, today we got an alert that some resources on three nodes went
> into the Outdated state.
> Nothing was changed; the issue occurred suddenly and is currently persisting.
>
> drbd version: 9.0.19-1
> kernel version: 4.15.18-12-pve
>
> The weird thing is that som
Under Proxmox 5.4 with the latest linstor, I shut down two VMs on a diskless
node. The disk resources stay stuck in the "DELETING" state, even after
reboot; drbdadm status doesn't show these resources.
Another VM that I live-migrated away from that node didn't leave remainders.
How can I resolve those
On 25.10.19 at 18:25, Andreas Pflug wrote:
> Under Proxmox 5.4 with the latest linstor, I shut down two VMs on a diskless
> node. The disk resources stay stuck in the "DELETING" state, even after
> reboot; drbdadm status doesn't show these resources.
> Another VM that I l
When rebooting a machine in a 3-way redundant drbd9.0.20-1 on Linux
5.0.21 (Proxmox 6.0-11), some resources fail to resync, either stuck as
"standalone" or at 97.3% or so.
Resource generation is done by linstor 1.2.0
I have:
hostD1 - secondary
hostD2 - secondary
hostD3 - secondary, being rebooted
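A typical first step for a resource stuck StandAlone is to drop and retry the connection; a sketch with a hypothetical resource name:

```shell
# Inspect the stuck resource, then force a reconnect attempt
drbdadm status vm-100-disk-1
drbdadm disconnect vm-100-disk-1
drbdadm connect vm-100-disk-1
```

If the peers refuse to reconnect because of a genuine split-brain, `drbdadm connect --discard-my-data` on the side chosen as sync target will resolve it, but that discards local changes and should not be applied blindly.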
With linstor-proxmox 4.1.0 on proxmox 6.0.11, I have two drbd storages
defined.
When selecting one of those drbd storages in the pve management
interface, the Usage gauge will regularly flip between the correct
display and the other storage's display. Same with the empty/full icon
in the device tre
On 15.11.19 at 11:32, Roland Kammerer wrote:
> On Wed, Nov 13, 2019 at 04:57:16PM +0100, Andreas Pflug wrote:
>> With linstor-proxmox 4.1.0 on proxmox 6.0.11, I have two drbd storages
>> defined.
>>
>> When selecting one of those drbd storages in the pve management
&g
I was wondering why my disk was filling up, and found that
rest-access.log was 135MB in size, with information I'll probably never need.
Is there a way to disable the access log?
Regards,
Andreas
The NOTICE "Intentionally removing diskless assignment (vm-nnn-disk-n)
on (nodeX)" is printed whenever a VM is migrated away from a diskless
node (PVE6.1). This is not an error, just the way the plugin is supposed
to work; still, it is printed in red as "Error: migration problems",
and thus alarmi
On 18.12.19 at 11:15, Roland Kammerer wrote:
> On Wed, Dec 18, 2019 at 08:06:23AM +0100, Andreas Pflug wrote:
>> The NOTICE "Intentionally removing diskless assignment (vm-nnn-disk-n)
>> on (nodeX)" is printed whenever a VM is migrated away from a diskless
>> node
On 18.12.19 at 12:55, Roland Kammerer wrote:
> On Wed, Dec 18, 2019 at 12:19:45PM +0100, Andreas Pflug wrote:
>> Could not delete diskless resource vm-105-disk-3 on ,
>> because:
>> [{ "ret_code":53739522,
>>"message":"Node: , Resource: vm
On 18.12.19 at 14:02, Roland Kammerer wrote:
> On Wed, Dec 18, 2019 at 01:16:02PM +0100, Andreas Pflug wrote:
>> The migration appears fully successful to me, drbdadm status shows
>> UpToDate on all nodes and Primary on the MigrationTargetHost only as
>> expected.
>
>
On 19.12.19 at 16:08, Denis wrote:
> Thank you for your reply Gianni and all for reading.
>
> I realized that outbound internet access on my nodes was limited to
> standard ports, so drbd-dkms failed.
> I solved my problem by opening all outbound ports.
I wonder if Linbit could set up a reverse proxy for SPAA
On 17.02.20 at 18:51, Daniel Ulman wrote:
> Hi everyone,
>
>
>
> I have a proxmox cluster with 3 nodes and linstor on each node.
>
>
>
> I want to change my old HDD disks to SSD.
>
> The SSDs are a bit smaller than HDDs.
>
> What is the correct procedure to migrate VM from HDDs to SSDs?
Sounds like you're exploring ways to shoot yourself in the foot. Why not
follow the best practices already mentioned here?
Regards
Andreas
On 20.02.20 at 13:36, Daniel Ulman wrote:
> New approach on the same topic.
>
>
>
> At the moment, I added a new SSD disk only in a single node (Because
> th
On 16.03.20 at 12:36, Roland Kammerer wrote:
> Dear PVE users,
>
> This is a feature release that will bring 1 very useful feature, but
> also breaks (very legacy) compatibility with older plugins.
>
> - The feature:
> What this does is to try to create the DRBD resource diskfully if it is
> pos
On 17.03.20 at 15:51, Roland Kammerer wrote:
> On Tue, Mar 17, 2020 at 03:33:55PM +0100, Andreas Pflug wrote:
>> I see redundancy still used in the current documentation at
>> https://www.linbit.com/linstor-setup-proxmox-ve-volumes/ , so it appears
>> to me as being swit
I complained about the ever-growing rest-access.log earlier, and was
told "there will be some config options to disable access log" in the
next version. And indeed, the next version has rest_access_log_mode in
the linstor.toml-example file.
My linstor.toml shows:
[logging]
level = "info"
# make
@Volker the community will be very happy if you provide a consistent and
reliable patch.
Regards,
Andreas
On 20.05.21 at 15:46, Dr. Volker Jaenisch wrote:
> Sorry Roland,
>
> On 20.05.21 15:09, Roland Kammerer wrote:
>> On Thu, May 20, 2021 at 01:54:13PM +0100, Yanni M. wrote:
>>> I second this
I'm running a Proxmox cluster with 3 disk nodes and 3 diskless nodes
with drbd 9.1.1. The disk nodes have storage on md raid6 (8 ssds each)
with a journal on an optane device.
Yesterday, the whole cluster was severely impacted when one node had
write problems. There is no indication for any hardwa
ents
>
> So if you have a ko-count set, this should be fixed.
> Or it is something completely different... ;)
>
> Cheers,
> Rene
>
> On Thu, May 27, 2021 at 1:25 PM Andreas Pflug <pgad...@pse-consulting.de> wrote:
>
> I'm running
On 27.05.21 at 17:55, Joel Colledge wrote:
>> No ko-count set, so apparently something different...
>
> ko-count is enabled by default (with value "7"). Have you explicitly
> disabled it? Your description does sound very similar to the issue
> that has been fixed as Rene mentioned.
I can confirm
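For reference, ko-count is a net-section option; the value below matches the default of 7 mentioned above, and the resource name is an example:

```
resource r0 {
  net {
    # Drop the connection (and declare the peer dead) after 7 write
    # requests have each missed the timeout; 0 disables the mechanism.
    ko-count 7;
  }
}
```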
In the process of upgrading a Proxmox cluster from 6.4 to 7, I
encountered a failure of linstor which prevents me from proceeding.
First, I upgraded all nodes to the latest linstor 1.14.0, and made sure
that linstor node list shows all nodes Online.
Next, I upgraded all nodes to the latest 6.4 pv
Hi DRBD-users,
while upgrading a cluster from 9.1.2 to 9.1.3, I see storage nodes
getting out of sync:
The resources are stored on three 9.1.2 nodes, used by a 9.1.2 node.
Migrating the Proxmox-VM away from that node, I observed storage nodes
becoming Inconsistent and SyncTargets. This happened on
I'm running three nodes with DRBD 9.1.18, two diskful, one diskless, and
Linstor 1.25.1.
One of the storage nodes had a problem, after several reboots drbdadm
status showed "diskless" on the node. drbdadm down res ; drbdadm up res
resolved the situation, the status is UpToDate Secondary as expec