Hi,
Thanks for this.
What I wanted to do was move an old iSCSI appliance to the same VLAN as the
new iSCSI appliance to stop storage migration traffic from having to traverse
a firewall. I ended up just trunking both VLANs to the same interface.
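For reference, the host-side equivalent of that trunk is a pair of tagged
subinterfaces; a minimal sketch, assuming a Linux host, a NIC named eth0 and
VLAN IDs 100 and 200 (all illustrative, not from the original message):

# tag both VLANs on the same physical NIC
ip link add link eth0 name eth0.100 type vlan id 100
ip link add link eth0 name eth0.200 type vlan id 200
ip link set eth0.100 up
ip link set eth0.200 up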
Thanks again.
On Thu, Oct 26, 2017 at 1:49 AM, Adam
On Tue, Oct 24, 2017 at 6:19 PM, Colin Coe wrote:
> Hi all
>
> Running RHV 4.1.6, I need to move an iSCSI appliance to a new subnet.
> There are VMs that I need to keep on this appliance.
>
> What would be the best way to point RHV at the new IP address?
>
If you can
Hi,
it worked!
For reference, on the host where the VM is running,

multipath -r

still shows the old size, so I rescan both paths:

echo 1 > /sys/class/scsi_device/2\:0\:1\:62/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:0\:62/device/rescan

then

multipath -r

shows the new size.
Now I get the lun_id
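The same rescan without hardcoding the two SCSI addresses, as a minimal
sketch (assumes you want to rescan every SCSI device on the host, which is
harmless for an online resize):

# rescan every SCSI path known to the host
for dev in /sys/class/scsi_device/*/device/rescan; do
    echo 1 > "$dev"
done
# reload the multipath maps so the new size is picked up
multipath -r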
I have a question regarding storage performance. I have a gluster replica
3 volume that we are testing for performance. In my current configuration,
one server has 16 x 1.2TB (10K, 2.5-inch) drives configured in RAID 10 with a
256k stripe. My 2nd server is configured with 4 x 6TB (3.5-inch
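For comparing configurations like these, a simple fio run against the mounted
volume gives like-for-like numbers; a minimal sketch, assuming the gluster
volume is mounted at /mnt/gluster and fio is installed (both hypothetical):

# sequential write test with a block size matching the 256k stripe
fio --name=seqwrite --directory=/mnt/gluster --rw=write --bs=256k \
    --size=4g --numjobs=4 --direct=1 --group_reporting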
On 25/10/2017 at 13:59, Alexander Wels wrote:
On Wednesday, October 25, 2017 4:02:45 AM EDT Nathanaël Blanchet wrote:
> On 25/10/2017 at 06:16, Idan Shaby wrote:
> > I am glad that it works for you now!
> > If you have any more questions, please don't hesitate to ask.
>
> So... here is a new bug: I imported an existing ISO domain, and no
>
On 10/25/2017 03:30 PM, Matthias Leopold wrote:
We're also using Cinder from the OpenStack Ocata release.
The point is:
a) we didn't upgrade, but started from scratch with Ceph 12
b) we haven't tested all of the new features in Ceph 12 (e.g. EC pools for
RBD devices) in connection with Cinder yet
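For reference, the Ceph 12 (luminous) recipe for an EC pool that RBD can use
looks roughly like the following; pool names and PG counts are illustrative:

ceph osd pool create rbd-ec 128 128 erasure
ceph osd pool set rbd-ec allow_ec_overwrites true
ceph osd pool application enable rbd-ec rbd
# image metadata stays in the replicated 'rbd' pool, data goes to the EC pool
rbd create --size 10G --data-pool rbd-ec rbd/testimage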
On Tue, Oct 24, 2017 at 5:19 AM, Juan Pablo Lorier wrote:
> Thanks Yedidyah for your reply. The FQDN is correct:
>
> /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf:ENGINE_FQDN=ovirt01.tecnica.tnu.com.uy
>
> I managed to connect to the admin portal using an SSH tunnel,
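A tunnel along those lines, as a sketch (assuming a jump host that can reach
the engine; the user name and local port are arbitrary):

ssh -L 8443:ovirt01.tecnica.tnu.com.uy:443 user@jumphost
# then browse to https://localhost:8443/ovirt-engine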
On 2017-10-24 at 15:11, Konstantin Shalygin wrote:
On 10/24/2017 07:26 PM, Matthias Leopold wrote:
Yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on oVirt
hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream
repos, not oVirt Node (for this exact purpose).
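Getting the newer librbd1 onto a CentOS 7 host comes down to adding the
upstream repo before installing; a minimal sketch (the repo URL is the
standard Ceph luminous location for el7):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
yum install -y librbd1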
> On 23 Oct 2017, at 12:57, Kapetanakis Giannis
> wrote:
>
> Hi,
>
> I'm in the process of upgrading to 4.1 from 3.6.7 (CentOS 6).
> The default cluster runs in 3.5 compatibility mode.
>
> I cannot add a new CentOS 7 host:
> 2017-10-23 13:50:08,359 ERROR
>
On Wed, Oct 25, 2017 at 9:40 AM, Luca 'remix_tj' Lorenzetto
wrote:
> Hello,
>
> I'm planning to create a big standardization playbook for my
> environment to ensure that all the required configs (networks, hosts,
> hosts' NICs and networks) are correctly set up.
>
On 25/10/2017 at 06:16, Idan Shaby wrote:
I am glad that it works for you now!
If you have any more questions, please don't hesitate to ask.
So... here is a new bug: I imported an existing ISO domain, and no
images display under the subtab...
Regards,
Idan
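When an imported ISO domain lists no images, the usual first check is the
directory layout and ownership on the export itself; a sketch, assuming the
domain is an NFS export mounted at /mnt/iso (hypothetical path):

# ISO files must sit in the domain's fixed images directory
ls -l /mnt/iso/*/images/11111111-1111-1111-1111-111111111111/
# and be readable by vdsm:kvm (uid/gid 36:36)
chown -R 36:36 /mnt/iso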
On Tue, Oct 24, 2017 at 8:40 PM,
Hello,
I'm planning to create a big standardization playbook for my
environment to ensure that all the required configs (networks, hosts,
hosts' NICs and networks) are correctly set up.
Since I don't want to test against a running setup, I'd like to spawn
a transient one for testing.
I
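Whatever spawns the transient setup, one way to sanity-check the result is to
query the engine's REST API for the pieces the playbook manages; a sketch,
assuming admin@internal credentials and an engine FQDN of ovirt.example.com
(both illustrative):

# list the logical networks the engine knows about
curl -sk -u 'admin@internal:password' -H 'Accept: application/json' \
    https://ovirt.example.com/ovirt-engine/api/networks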