Re: [ovirt-users] Ovirt node ready for production env?

2017-07-20 Thread Vinícius Ferrão
Hello Lionel,

Production ready? Definitely yes. Red Hat even sells RHV-H, which is the 
same thing as oVirt Node. But keep one thing in mind: it's an appliance, so 
modifications to the appliance aren't really supported. As far as I know, oVirt 
Node is based on imgbase, and updates/security fixes are delivered through yum; 
but when an update is applied the image is rewritten, so you will lose your 
modifications if you install additional packages on oVirt Node.

The host is stateless, so you don't really need to back it up; the core is 
running on the hosted engine.
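For reference, a few read-only commands give a quick view of how the image layering works on a Node host (a sketch from memory, assuming oVirt Node 4.x; verify the exact commands on your version):

nodectl info      # show the currently booted image layer and the available ones
nodectl check     # basic health/layout check of the node image
imgbase layout    # list the imgbase-managed LVM layout backing those layers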

About the other questions, I can't add anything since I'm new to oVirt too. 
Perhaps someone could complete my answer.

V.

Sent from my iPhone

> On 20 Jul 2017, at 03:59, Lionel Caignec  wrote:
> 
> Hi,
> 
> I did not test it myself, so I prefer asking before using it 
> (https://www.ovirt.org/node/). 
> Can oVirt Node be used in a production environment? 
> Is it possible to add some software on the host (e.g. backup tools, OSSEC, ...)? 
> How do security updates work? Are they managed by oVirt, or can I plug oVirt 
> Node into Spacewalk/Katello?
> 
> 
> Sorry for my "noob questions"
> 
> Regards
> --
> Lionel 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread FERNANDO FREDIANI
What is proposed seems interesting, but it is manual and 
susceptible to errors. I would much rather this came out of the 
box, as it does in VMware ESXi.


A 'squashfs' type of image boots up and runs completely in memory. Any 
logging is also written and rotated in memory, which keeps only the recent 
period of logs necessary for quick troubleshooting. Whoever wants 
more than that can easily set up an rsyslog server to collect and keep the 
logs for a longer period. With this, only the modified Node 
configuration is written to the SD card/USB stick, and only when it changes, 
which is not often; that makes it a reliable solution.
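For the remote-logging part, a minimal sketch of what that rsyslog forwarding could look like on the host (server name and port are placeholders):

# forward everything to a central syslog server over TCP
cat > /etc/rsyslog.d/90-forward.conf <<'EOF'
*.* @@loghost.example.com:514
EOF
systemctl restart rsyslog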


I personally have a Linux + libvirt solution installed and running on a 
USB stick that does exactly this (keeps all the logs in memory), and 
it has been running for 3+ years without any issues.


Fernando


On 20/07/2017 03:54, Lionel Caignec wrote:

Ok thank you,

For now I'm not very far along in the architecture design; I'm just thinking 
about what I can do.

Lionel

- Original Message -
From: "Yedidyah Bar David" 
To: "Lionel Caignec" 
Cc: "users" 
Sent: Thursday, 20 July 2017 08:03:50
Subject: Re: [ovirt-users] ovirt on sdcard?

On Wed, Jul 19, 2017 at 10:16 PM, Lionel Caignec  wrote:

Hi,

I'm planning to install some new hypervisors (oVirt) and I'm wondering whether 
it's possible to install them on an SD card.
I know there are write limitations on this kind of storage device.
Is it a viable solution? Is there a tutorial somewhere about tuning oVirt for 
this kind of storage?

Perhaps provide some more details about your plans?

The local disk is normally used only for standard OS-level stuff -
mostly logging. If you put /var/log on NFS/iSCSI/whatever, I think
you should not expect much other local writing.
Didn't test this myself.

People are doing many other things, including putting all of the
root filesystem on remote storage. There are many options, depending
on your hardware, your existing infrastructure, etc.

Best,


Thanks

--
Lionel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread Ravishankar N



On 07/20/2017 03:42 PM, yayo (j) wrote:


2017-07-20 11:34 GMT+02:00 Ravishankar N:



Could you check if the self-heal daemon on all nodes is connected
to the 3 bricks? You will need to check the glustershd.log for that.
If it is not connected, try restarting the shd using `gluster
volume start engine force`, then launch the heal command like you
did earlier and see if heals happen.


I've executed the command on all 3 nodes (I know one is enough); 
after that, the "heal" command reports between 6 and 10 entries 
(sometimes 6, sometimes 8, sometimes 10).



The glustershd.log doesn't say anything:


But it does say something. All the gfids of completed heals in the 
log below are for the ones that you have given the getfattr output 
of. So what is likely happening is that there is an intermittent connection 
problem between your mount and the brick process, leading to pending 
heals again after a heal completes, which is why the numbers 
vary each time. You would need to check why that is the case.

Hope this helps,
Ravi



[2017-07-20 09:58:46.573079] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-20 09:59:22.995003] I [MSGID: 108026] [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-20 09:59:22.999372] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81. sources=[0] 1  sinks=2


If it doesn't, please provide the getfattr outputs of the 12 files
from all 3 nodes using `getfattr -d -m . -e hex /gluster/engine/brick/path-to-file`?


NODE01:
getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0012
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x000e
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x676067891f344c1586b8c0d05b07f187

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0055
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x000c80d4f229

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0007
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x
trusted.bit-rot.version=0x0f0059647d5b000447e9
trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x00100800

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/__DIRECT_IO_TEST__
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x

Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-20 Thread Eli Mesika
Hi

Please attach full engine.log
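If it helps while collecting it, the relevant part can usually be pulled out of the default log location like this (path assumes a standard engine installation):

# show the NullPointerException with some surrounding context
grep -n -B 2 -A 20 'NullPointerException' /var/log/ovirt-engine/engine.log | less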

On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke 
wrote:

> Hi,
> I currently have a problem with changing one of our clusters to
> compatibility version 4.0.
> The log shows a NullPointerException after several VMs are updated successfully:
> 2017-07-19 11:19:45,886 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand]
> (default task-31) [1acd2990] Error during ValidateFailure.:
> java.lang.NullPointerException
> at org.ovirt.engine.core.bll.UpdateVmCommand.validate(UpdateVmCommand.java:632)
> [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886)
> [bll.jar:]
> at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
> [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
> [bll.jar:]
> .
>
> On other clusters with the exact same configuration, the change to 4.0 was
> successful without a problem.
> Turning off the cluster for the change is also not possible because of the
> >1200 VMs running on it.
>
> Does anyone have an idea what to do, or what to look for?
>
> Thanks
> Marcel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-20 Thread Karli Sjöberg
On 20 July 2017 at 13:29, Arman Khalatyan wrote:
> Hi,
> Can someone share experience with dynamically creating and removing VMs based
> on the load?
> Currently I am just creating a clone of the apache worker with the Python SDK;
> is there a way to copy some config files to the VM before starting it?
> Thanks,
> Arman.

E.g. Puppet could easily swing that sort of job. If you also deploy Foreman, it
could automate the entire procedure. Just a suggestion.

/K
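On the "copy some config files to the VM before starting it" part, one possible approach (not discussed in this thread, just a sketch) is to pass a cloud-init payload when starting the VM. Roughly, against the oVirt 4.1 REST API, with the engine URL, credentials and VM id as placeholders:

# start.xml -- cloud-init payload that writes a config file at first boot
cat > start.xml <<'EOF'
<action>
  <use_cloud_init>true</use_cloud_init>
  <vm>
    <initialization>
      <custom_script><![CDATA[
write_files:
  - path: /etc/httpd/conf.d/worker.conf
    content: |
      # worker-specific configuration goes here
]]></custom_script>
    </initialization>
  </vm>
</action>
EOF

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
     -X POST -d @start.xml \
     'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'

The Python SDK should expose the same thing (an Initialization object passed to the VM start call), if staying in the SDK is preferred.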
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-20 Thread Arman Khalatyan
Hi,
Can someone share experience with dynamically creating and removing VMs
based on the load?
Currently I am just creating a clone of the apache worker with the Python SDK;
is there a way to copy some config files to the VM before starting it?

Thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread yayo (j)
2017-07-20 11:34 GMT+02:00 Ravishankar N :

>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals happen.
>
>
I've executed the command on all 3 nodes (I know one is enough); after
that, the "heal" command reports between 6 and 10 entries (sometimes 6,
sometimes 8, sometimes 10).


The glustershd.log doesn't say anything:

[2017-07-20 09:58:46.573079] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-20 09:59:22.995003] I [MSGID: 108026] [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-20 09:59:22.999372] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81. sources=[0] 1  sinks=2




> If it doesn't, please provide the getfattr outputs of the 12 files from
> all 3 nodes using `getfattr -d -m . -e hex /gluster/engine/brick/path-to-file`?
>
>
NODE01:
getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0012
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x000e
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x676067891f344c1586b8c0d05b07f187

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0055
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x000c80d4f229

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0007
trusted.bit-rot.version=0x090059647d5b000447e9
trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x
trusted.bit-rot.version=0x0f0059647d5b000447e9
trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x00100800

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/__DIRECT_IO_TEST__
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x
trusted.gfid=0xf05b97422771484a85fc5b6974bcef81
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
trusted.afr.dirty=0x
trusted.afr.engine-client-1=0x
trusted.afr.engine-client-2=0x0001
trusted.bit-rot.version=0x0f0059647d5b000447e9
trusted.gfid=0xe6dfd556340b4b76b47b7b6f5bd74327
trusted.glusterfs.shard.block-size=0x2000
trusted.glusterfs.shard.file-size=0x00100800

getfattr: Removing leading '/' from absolute

Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread Ravishankar N


On 07/20/2017 02:20 PM, yayo (j) wrote:

Hi,

Thank you for the answer, and sorry for the delay:

2017-07-19 16:55 GMT+02:00 Ravishankar N:


1. What does the glustershd.log say on all 3 nodes when you run
the command? Does it complain anything about these files?


No, glustershd.log is clean; there are no extra log entries after the command on all 3 nodes


Could you check if the self-heal daemon on all nodes is connected to the 
3 bricks? You will need to check the glustershd.log for that.
If it is not connected, try restarting the shd using `gluster volume 
start engine force`, then launch the heal command like you did earlier 
and see if heals happen.
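In case it is useful, a rough way to do that check on each node (log path assumes a default install; the exact message wording may differ between gluster versions):

# see whether the self-heal daemon has connections to all three brick clients
grep -Ei 'engine-client-[0-9].*connect' /var/log/glusterfs/glustershd.log | tail -n 20

# if a brick is missing, restart the shd and re-trigger the heal
gluster volume start engine force
gluster volume heal engine
gluster volume heal engine info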


If it doesn't, please provide the getfattr outputs of the 12 files from 
all 3 nodes using `getfattr -d -m . -e hex /gluster/engine/brick/path-to-file`?


Thanks,
Ravi


2. Are these 12 files also present in the 3rd data brick?


I've just checked: all files exist on all 3 nodes

3. Can you provide the output of `gluster volume info` for this
volume?



Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *

  server.allow-insecure: on





Some extra info:

We have recently changed the gluster setup from 2 (fully replicated) + 1
arbiter to a 3-way fully replicated cluster



Just curious, how did you do this? `remove-brick` of arbiter
brick  followed by an `add-brick` to increase to replica-3?


Yes


#gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force (OK!)

#gluster volume heal engine info (no entries!)

#gluster volume add-brick engine replica 3 node04:/gluster/engine/brick (OK!)


After some minutes

[root@node01 ~]#  gluster volume heal engine info
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node02:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0

Thanks,
Ravi


One more piece of extra info (I don't know if this can be the problem): five 
days ago a blackout suddenly shut down the network switch (gluster network 
included) for nodes 03 and 04... but I don't know whether this problem 
appeared after that blackout.


Thank you!



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Engine Reports oVirt >= 4

2017-07-20 Thread Shirly Radco
Hi Victor,

oVirt Reports is no longer available since v4.0.
The oVirt DWH is still available and can be queried by external reporting
solutions that support SQL queries.
We are currently working on the new metrics store solution for oVirt.

Please see:

http://www.ovirt.org/develop/release-management/features/engine/metrics-store/
http://www.ovirt.org/develop/release-management/features/engine/metrics-store-collected-metrics
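For the DWH route, a minimal sketch of getting at it on the engine machine (this assumes the default ovirt_engine_history database; view names vary by version, so list them first):

# list the views exposed by the history database, then query whatever is needed
su - postgres -c "psql -d ovirt_engine_history -c '\dv'"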

Best regards,

--

SHIRLY RADCO

BI SOFTWARE ENGINEER,

Red Hat Israel 

sra...@redhat.com
 
 


On Wed, Jul 19, 2017 at 6:57 AM, Victor José Acosta Domínguez <
vic.a...@gmail.com> wrote:

> Hello everyone, quick question, is ovirt-engine-reports deprecated on
> oVirt >= 4?
>
> Because I can't find the ovirt-engine-reports package in the oVirt 4 repo.
>
>
> Regards
>
> Victor Acosta
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread yayo (j)
Hi,

Thank you for the answer, and sorry for the delay:

2017-07-19 16:55 GMT+02:00 Ravishankar N :

1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain anything about these files?
>

No, glustershd.log is clean; there are no extra log entries after the command on all 3 nodes


> 2. Are these 12 files also present in the 3rd data brick?
>

I've just checked: all files exist on all 3 nodes


> 3. Can you provide the output of `gluster volume info` for this volume?
>


Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *

  server.allow-insecure: on





>
> Some extra info:
>>
>> We have recently changed the gluster setup from 2 (fully replicated) + 1
>> arbiter to a 3-way fully replicated cluster
>>
>
> Just curious, how did you do this? `remove-brick` of arbiter brick
> followed by an `add-brick` to increase to replica-3?
>
>
Yes


#gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force (OK!)

#gluster volume heal engine info (no entries!)

#gluster volume add-brick engine replica 3 node04:/gluster/engine/brick (OK!)

After some minutes

[root@node01 ~]#  gluster volume heal engine info
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node02:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0



> Thanks,
> Ravi
>

One more piece of extra info (I don't know if this can be the problem): five
days ago a blackout suddenly shut down the network switch (gluster network
included) for nodes 03 and 04... but I don't know whether this problem
appeared after that blackout.

Thank you!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread Arsène Gschwind

Hi Lionel,

I've been running such a setup for about 4 months on Cisco UCS blades, 
without any problems so far.


rgds,
Arsène


On 07/19/2017 09:16 PM, Lionel Caignec wrote:

Hi,

I'm planning to install some new hypervisors (oVirt) and I'm wondering whether 
it's possible to install them on an SD card.
I know there are write limitations on this kind of storage device.
Is it a viable solution? Is there a tutorial somewhere about tuning oVirt for 
this kind of storage?

Thanks

--
Lionel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  | http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt node ready for production env?

2017-07-20 Thread Lionel Caignec
Hi,

I did not test it myself, so I prefer asking before using it 
(https://www.ovirt.org/node/). 
Can oVirt Node be used in a production environment? 
Is it possible to add some software on the host (e.g. backup tools, OSSEC, ...)? 
How do security updates work? Are they managed by oVirt, or can I plug oVirt Node 
into Spacewalk/Katello?


Sorry for my "noob questions"

Regards
--
Lionel 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread Lionel Caignec
Ok thank you,

For now I'm not very far along in the architecture design; I'm just thinking 
about what I can do.

Lionel

- Original Message -
From: "Yedidyah Bar David" 
To: "Lionel Caignec" 
Cc: "users" 
Sent: Thursday, 20 July 2017 08:03:50
Subject: Re: [ovirt-users] ovirt on sdcard?

On Wed, Jul 19, 2017 at 10:16 PM, Lionel Caignec  wrote:
> Hi,
>
> I'm planning to install some new hypervisors (oVirt) and I'm wondering
> whether it's possible to install them on an SD card.
> I know there are write limitations on this kind of storage device.
> Is it a viable solution? Is there a tutorial somewhere about tuning oVirt
> for this kind of storage?

Perhaps provide some more details about your plans?

The local disk is normally used only for standard OS-level stuff -
mostly logging. If you put /var/log on NFS/iSCSI/whatever, I think
you should not expect much other local writing.
Didn't test this myself.

People are doing many other things, including putting all of the
root filesystem on remote storage. There are many options, depending
on your hardware, your existing infrastructure, etc.

Best,

>
> Thanks
>
> --
> Lionel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on sdcard?

2017-07-20 Thread Yedidyah Bar David
On Wed, Jul 19, 2017 at 10:16 PM, Lionel Caignec  wrote:
> Hi,
>
> I'm planning to install some new hypervisors (oVirt) and I'm wondering
> whether it's possible to install them on an SD card.
> I know there are write limitations on this kind of storage device.
> Is it a viable solution? Is there a tutorial somewhere about tuning oVirt
> for this kind of storage?

Perhaps provide some more details about your plans?

The local disk is normally used only for standard OS-level stuff -
mostly logging. If you put /var/log on NFS/iSCSI/whatever, I think
you should not expect much other local writing.
Didn't test this myself.
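(As a minimal sketch of that idea, assuming an NFS export dedicated to this host's logs; server and path are placeholders:)

# /etc/fstab entry placing /var/log on NFS
logsrv.example.com:/export/hv01-logs  /var/log  nfs  defaults,_netdev  0 0

# then: mount /var/log  (with the services that write to /var/log stopped while switching)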

People are doing many other things, including putting all of the
root filesystem on remote storage. There are many options, depending
on your hardware, your existing infrastructure, etc.

Best,

>
> Thanks
>
> --
> Lionel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users