Hi there,
The performance issue was caused by a failed OS drive on one of the
storage nodes. Here is a link [1] to a thread on the ceph-ansible
mailing list with useful tips on using 'fio' to test storage devices,
in case anyone is interested.
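For anyone who wants to run a similar baseline without digging through the thread, below is a sketch of a commonly used fio invocation for measuring the synchronous 4k write performance that a Ceph journal depends on. The device path is a placeholder; this kind of test writes directly to the device and is destructive, so point it only at an unused disk.

```shell
# Measure synchronous 4k write throughput/latency on a raw device,
# similar to the I/O pattern of a filestore journal.
# WARNING: destructive -- /dev/sdX is a placeholder for an UNUSED disk.
fio --name=journal-test \
    --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based \
    --group_reporting
```

A healthy SAS/SATA device typically sustains a few hundred of these synchronous IOPS; a failing drive often shows latencies orders of magnitude higher, which is exactly the symptom described above.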
Thank you very much to all.
Best regards,
Cody
[1]
http
for a PoC with minimal usage and no
sign of CPU/RAM starvation during the test.
On the software side, it is running the Queens release. The ceph-ansible
version is 3.1.6, using filestore with a non-collocated setup.
Best regards,
Cody
On Mon, Nov 26, 2018 at 9:13 AM John Fulton wrote:
>
>
frame on the storage network VLAN yet, but I think the performance
should not be this bad with MTU 1500. Something must be wrong. Any
suggestions for debugging?
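One quick check worth doing is verifying the effective path MTU between storage nodes using ping with the don't-fragment flag. The address below is a placeholder for a storage-network IP; the payload sizes account for the 28 bytes of IP + ICMP headers.

```shell
# With MTU 1500, the largest unfragmented ICMP payload is 1500 - 28 = 1472.
# 10.0.0.10 is a placeholder for another storage node's IP.
ping -M do -s 1472 -c 3 10.0.0.10   # should succeed at MTU 1500
ping -M do -s 1473 -c 3 10.0.0.10   # should fail if the path MTU is 1500

# If jumbo frames (MTU 9000) were enabled end to end, this would also pass:
ping -M do -s 8972 -c 3 10.0.0.10
```

If the 1472-byte probe already fails, something on the path (a switch port, bond, or VLAN interface) has a smaller MTU than expected, which can cause exactly this kind of severe performance degradation.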
Thank you very much.
Best regards,
Cody
___
users mailing list
users@lists.rdoproject.org
http
Thank you John. I really appreciate your help!
Best regards,
Cody
On Tue, Oct 23, 2018 at 12:39 PM John Fulton wrote:
>
> On Tue, Oct 23, 2018 at 12:22 PM Cody wrote:
> >
> > Hi John,
> >
> > Thank you so much for the explanation.
> >
> > Now
deployment? Do I also
need to include those changes in an environment file to reflect the
latest status quo when it comes to adding new OSD nodes? I guess the
answer would be yes, but I just wish to be sure.
Thank you very much.
Best regards,
Cody
On Tue, Oct 23, 2018 at 8:20 AM John Fulton
, and
created an EC pool with a custom EC rule. Do I need to account for all
of those?
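For reference, a sketch of how such an erasure-coded pool is typically created with the plain Ceph CLI. The profile name, pool name, PG count, and k/m values here are illustrative, not the exact ones used in the deployment above.

```shell
# Create an erasure-code profile (k data chunks + m coding chunks),
# then a pool that uses it. Names and numbers are placeholders.
ceph osd erasure-code-profile set myprofile \
    k=2 m=1 crush-failure-domain=host
ceph osd pool create mypool 128 128 erasure myprofile
```

Note that when sizing placement groups, each PG of an EC pool occupies k+m OSD shards, so EC pools count more heavily toward the per-OSD PG limit than replicated pools with the same PG number.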
Thank you very much.
Best regards,
Cody
On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
>
> No, I don't see why it would hurt the existing settings, provided you
> continue to pass the CRUSH data environment
by increasing the CephStorageCount, would that mess up the
existing settings?
Thank you very much.
Best regards,
Cody
Does it look right?
Thank you very much and have a good weekend!
Best regards,
Cody
On Sat, Oct 13, 2018 at 3:33 AM John Fulton wrote:
>
> On Saturday, October 13, 2018, Cody wrote:
>>
>> Hi everyone,
>>
>> Is it possible to define CRUSH placement rules and apply to d
CRUSH placement
rules using device-class and apply the corresponding rules when creating
pools. But I don't know how to do so with TripleO.
Could someone shed light on this?
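Outside of TripleO, the device-class approach mentioned above looks like this with the plain Ceph CLI (Luminous or later, where device classes are detected automatically; the rule and pool names are illustrative):

```shell
# List the device classes Ceph has detected (usually hdd/ssd/nvme):
ceph osd crush class ls

# Create a replicated CRUSH rule restricted to SSD OSDs,
# choosing replicas across hosts in the 'default' root:
ceph osd crush rule create-replicated fast default host ssd

# Create a pool that uses the new rule (name and PG count are placeholders):
ceph osd pool create ssd-pool 64 64 replicated fast
```

The open question above is how to express the same thing declaratively through TripleO/ceph-ansible rather than running these commands by hand after deployment.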
Best regards,
Cody
log: http://paste.openstack.org/show/731932/
/var/log/httpd/placement_wsgi_access.log (with repetitive entries/patterns
shortened): http://paste.openstack.org/show/731933/
/var/log/httpd/placement_wsgi_error.log :
http://paste.openstack.org/show/731934/
Thank you very much.
Best regards,
Hello Steven,
I just tested the method you provided and it worked well.
parameter_defaults:
  ComputeExtraConfig:
    nova::cpu_allocation_ratio: 10.0
    nova::ram_allocation_ratio: 1.0
Thank you so much!
Best regards,
Cody
On Thu, Oct 11, 2018 at 6:27 AM Steven Hardy wrote:
://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/container_image_prepare.html#prepare-environment-containers
Thank you to all.
Best regards,
Cody
For the migration test, I used instances (cirros) without any volume
attached, since Cinder was not available.
On Tue, Oct 9, 2018 at 9:28 AM Cody wrote:
> Hi Tzach,
>
> Thank you very much for verifying and reporting the bug.
>
> As I moved on to deploy with Ceph, the Cinder
after I address
this issue [1]. Other than that, the Cinder volume is the only major issue
for now.
[1] https://lists.rdoproject.org/pipermail/dev/2018-October/008934.html
Thank you,
Cody
On Mon, Oct 8, 2018 at 3:44 PM Tzach Shefi wrote:
> Hey Cody,
>
> The bad news, after our email
fencing and unfence
resources. Any help would be greatly appreciated.
Thank you,
Cody
On Wed, Oct 3, 2018 at 11:46 AM Cody wrote:
> Hi everyone,
>
> My cluster is deployed with both Controller and Instance HA. The
> deployment completed without errors, but I noticed something strange from
Waiting for fence-down flag to be cleared
Waiting for fence-down flag to be cleared
...
So I guess something may be wrong with fencing, but I have no idea what
caused it or how to fix it. Any help, suggestions, or opinions would be
greatly appreciated. Thank you very much.
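A few commands that usually help narrow down fencing problems on a Pacemaker cluster. Run them as root from a controller; the exact log paths vary by distribution, and the node name below is a placeholder.

```shell
# Overall cluster and resource state, including failed actions:
pcs status

# Configuration of the stonith/fence devices:
pcs stonith show --full

# History of fencing actions attempted against any node ('*' = all):
stonith_admin --history '*' --verbose

# Search the Pacemaker/corosync logs for fencing errors
# (paths vary by distro; these are common locations):
grep -i -e stonith -e fence \
    /var/log/pacemaker.log /var/log/cluster/corosync.log
```

A fence-down flag that never clears often means the fence agent cannot reach its management endpoint (IPMI credentials, network, or driver issues), which the stonith history and logs should reveal.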
Regards.
Cody
Hi Michele,
It is great to know the latest progress on this subject. I am
currently running a 3-controller cluster on Queens. Later I will test
Rocky and experiment with this new feature!
Thank you all for the wonderful work!
Best regards,
Cody
Regards,
Cody
On Fri, Sep 21, 2018
For every Pacemaker-managed service, the number of nodes is limited
to either one or three.
Is that right?
[1]
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/chap-planning_your_overcloud#sect-Planning_Node_Deployment_Roles
Regards,
[1]
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/roles#sect-Composable_Services-Guidelines
Regards,
Cody
someone walk me through the traffic flow in this case? I really
appreciate your help!
[1]
https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/network_isolation.html#using-the-native-vlan-for-floating-ips
Regards,
Cody