Yes, happened to me too. Simply changing the URL fixed the problem.
On Fri, Jul 8, 2016 at 6:55 PM, Martin Palma wrote:
> It seems that the packages "ceph-release-*.noarch.rpm" contain a
> ceph.repo pointing to the baseurl
> "http://ceph.com/rpm-hammer/rhel7/$basearch; which
Just for giggles I tried the rolling upgrade to 10.2.2 again today.
This time I rolled mon.0 and osd.0 first while keeping the mds servers
up and then rolled them before moving on to the other three. No
assertion failure this time since I guess I always had an mds active. I
wonder if I will
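A minimal sketch of that restart order, assuming Jewel's systemd unit names and placeholder daemon IDs:

systemctl restart ceph-mon@$(hostname -s)   # monitors first, one node at a time
systemctl restart ceph-osd@0                # then that node's OSDs
ceph -s                                     # wait for HEALTH_OK before the next node
systemctl restart ceph-mds@$(hostname -s)   # MDS daemons last, keeping one active throughout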
On Sat, Jul 9, 2016 at 1:20 AM, Pisal, Ranjit Dnyaneshwar
wrote:
> Hi All,
>
>
>
> I am in the process of adding new OSDs to the cluster; however, after adding
> the second node, cluster recovery seems to have stopped.
>
> It's been more than 3 days but the degraded objects % has not improved
Hi,
We have a smallish Ceph cluster for RBD images. We use snapshotting for local
incremental backups. I would like to start sending some of these snapshots to
an external cloud service (likely Amazon) for disaster recovery purposes.
Does anyone have advice on how to do this? I suppose I
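A minimal sketch of one common approach, assuming an image rbd/vm1 with two snapshots and an S3 bucket named ceph-dr (all names are placeholders): export the incremental diff between snapshots and upload that.

rbd export-diff --from-snap snap1 rbd/vm1@snap2 - | gzip > vm1_snap1-snap2.diff.gz
aws s3 cp vm1_snap1-snap2.diff.gz s3://ceph-dr/vm1/
# restore later with: gunzip -c vm1_snap1-snap2.diff.gz | rbd import-diff - rbd/vm1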
I even tried with bare .raw file but error is still the same
2016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total memory:
There is no Ceph plugin for VMware (and I think you need at least an Enterprise
license for storage plugins, much $$$).
The "VMware" way to do this without the plugin would be to have a VM running on
every host serving RBD devices over iSCSI to the other VMs (the way their
storage appliances
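A rough sketch of what such a gateway VM would do, assuming a mapped image and an LIO/targetcli setup (backstore and IQN names are made up):

rbd map rbd/vmstore1                                   # appears as e.g. /dev/rbd0
targetcli /backstores/block create name=vmstore1 dev=/dev/rbd0
targetcli /iscsi create iqn.2016-07.com.example:vmstore1
targetcli /iscsi/iqn.2016-07.com.example:vmstore1/tpg1/luns create /backstores/block/vmstore1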
Hi,
does anyone have experience with a smart way to connect VMware with Ceph?
iSCSI multipath did not really work well.
NFS could work, but I think that's just too many layers in between to get
usable performance.
Systems like ScaleIO have developed a VMware addon to talk to it.
Is there
I posted a bunch of the more recent numbers in the specs. I had some down time
and a bunch of SSDs lying around, and was just curious if any were hidden
gems... Interestingly, the Intel drives don't seem to require the write cache to be
turned off, while other drives had to have it "forced" off using hdparm -W0
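For reference, a commonly used journal-suitability test is a small sync-write fio run with the volatile cache disabled; a sketch (the device name is a placeholder, and this writes to it destructively):

hdparm -W0 /dev/sdX
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test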
[root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
rbd_user=cinder
rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb
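For comparison, on Liberty the RBD options usually live in the [libvirt] section of nova.conf alongside these; a sketch with an assumed pool name:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <the same UUID defined as a libvirt secret>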
On 07/08/2016 11:59 AM, Manuel Lausch wrote:
> hi,
>
> Over the last few days I have been playing around with Ceph Jewel on Debian Jessie
> and CentOS 7. Now I have a question about systemd on these systems.
>
> I installed ceph jewel (ceph version 10.2.2
> (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on debian
hi,
Over the last few days I have been playing around with Ceph Jewel on Debian Jessie
and CentOS 7. Now I have a question about systemd on these systems.
I installed ceph jewel (ceph version 10.2.2
(45107e21c568dd033c2f0a3107dec8f0b0e58374)) on debian Jessie and
prepared some OSDs. While playing around I
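For reference, a minimal sketch of the Jewel systemd units involved (daemon IDs are placeholders):

systemctl list-units 'ceph*'          # mon/osd/mds instances plus ceph.target
systemctl status ceph-osd@0.service   # one prepared OSD
systemctl enable ceph-osd@0           # make it start at boot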
Hi Kees,
I regenerated the UUID as per your suggestion.
Now I have the same UUID on host1 and host2.
I could create volumes and attach them to existing VMs.
I could create new Glance images.
But I am still getting the same error when launching an instance via the GUI.
2016-07-08 11:23:25.067 86007 INFO
Hi All,
I am in the process of adding new OSDs to the cluster; however, after adding the second
node, cluster recovery seems to have stopped.
It's been more than 3 days but the degraded objects % has not improved even by 1%.
Will adding further OSDs help improve the situation, or is there any other way to
improve recovery
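For reference, a few commands commonly used to check whether recovery is actually progressing and where it is stuck (a sketch, output omitted):

ceph -s               # recovery/backfill counters over time
ceph health detail    # which PGs are degraded or stuck, and on which OSDs
ceph osd df tree      # per-OSD utilisation after adding the new node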
Thanks for the pointer, I didn't know the answer, but now I do, and
unfortunately, XenServer is relying on the kernel module. It's
surprising that their latest release, XenServer 7, which was released on
the 6th of July, is only using kernel 3.10 ... I guess since it is based
upon CentOS 7 and
Hi Dzianis,
On Thu, Jun 30, 2016 at 4:03 PM, Dzianis Kahanovich wrote:
> Upgraded infernalis->jewel (git, Gentoo). The upgrade was done as a global
> stop/restart of everything in one shot.
>
> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby
>
> Now after
Hi Kees,
Thanks for your help!
Node 1 controller + compute
-rw-r--r-- 1 root   root   63 Jul 5 12:59 ceph.client.admin.keyring
-rw-r--r-- 1 glance glance 64 Jul 5 14:51 ceph.client.glance.keyring
-rw-r--r-- 1 cinder cinder 64 Jul 5 14:53 ceph.client.cinder.keyring
-rw-r--r-- 1 cinder
Hi,
I'd recommend generating a UUID and using it for all your compute nodes.
This way, you can keep your configuration in libvirt constant.
Regards,
Kees
On 08-07-16 16:15, Gaurav Goyal wrote:
>
> For the below section, should I generate a separate UUID for each of the compute hosts?
>
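A minimal sketch of wiring one shared UUID into libvirt on every compute node, assuming a client.cinder key already exists (names are placeholders):

UUID=$(uuidgen)            # generate once, reuse the same value on each node
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret $UUID --base64 $(ceph auth get-key client.cinder)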
Hi,
> Ah, thanks Micha, that makes sense. I'll see if I can dig up another server
to build an OSD server on. Sadly, XenServer is not tolerant of new kernels.
> Do you happen to know if there is a dkms package of RBD anywhere? I might be
able to build the latest RBD against the 3.10 kernel that
Hi Gaurav,
Have you distributed your Ceph authentication keys to your compute
nodes? And, do they have the correct permissions in terms of Ceph?
K.
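For reference, the capabilities usually given to the cinder client, and a quick way to verify what a compute node actually has (a sketch; the pool names are the common defaults, not necessarily yours):

ceph auth get-or-create client.cinder \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get client.cinder                   # confirm the caps that were granted
ls -l /etc/ceph/ceph.client.cinder.keyring    # must be readable by the cinder/nova user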
Hello,
Thanks, I could restore my Cinder service.
But while trying to launch an instance, I am getting the same error.
Can you please help me understand what I am doing wrong?
2016-07-08 09:28:31.368 31909 INFO nova.compute.manager
[req-c56770a7-5bab-426b-b763-7473254c6410
Hi!
Our little dev ceph cluster (nothing fancy; 3x1 OSD with 100GB each, 3x monitor
with radosgw) takes over 20 minutes to delete ca. 44000 small objects (<1GB in
total).
Deletion is done by listing objects in blocks of 1000 and then deleting them in
one call for each block; each deletion of
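As a point of comparison, when the goal is simply to empty a whole bucket, the work can be pushed server-side instead of looping over 1000-object batches from the client; a hedged sketch (bucket name is a placeholder):

radosgw-admin bucket rm --bucket=dev-bucket --purge-objects
radosgw-admin gc process      # optionally kick off garbage collection right away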
On Fri, Jul 8, 2016 at 8:01 AM, Goncalo Borges
wrote:
> Hi Brad, Patrick, All...
>
> I think I've understood this second problem. In summary, it is memory
> related.
>
> This is how I found the source of the problem:
>
> 1./ I copied and adapted the user application
On 07/07/2016 08:06 PM, Rahul Talari wrote:
I am trying to use Ceph in Docker. I have built the ceph/base and
ceph/daemon Dockerfiles. I am trying to deploy a Ceph monitor according
to the instructions given in the tutorial but when I execute the command
without KV store and type:
sudo docker ps
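For reference, the monitor invocation documented for the ceph/daemon image looks roughly like this (IP address and network are placeholders):

sudo docker run -d --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.20 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon
sudo docker ps    # the monitor container should now show up here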
Hi,
We are running Openstack Liberty with a Ceph Jewel backend for glance,
cinder, and nova.
Creating a new instance booting from volume works fine, but resizing this
fails with the following:
error opening image 9257fcc2-94b5-4c3f-950a-eadee03550a6_disk at snapshot
None, error code 500.
Full
It seems that the packages "ceph-release-*.noarch.rpm" contain a
ceph.repo pointing to the baseurl
"http://ceph.com/rpm-hammer/rhel7/$basearch; which does not exist. It
should probably point to "http://ceph.com/rpm-hammer/el7/$basearch;.
- Martin
On Thu, Jul 7, 2016 at 5:57 PM, Martin Palma
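Until the packages are fixed, a hedged one-line workaround on affected nodes is to rewrite the path in the repo file:

sed -i 's|rpm-hammer/rhel7|rpm-hammer/el7|' /etc/yum.repos.d/ceph.repo
yum clean all && yum makecache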
Hi,
As far as I know, this is exactly the problem the new tunables were
introduced for. If you use 3 replicas with only 3 hosts, CRUSH sometimes doesn't
find a solution to place all PGs.
If you are really stuck with bobtail tunables, I can think of 2 possible
workarounds:
1. Add another
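One workaround along these lines that often comes up is raising choose_total_tries in the CRUSH map so the placement search retries more before giving up; a minimal sketch (the value 100 is just an example):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt and raise:  tunable choose_total_tries 100
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new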
Thanks Jason,
I think I'm going to start with a bash script which SSH's into the machine to
check if the process has finished writing and then calls the fsfreeze as I've
got time constraints to getting this working. But I will definitely revisit
this and see if there is something I can create
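A minimal sketch of such a script, assuming SSH key access to the guest and placeholder host, process, mountpoint, and image names:

ssh app-vm 'pgrep -x batchjob' && exit 0      # writer still running, try again later
ssh app-vm 'sync && fsfreeze -f /data'        # flush and freeze the filesystem
rbd snap create rbd/app-vm-disk@$(date +%Y%m%d-%H%M)
ssh app-vm 'fsfreeze -u /data'                # thaw as soon as the snapshot exists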
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Zoltan Arnold Nagy
> Sent: 08 July 2016 08:51
> To: Christian Balzer
> Cc: ceph-users ; n...@fisk.me.uk
> Subject: Re: [ceph-users] multiple
So I've narrowed this down a bit further, I *think* this is happening
during bucket listing - I started a radosgw process with increased logging,
and killed it as soon as I saw the RSS jump. This was accompanied by a ton
of logs from 'RGWRados::cls_bucket_list' printing out the names of the
files
Hi Christian,
On 08 Jul 2016, at 02:22, Christian Balzer wrote:
> Hello,
> On Thu, 7 Jul 2016 23:19:35 +0200 Zoltan Arnold Nagy wrote:
>> Hi Nick,
>> How large NVMe drives are you running per 12 disks?
>> In my current setup I have 4xP3700 per 36 disks but I feel like I could
>> get by with 2… Just
Thank you everyone, I just tested and verified the ruleset and applied
it so some pools. Worked like a charm!
K.
On 06-07-16 19:20, Bob R wrote:
> See http://dachary.org/?p=3189 for some simple instructions on testing
> your crush rule logic.
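For anyone else following that link, the test itself boils down to something like this (rule number and replica count are examples):

ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-statistics
crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-bad-mappings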
Hello,
You only need to create a pool and authentication in Ceph for Cinder.
Your configuration should be like this (This is an example configuration
with Ceph Jewel and Openstack Mitaka):
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool =
Hi Brad, Patrick, All...
I think I've understood this second problem. In summary, it is memory
related.
This is how I found the source of the problem:
1./ I copied and adapted the user application to run in another
cluster of ours. The idea was for me to understand the application
Hi Gaurav,
The following snippets should suffice (for Cinder, at least):
> [DEFAULT]
> enabled_backends=rbd
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = cinder-volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
>