mgr (same at every mgr node)
mgr seems to be listening and ceph-mgr is running, but the dashboard shows:
No active ceph-mgr instance is currently running the dashboard. A failover
may be in progress. Retrying in 5 seconds...
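In case it helps others who land here: the active/standby mgr state is visible in the mgrmap section of `ceph status --format json`. A minimal parsing sketch (the sample JSON below is invented for illustration; in practice you would feed in the real command output):

```python
import json

# Invented sample of the "mgrmap" section of `ceph status --format json`;
# replace with the real output, e.g. captured via subprocess.
status_json = """
{
  "mgrmap": {
    "available": true,
    "active_name": "mgr01",
    "standbys": [{"name": "mgr02"}, {"name": "mgr03"}]
  }
}
"""

mgrmap = json.loads(status_json)["mgrmap"]

if mgrmap.get("available"):
    print("active mgr:", mgrmap["active_name"])
else:
    print("no active mgr - the dashboard cannot be served")

for standby in mgrmap.get("standbys", []):
    print("standby:", standby["name"])
```

If no mgr ever becomes active, restarting the ceph-mgr daemons and re-enabling the module (`ceph mgr module enable dashboard`) is usually the next thing to try.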
Regards, I
--
--
####
Yes, you are right.
On Sun., Mar. 18, 2018, 12:13, Egoitz Aurrekoetxea <ego...@sarenet.es>
wrote:
> Hi Iban,
>
>
> But that... I assume it would only tell you who is accessing a
> resource... not actually lock one out from the other... wouldn't it?
>
>
> Cheer
Hi Egoitz,
I think I did something similar, using a different ceph pool key for each
pool.
Regards, I
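For the archive, the per-pool separation can be expressed as cephx capabilities; a hypothetical keyring entry (the user name, pool name and caps below are only an illustrative sketch):

```ini
[client.pool-a-user]
    key = <redacted>
    caps mon = "allow r"
    caps osd = "allow rwx pool=pool-a"
```

This restricts the client to one pool, but it does not by itself stop two hosts that share the same key from mapping the same image.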
On Sat., Mar. 17, 2018, 12:46, Egoitz Aurrekoetxea
wrote:
> Good morning,
>
>
> Does some kind of config param exist in Ceph to avoid two hosts accessing
> the same vm
/sdm1
Regards, I
Hi Wido,
The disk was empty; I checked that there were no remapped pgs before
running ceph-disk prepare. Should I re-run ceph-disk?
Regards, i
On Mon., Nov. 20, 2017, 14:12, Wido den Hollander <w...@42on.com> wrote:
>
> > On 20 November 2017 at 14:02 Iban Cab
--
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get=0xD9DF0B3D6C8C08AC
Bertrand Russell: *"The problem
/rados/deployment/ceph-deploy-osd/
Regards, I
2017-11-14 10:58 GMT+01:00 Dai Xiang <xiang@sky-data.cn>:
> On Tue, Nov 14, 2017 at 10:52:00AM +0100, Iban Cabrillo wrote:
> > Hi Dai Xiang,
> > There is no OSD available in your cluster at this moment, so you can't
20e=up:creating}, 1 up:standby
> > > osd: 0 osds: 0 up, 0 in
> > >
> > > data:
> > > pools: 2 pools, 128 pgs
> > > objects: 0 objects, 0 bytes
> > > usage: 0 kB used, 0 kB / 0 kB avail
> > > pgs: 100.000% pgs un
Hi,
Did you configure the ceph user and add it to sudoers?
Cheers, I
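For reference, the setup ceph-deploy expects is roughly the following sudoers fragment (assuming the deploy user is called `ceph`; adjust to your user name):

```
# /etc/sudoers.d/ceph -- install via visudo, mode 0440
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
```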
On Sun., Apr. 9, 2017, 13:33, Zeeshan Haider
wrote:
> Hello guys,
> I have been trying to build a basic ceph cluster on an AWS Ubuntu EC2
> instance, but on monitor creation I see the status like this
>
> Many Thanks.
>
> On Mar 13, 2017 7:32 PM, "Iban Cabrillo" <cabri...@ifca.unican.es> wrote:
>
>> Hi Yair,
>> This is my conf:
>>
>> [client.rgw.cephrgw]
>> host = cephrgw01
>> rgw_frontends = "civetweb port=8080s ssl_certifica
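For comparison, a complete civetweb SSL frontend definition looks roughly like this (the certificate path is a placeholder; the pem file must contain both the key and the certificate):

```ini
[client.rgw.cephrgw]
host = cephrgw01
rgw_frontends = "civetweb port=8080s ssl_certificate=/etc/ceph/private/keyandcert.pem"
```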
Hi,
Are you sure ceph-disk is installed on the target machine?
Regards, I
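A quick generic way to check from a script whether an executable such as ceph-disk is on the PATH (plain Python, nothing ceph-specific):

```python
import shutil

def find_tool(name):
    """Return the full path of an executable found on PATH, or None."""
    return shutil.which(name)

path = find_tool("ceph-disk")
if path:
    print("ceph-disk found at", path)
else:
    print("ceph-disk not found - install the ceph package on this node")
```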
On Wed., Mar. 1, 2017, 14:38, gjprabu wrote:
> Hi All,
>
> Has anybody faced a similar issue, and is there any solution for this?
>
> Regards
> Prabu GJ
>
>
> On Wed, 01 Mar 2017 14:21:14
Hi,
Could I reinstall the server and try to only activate the OSD again
(without zap and prepare)?
Regards, I
2017-02-24 18:25 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> Hi Eneko,
> Yes, the three mons are up and running.
> I do not have any other servers to plu
_
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
--
.es>:
> Hi Iban,
>
> Is the monitor data safe? If it is, just install jewel in other servers
> and plug in the OSD disks, it should work.
>
> On 24/02/17 at 14:41, Iban Cabrillo wrote:
>
> Hi,
> We have a serious issue. We have a mini cluster (jewel versi
is
corrupted.
Is there any way to solve this situation?
Any idea would be great!!
Regards, I
--
any change on this parameter with the new jewel version
(ceph-radosgw-10.2.5-0.el7.x86_64)?
regards, I
--
####
..
> .
> [Targets: 0]
>
>
> After connecting to the two targets with the iscsi-initiator, there
> are two iscsi devices: /dev/sdm and /dev/sdn,
>
> but multipath doesn't recognize that they are the same device.
>
>
> By leaning multipat
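A common cause (just a guess from the symptoms) is that the two gateways report different SCSI WWIDs for the same backing image, so multipath sees two distinct devices; the gateways must export identical IDs. Once they do, the paths can be grouped explicitly (the wwid below is a placeholder):

```
# /etc/multipath.conf -- sketch only
multipaths {
    multipath {
        wwid  36001405aaaaaaaaaaaaaaaaaaaaaaaa   # placeholder; use the value from `multipath -ll`
        alias rbd-lun0
    }
}
```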
Gk4strXE0UbWW4yzg==
Any Idea?
regards, I
--
--
##
Hi Jon,
Then this is not the issue; RBD has been supported on KVM for a long time.
Cheers, I
2016-06-14 21:40 GMT+02:00 Jonathan D. Proulx <j...@csail.mit.edu>:
> On Tue, Jun 14, 2016 at 05:48:11PM +0200, Iban Cabrillo wrote:
> :Hi Jon,
> : Which is the hypervisor used for
--
#
--
Hi,
Could someone please give me some advice?
regards, I
2016-05-20 10:22 GMT+02:00 Iban Cabrillo <cabri...@ifca.unican.es>:
Hi cephers,
Could someone tell me the right steps to bring an OSD server back to life?
The data and journal disks seem to be OK, but the dual SD slot for the OS
has failed.
Regards, I
--
> # systemctl start ceph-mon@$(hostname -s)
>
> instead. As far as I know, numbers are only used for osd services;
> mon and mds services use the short hostname to identify themselves.
>
> Best regards
> Karsten
>
> 2016-04-27 19:54 GMT+02:00 Iban Cabrillo <cabri...@ifca.unica
with
>
> systemctl mask ceph.service
>
> There are already several open pull requests regarding this issue
> (
> https://github.com/ceph/ceph/pulls?utf8=%E2%9C%93=is%3Apr+is%3Aopen+systemd
> ),
> so I hope it will be fixed with the next point release.
>
> Best regards
> Karsten
--cluster
ceph -f
the mon starts fine (health HEALTH_OK).
Any idea about this?
regards, I
--
off silently as well.
>>
>> While you could of course do a NFS (some pains) or iSCSI (major pains)
>> head for Ceph the pains and reduced performance make it not an attractive
>> proposition.
>>
>> Christian
>>
>
-) 2015-11-16 12:01:38*
> *===*
>
> *Check if the NTP server is up and running correctly.*
> *If both of the above cinder-volume services are in the enabled state, only
> then will your cinder commands work.*
>
>
>
> On Mon, Nov 16, 2015 at 5:39 PM, Iban Cabrillo <cabri...@ifca.unican.es>
>
elow command:
>
> cinder-manage service list
>
>
> On Mon, Nov 16, 2015 at 4:45 PM, Iban Cabrillo <cabri...@ifca.unican.es>
> wrote:
>
>> cloud:~ # cinder list
>>
>> +--+---+--+--+---
-6e1c86d5-efb6-469a-bbad-58b1011507bf
volume-7da08f12-fb0f-4269-931a-d528c1507fee
Hi Vasily,
Did you see anything interesting in the logs? I don't really know
where else to look. Everything seems to be OK to me.
Any help would be very appreciated.
2015-11-06 15:29 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> Hi Vasily,
> Of course,
> from ci
ed vcpus: 24
.
I can attach the full log if you want.
2015-11-06 13:48 GMT+01:00 Vasiliy Angapov <anga...@gmail.com>:
> There must be something in /var/log/cinder/volume.log or
> /var/log/nova/nova-compute.log that points to the problem. Can you
> p
LT.
>
> 2015-11-06 19:45 GMT+08:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> > Hi,
> > One more step debugging this issue (hypervisor/nova-compute node is XEN
> > 4.4.2):
> >
> > I think the problem is that libvirt is not getting the correct user or
>
= *AQAonAdWS3iMJxxj9iErv001a0k+vyFdUg==*
Any idea would be welcome.
regards, I
2015-11-04 10:51 GMT+01:00 Iban Cabrillo <cabri...@ifca.unican.es>:
> Dear Cephers,
>
> I still can't attach volumes to my cloud machines; ceph version is 0.94.5
> (9764da52395923e0b32908d83a9f7304401fe
; > genericfilestorebackend(/var/lib/ceph/osd/ceph-2
> > ) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap'
> > config option
> > 2015-10-30 01:07:02.681342 7f0ed0d067c0 0
> > genericfilestorebackend(/var/lib/ceph/osd/ceph-2
> > ) detect_features: syncfs(2) syscall fully supported (
vdc
Thanks in advance, I
Hi all,
During the last week I have been trying to integrate a pre-existing ceph
cluster with our openstack instance.
The ceph-cinder integration was easy (or at least I think so!!)
There is only one volume to attach block storage to our cloud machines.
The client.cinder has permission on
--
--
Hi Udo,
Thanks a lot! The resync flag has resolved my doubts.
Regards, I
2014-10-16 12:21 GMT+02:00 Udo Lembke ulem...@polarzone.de:
On 15.10.2014 22:08, Iban Cabrillo wrote:
Hi Cephers,
I have another question related to this issue: what would be the
procedure to restore a server
--
--
...@gmail.com wrote:
Thanks for your reply.
In your case, you deploy 3 osds in one server; my case is 3 osds in 3
servers.
How should I do that?
2014-07-21 17:59 GMT+07:00 Iban Cabrillo cabri...@ifca.unican.es:
Dear,
I am not an expert, but yes, this is possible.
I have a RAID1 SAS disk journal
Hi Pratik,
I am not an expert, but I think you need one more OSD server; the default
pools (rbd, metadata, data) have 3 replicas by default.
Regards, I
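If a third OSD server is not available, lowering the replica count is the usual workaround; e.g. a ceph.conf default for newly created pools (a sketch, firefly-era option names):

```ini
[global]
osd pool default size = 2
osd pool default min size = 1
```

Existing pools would still need an explicit `ceph osd pool set <pool> size 2`.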
On 18/07/2014 14:19, Pratik Rupala pratik.rup...@calsoftinc.com
wrote:
Hi,
I am deploying the firefly version on CentOS 6.4. I am following
Hi,
I am having some trouble with ceph mon stability.
Every couple of days the mons die. I only see this error in the logs:
2014-07-08 14:24:53.056805 7f713bb5b700 -1 mon.cephmon02@1(peon) e2 ***
Got Signal Interrupt ***
2014-07-08 14:24:53.061795 7f713bb5b700 1 mon.cephmon02@1(peon) e2
Hi folks,
I am following the test installation step by step, and checking some
configuration before trying to deploy a production cluster.
Now I have a healthy cluster with 3 mons + 4 OSDs.
I have created a pool to which all osd.x belong, and two more: one for two
of the servers and the other for the other
to chooseleaf in your rules.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
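The chooseleaf change Greg mentions looks roughly like this in a decompiled CRUSH map (rule name and numbers are illustrative only):

```
rule replicated_across_hosts {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

With `type host` in the chooseleaf step, each replica lands on a different server rather than merely a different osd.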
Hi Alfredo and folks,
Could you have a look at this?
Does anyone else have any idea why I am getting this error?
Thanks in advance, I
2014-06-27 16:37 GMT+02:00 Iban Cabrillo cabri...@ifca.unican.es:
Hi Alfredo,
This is the complete procedure:
On OSD node:
[ceph@ceph02 ~]$ sudo parted
+02:00 Alfredo Deza alfredo.d...@inktank.com:
On Mon, Jun 30, 2014 at 11:22 AM, Iban Cabrillo cabri...@ifca.unican.es
wrote:
Hi,
I am a little frustrated. After 6 attempts at deploying a test ceph cluster
I always get the same error in the osd activation stage.
The version is firefly (from the el6 repo), 3 mons, 3 osds, all of them Xen
VMs. The mons come up correctly and, I do not know why, so do two of the
osd servers after a lot of
paste the full ceph-deploy logs? there are a few reasons why
this might be happening.
On Fri, Jun 27, 2014 at 6:42 AM, Iban Cabrillo cabri...@ifca.unican.es
wrote:
Dear,
I am trying to deploy a new test following the instructions for the
latest firefly version from the yum repo.
Installing : ceph-libs-0.80.1-2.el6.x86_64
Installing : ceph-0.80.1-2.el6.x86_64
The initial setup contains 3 mons and small osds (1GB per journal)
The cluster has been
, 2014, at 3:04 AM, Iban Cabrillo cabri...@ifca.unican.es
wrote:
Hi,
I am really a newbie with ceph.
I was trying to deploy a ceph test on SL6.2; package installation went
OK.
I have created an initial cluster with 3 machines (cephadm, ceph02 and
ceph03); passwordless ssh using the ceph user is OK,
using a config file: cephcloud.conf
[global]
-rw-r--r-- 1 root root 21 Jun 19 13:38 magic
-rw-r--r-- 1 root root 4 Jun 19 13:43 store_version
-rw-r--r-- 1 root root 42 Jun 19 13:43 superblock
-rw-r--r-- 1 root root 2 Jun 19 13:43 whoami
regards, I
2014-06-19 10:36 GMT+02:00 Iban Cabrillo cabri
,
and migrate data between them automatically (most used, size, last access
time)*
*Could anyone please clarify this point to me?*
*Regards, I*
--