Mailing lists matching ceph
ceph-users ceph.io
It is our own mistake. Please close it! Thanks!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
I use Ceph Octopus v15.
"ceph daemon osd.x ops" shows ops currently in flight, the number is different
from "ceph osd status".
Thanks so much to the Ceph team and community; all your efforts are amazing.
email id: esing...@es.iitr.ac.in
There is a command `ceph pg getmap`.
It produces a binary file. Are there any utility to decode it?
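In case it helps: the `ceph-dencoder` tool that ships with Ceph can usually decode such binary maps. A sketch, assuming the map was saved to `/tmp/pgmap` and that `PGMap` is the right type name (check `ceph-dencoder list_types` on your version):

```shell
# Fetch the binary map from the cluster, then decode it to JSON.
ceph pg getmap -o /tmp/pgmap
ceph-dencoder type PGMap import /tmp/pgmap decode dump_json
```

For OSD maps specifically, `osdmaptool --print <file>` is another decoder.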
Thanks.
It seems that the only way to modify the code is manually ...
Hey, did you ever find a resolution for this?
I do not have cache pool in it
I forget to add that the Ceph version is 17.2.5 managed with cephadm.
/Jimmy
Thanks for this, I've replied above but sadly a client eviction and remount
didn't help.
rbd --version
ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)
good news
probably `ceph mgr fail` will help.
Of course, the ceph cluster has sufficient capacity to handle the job
Package: ceph
Version: 12.2.11+dfsg1-2.1
Severity: grave
Justification: renders package unusable
Dear Maintainer,
I'm trying to deploy a 2 monitor ceph cluster with 2 arm64 server nodes.
root@ceph-node1:~# ceph -v
ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous
(stable
Hi all,
I have a problem installing ceph jewel with ceph-deploy (1.5.33) on ubuntu
14.04.4 (openstack instance).
This is my setup:
ceph-admin
ceph-mon
ceph-osd-1
ceph-osd-2
I've followed these steps from the ceph-admin node:
I have the user "ceph" created on all nodes and access fr
root@ceph-mgr:~# ceph balancer mode upmap
root@ceph-mgr:~# ceph balancer optimize myplan
root@ceph-mgr:~#
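For context, a typical upmap balancer session looks roughly like this (the plan name `myplan` is just an example):

```shell
ceph mgr module enable balancer   # make sure the balancer module is on
ceph balancer mode upmap          # choose the upmap optimizer
ceph balancer eval                # score the current distribution
ceph balancer optimize myplan     # build a plan
ceph balancer show myplan         # inspect the proposed changes
ceph balancer execute myplan      # apply it
```

Alternatively, `ceph balancer on` runs the optimizer automatically in the background.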
>
> On Mon, Oct 22, 2018 at 9:46 AM Dylan McCulloch wrote:
> >
> > On Mon, Oct 8, 2018 at 2:57 PM Dylan McCulloch > wrote:
> > >>
> > >> Hi all,
> > >>
> > >>
> > >> We have identified some unexpected blocking beh
Hi,
I'm a home user of ceph. Most of the time I can look at the email lists and
articles and figure things out on my own. I've unfortunately run into an
issue I can't troubleshoot myself.
Starting one of my monitors yields this error:
2020-01-17 15:34:13.497 7fca3d006040 0 mon.kvm2@-1(probing
>
> Try to install a completely new ceph cluster from scratch on fresh
> installed LTS Ubuntu by this doc
> https://docs.ceph.com/en/latest/cephadm/install/ . Many interesting
> discoveries await you.
on centos7 14.2.22, manual with no surprises (just installed, so not reall
Hi,
root@ppm-c240-ceph3:/var/run/ceph# ceph --admin-daemon
/var/run/ceph/ceph-osd.11.asok config show | less | grep rgw_max_chunk_size
rgw_max_chunk_size: 524288,
root@ppm-c240-ceph3:/var/run/ceph#
And the value is above 4 MB.
Regards,
--
Vivek Varghese Cherian
Hi,
http://ceph.com/community/careers/
Has non-Inktank Ceph jobs ;-)
Cheers
On 04/03/2014 19:06, Ivo Jimenez wrote:
Is there a listing of Ceph Jobs somewhere on the net (besides Inktank's)?
If so, can someone point me to it?
thanks a lot
Hi,
It would be nice to replace the files
https://github.com/ceph/ceph/blob/giant/deps.deb.txt
https://github.com/ceph/ceph/blob/giant/deps.rpm.txt
with a script that uses the corresponding package files:
https://github.com/ceph/ceph/blob/giant/debian/control
https://github.com/ceph/ceph/blob
Hi all,
I wanna deploy Ceph and I see the doc here
(http://docs.ceph.com/docs/dumpling/start/quick-start-preflight/). I
wonder how I could install Ceph from the latest source code instead of
specific prebuilt packages like `sudo apt-get install ceph-deploy`.
After I compile the Ceph source code, I
Hello Ceph lovers
You would have noticed that recently RedHat has released RedHat Ceph
Storage 1.3
http://redhatstorage.redhat.com/2015/06/25/announcing-red-hat-ceph-storage-1-3/
My question is:
- What's the exact version number of open-source Ceph provided with this
product (RHCS 1.3)?
Ray,
Just wondering, what's the benefit of binding the ceph-osd to a specific CPU
core?
Thanks
Jian
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ray Sun
Sent: Tuesday, June 30, 2015 12:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to use cgroup
On Tue, 13 Oct 2015, Nick Fisk wrote:
> Do you know if any of the Tiering + EC performance improvements
> currently waiting to merge will make the final release or is it likely
> they will get pushed back to Jewel?
>
> Specifically:-
> https://github.com/ceph/ceph/pull/5486
&g
Just a reminder that our Performance Ceph Tech Talk with Mark Nelson
will be starting in 1 hour.
If you are unable to attend there will be a recording posted on the
Ceph YouTube channel and linked from the page at:
http://ceph.com/ceph-tech-talks/
--
Best Regards,
Patrick McGarry
Director
I believe this is the source of the issues (cited line).
Purge all Ceph packages from this node and remove the user/group 'ceph',
then retry.
On 06/13/2016 02:46 PM, Fran Barrera wrote:
[ceph-admin][WARNIN] usermod: user ceph is currently used by process 1303
I am using ceph-dash as a dashboard for Ceph clusters.
There are contrib directories for apache, nginx and wsgi in the ceph-dash sources.
However, I cannot adapt those files to start ceph-dash as an Apache daemon
or any other daemon.
How do I run ceph-dash as a daemon?
thanks.
John Haan
Just a reminder that this month’s Ceph Tech Talk is starting in about 10m.
http://ceph.com/ceph-tech-talks/
Come join us to hear about: “PostgreSQL on Ceph under Mesos/Aurora with Docker.”
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http
Hey cephers,
Here are the links to both the video and the slides from the Ceph Tech
Talk today. Thanks again to Thorvald and Medallia for stepping forward
to present.
Video: https://youtu.be/OqlC7S3cUKs
Slides:
http://www.slideshare.net/Inktank_Ceph/2016jan28-high-performance-production
On Wed, 24 May 2017, Jim Curtis wrote:
> Hi,
>
> A few of us that work on Ceph in Kubernetes wanted to get an
> understanding of how the community is using Ceph with Kubernetes.
>
> Specifically, what are you using to deploy Ceph on Kubernetes? Are
> you using Ansible sc
Installed Mimic on an empty cluster. I yanked out an OSD about half an hour ago
and it's still showing as "in" with ceph -s, ceph osd stat, and ceph osd tree.
Is the timeout that long?
Hosts run Ubuntu 16.04; Ceph was installed using ceph-ansible branch stable-3.1.
The playbook didn't make the default rbd pool.
Can anyone confirm if the Ceph repos for Debian/Ubuntu contain packages for
Debian? I'm not seeing any, but maybe I'm missing something...
I'm seeing ceph-deploy install an older version of ceph on the nodes (from the
Debian repo) and then failing when I run "ceph-deploy osd ..." be
this series updates our backend to be ceph nautilus ready
(especially ceph-volume) and also improves the gui quite a bit
it features:
* select db/wal and the size on the gui for new osds
* show the manager separately from the monitors
* be able to start/stop/restart all services
* view the syslog
Hi,
Currently it's only set at OSD startup time. There is a PR in the works
to fix this however:
https://github.com/ceph/ceph/pull/29606
Thanks,
Mark
On 8/29/19 9:23 AM, Amudhan P wrote:
Hi,
How do I change "osd_memory_target" from the Ceph command line?
regar
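For reference, on recent releases this value can usually be changed without a restart via the centralized config store (the 4 GiB value below is just an example); at the time of the thread above, runtime changes only took effect once the linked PR had merged:

```shell
# Persistently set the target for all OSDs in the mon config store
ceph config set osd osd_memory_target 4294967296

# Or inject it into one running daemon without persisting it
ceph tell osd.0 injectargs '--osd_memory_target=4294967296'
```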
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
On Sat, Jun 13, 2020 at 2:31 PM masud parvez
wrote:
> Could anyone give me the latest version ceph install guide for ubuntu 20.04
> ___
> ceph-users mai
aha, thanks very much for pointing out, Anthony!
Just a summary for the screenshot pasted in my previous email. Based on my
understanding, "ceph daemon osd.x ops" or "ceph daemon osd.x
dump_ops_in_flight" shows the ops currently being processed in the os
Hi, Experts,
we have a Ceph cluster reporting HEALTH_ERR due to multiple old versions:
health: HEALTH_ERR
There are daemons running multiple old versions of ceph
After running `ceph versions`, we see three Ceph versions in {16.2.*}; these
daemons are Ceph OSDs.
Our question is: how
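A sketch of how one might inspect and resolve mixed daemon versions; the target version `16.2.13` is a placeholder, and the `ceph orch` commands assume a cephadm-managed cluster:

```shell
ceph versions            # shows how many daemons run each version
ceph orch upgrade start --ceph-version 16.2.13   # roll all daemons forward
ceph orch upgrade status # watch progress
```

The health error normally clears once all daemons report the same version.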
Hi,
The backport to pacific was rejected [1], you may switch to reef, when [2]
merged and released
[1] https://github.com/ceph/ceph/pull/55109
[2] https://github.com/ceph/ceph/pull/55110
k
Sent from my iPhone
> On Jan 25, 2024, at 04:12, changzhi tan <544463...@qq.com&
c65158
sys-cluster/ceph: Update dep on sys-fs/fuse to include slot
Package-Manager: Portage-2.3.6, Repoman-2.3.3
sys-cluster/ceph/ceph-0.94.9.ebuild    | 2 +-
sys-cluster/ceph/ceph-10.2.3-r2.ebuild | 2 +-
sys-cluster/ceph/ceph-10.2.5-r1.ebuild | 2 +-
sys-cluster/ceph/ceph-10.2.5-r3.ebuild | 2 +
Hi together,
I believe the deciding factor is whether the OSD was deployed using ceph-disk
(in "ceph-volume" speak, a "simple" OSD),
which means the metadata will be on a separate partition, or whether it was
deployed with "ceph-volume lvm".
The latter store
udevadm info -e >/tmp/1828617-2.out
~# ls -l /var/lib/ceph/osd/ceph*
-rw--- 1 ceph ceph 69 May 21 08:44
/var/lib/ceph/osd/ceph.client.osd-upgrade.keyring
/var/lib/ceph/osd/ceph-11:
total 24
lrwxrwxrwx 1 ceph ceph 93 May 28 22:12 block ->
/dev/ceph-33de740d-bd8c-4b47-a601-3e6e634e48
Hello Cephers
I am trying to set up RGW using ceph-deploy, which is described here:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
But unfortunately it doesn't seem to be working.
Is there something I am missing, or do you know a fix for this?
[root@ceph-node1
Dear Ceph Users,
I have the following situation in my small 3-node cluster:
--snip
root@ceph2:~# ceph status
cluster d1af2097-8535-42f2-ba8c-0667f90cab61
health HEALTH_WARN
1 mons down, quorum 0,1 ceph0,ceph1
monmap e1: 3 mons at
{ceph0=10.0.0.30:6789/0,ceph1=10.0.0.31
administrator@hvs001:~$ sudo cephadm shell -- ceph versions
[sudo] password for administrator:
Inferring fsid dd4b0610-b4d2-11ec-bb58-d1b32ae31585
Using recent ceph image
quay.io/ceph/ceph@sha256:6f2e9e45515e003fb332bbf9302c55d604810ff35978e88b75fe005a5f470f41
{
"mon": {
&qu
So I disabled ceph-disk and will chalk it up as a red herring to ignore.
On Thu, Jul 20, 2017 at 11:02 AM Roger Brown <rogerpbr...@gmail.com> wrote:
> Also I'm just noticing osd1 is my only OSD host that even has an enabled
> target for ceph-disk (ceph-disk@dev-sdb2.service).
&g
Hi,
troubles with ceph_init (after a test reboot)
# ceph_init restart osd
# ceph_init restart osd.0
/usr/lib/ceph/ceph_init.sh: osd.0 not found (/etc/ceph/ceph.conf defines
mon.xxx , /var/lib/ceph defines mon.xxx)
1 # ceph-disk list
[...]
/dev/sdc :
/dev/sdc1 ceph data, prepared, cluster
Hi,
I am a newcomer to Ceph. After having a look at the docs (BTW, it is nice
to see its concepts being implemented), I am trying to do some tests,
mainly to check the Python APIs to access the RADOS and RBD components. I am
following this quick guide:
http://ceph.com/docs/next/start/quick-ceph
How did you install ceph and which version exactly? Running as ceph
should only happen with >= 9.2, which is available on Xenial.
If I install ceph=10.0.3-0ubuntu1 on a new machine, /var/lib/ceph and
/var/run/ceph have ceph:ceph as owner, which looks fine to me. One could
discuss the owners
Ceph-disk didn't remove an osd from the cluster either. That has never been
a thing for ceph-disk or ceph-volume. There are other commands for that.
On Sat, Jun 2, 2018, 4:29 PM Marc Roos wrote:
>
> But leaves still entries in crush map and maybe also ceph auth ls, and
> the dir in
installing epel-release before installing
> ceph-deploy. If the order of installation is ceph-deploy followed by
> epel-release, the issue is being hit.
>
> Thanks,
> Pavana
>
> On Sat, Aug 29, 2015 at 10:02 AM, pavana bhat <pavanakrishnab...@gmail.com>
> wrot
Dear Team,
After executing: ceph-deploy -v osd prepare ceph-node2:/home/ceph/osd1
I'm getting some errors:
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info
Actually, it is. We took the single host getting started out, because
nobody would really deploy a distributed system like Ceph for production on
single host. The problem is that the default crush rule is set to the host
level, not the osd level.
Note, I think ceph-deploy mon create-initial
Thank you,
That did it.
On Sat, Feb 1, 2014 at 1:15 AM, John Wilkins john.wilk...@inktank.com wrote:
Actually, it is. We took the single host getting started out, because nobody
would really deploy a distributed system like Ceph for production on single
host. The problem is that the default
Hi,
I use the 0.64.1 version of Ceph on Debian Wheezy for the servers and Ubuntu
Precise (with Raring kernel 3.8.0-25) as the client.
My problem is the distribution of data on the cluster. I have 3 servers
each with 6 osd, but the distribution is very heterogeneous :
86% /var/lib/ceph/osd/ceph-15
On Mon, Jul 1, 2013 at 8:49 AM, Pierre BLONDEAU
pierre.blond...@unicaen.fr wrote:
Hi,
I use the 0.64.1 version of Ceph on Debian Wheezy for the servers and Ubuntu
Precise (with Raring kernel 3.8.0-25) as the client.
My problem is the distribution of data on the cluster. I have 3 servers each
Symbol
4,99%  ceph-osd  [.]  rocksdb::autovector<...*, 8ul>::size
3,14%  ceph-osd  [.]  std::
Hi All,
I am testing install ceph cluster from ceph-deploy 1.3.2, I get a python
error when execute ceph-deploy disk list.
Here is my output:
[root@ceph-02 my-cluster]# ceph-deploy disk list ceph-02
[ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bin/ceph-deploy disk list
ceph-02
[ceph-02][DEBUG
I'm following the tutorial at
http://docs.ceph.com/docs/v0.79/start/quick-ceph-deploy/ to deploy a
monitor using
% ceph-deploy mon create-initial
But I got the following errors:
...
[ceph-node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-node1.asok
Where should I go to get "ceph-installer" source code?
Rgds,
Shinobu
- Original Message -
From: "Ken Dreyer" <kdre...@redhat.com>
To: "ceph-devel" <ceph-de...@vger.kernel.org>, "ceph-users"
<ceph-users@lists.ceph.com>
Sent:
Hello,
I have couple of questions on ceph-mon with mon daemon:
Q1: Working command: /etc/init.d/ceph status mon
Not working : status ceph-mon id=node-13
Why does the first command work but not the second?
status ceph-mon id=node-13
status
On Tue, Oct 11, 2016 at 12:18 PM, Tomáš Kukrál <kukra...@fit.cvut.cz> wrote:
> Hi,
> I wanted to have more control over the configuration than provided by
> ceph-deploy and tried Ceph-ansible https://github.com/ceph/ceph-ansible.
>
> However, it was too complicated and i have
FYI: when creating these rgw pools, not all automatically have an
application enabled.
I created these
ceph osd pool create default.rgw
ceph osd pool create default.rgw.meta
ceph osd pool create default.rgw.control
ceph osd pool create default.rgw.log
ceph osd pool create .rgw.root
ceph osd
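The missing step the snippet alludes to is tagging each manually created pool with the rgw application, roughly like this (pool names taken from the list above):

```shell
# Tag the manually created RGW pools so the
# "application not enabled" health warning goes away
for p in default.rgw default.rgw.meta default.rgw.control \
         default.rgw.log .rgw.root; do
    ceph osd pool application enable "$p" rgw
done
```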
Also I'm just noticing osd1 is my only OSD host that even has an enabled
target for ceph-disk (ceph-disk@dev-sdb2.service).
roger@osd1:~$ systemctl list-units ceph*
UNIT LOAD ACTIVE SUB DESCRIPTION
● ceph-disk@dev-sdb2.service loaded failed failed Ceph disk
Hi,
Here is a simple fix for this bug, both ceph-common and ceph-base delete
'etc/ceph' on purge in their postrm script but only ceph-common owns
files in this directory. Just remove the cleanup code from the ceph-base
postrm script.
Cheers,
Michael
>F
are reasonably high-quality videos and
>include Ceph talks such as:
>"Bringing smart device failure prediction to Ceph"
>"Pains & Pleasures Testing the Ceph Distributed Storage Stack"
>"Ceph cloud object storage: the right way"
>"Lessons Learned
Hello Team,
I am trying to enable cephx in an existing cluster using ceph-ansible, and it
is failing when it tries to do `ceph --cluster ceph --name mon. -k
/var/lib/ceph/mon/ceph-computenode01/keyring auth get-key mon.`. I am sure
the `mon.` user exists because I created
"ceph log last cephadm" shows the host was added without errors.
"ceph orch host ls" shows the host as well.
"python3 -c import sys;exec(...)" is running on the host.
But still no devices on this host is listed.
Where else can I check?
Thanks!
Tony
> -Origi
In case someone else runs into the same issue in future:
I came out of this issue by installing epel-release before installing
ceph-deploy. If the order of installation is ceph-deploy followed by
epel-release, the issue is being hit.
Thanks,
Pavana
On Sat, Aug 29, 2015 at 10:02 AM, pavana bhat
Hi,
I'm trying to install ceph for the first time following the quick
installation guide. I'm getting the below error, can someone please help?
ceph-deploy install --release=firefly ceph-vm-mon1
[*ceph_deploy.conf*][*DEBUG* ] found configuration file at:
/home/cloud-user/.cephdeploy.conf
Hi, can't pretend that I have all the answers (or any of them!) but I've
also been unable to deploy a mon node that doesn't appear in the 'mon
initial members' list. However, the 'No such file or directory' error is
something that I don't remember. Did you run ceph-deploy install against
the node first
On Mon, May 26, 2014 at 5:22 AM, JinHwan Hwang calanc...@gmail.com wrote:
I'm trying to install ceph 0.80.1 on ubuntu 14.04. All other things goes
well except 'activate osd' phase. It tells me they can't find proper fsid
when i do 'activate osd'. This is not my first time of installing ceph
Hi,
I have faced a similar issue. This happens if the ceph disks aren't
purged/cleaned completely. Clear of the contents in the /dev/sdb1 device.
There is a file named ceph_fsid in the disk which would have the old
cluster's fsid. This needs to be deleted for it to work.
Hope it helps.
Sharmila
This reminds me that we should also schedule some sort of meetup during
the Openstack summit which is also in Paris !
--
David Moreau Simard
On 2014-09-01 at 8:06 AM, "Loic Dachary" l...@dachary.org wrote:
Hi Ceph,
The next Paris Ceph meetup is scheduled immediately after the Ceph day
I was trying to get systemd to bring up the monitor using the new systemd
files in Giant. However, I'm not finding the systemd files included in the
CentOS 7 packages. Are they missing or am I confused about how it should
work?
ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578
hi,
Ubuntu 14.04 currently ships ceph 0.79. After firefly release ubuntu
maintainer will update ceph version in ubuntu's repos.
On 2014.04.30 07:08, Kenneth wrote:
Latest Ceph release is Firefly v0.80 right? Or is it still in beta?
And Ubuntu is on 14.04.
Will I be able to install ceph 0.80
How to submit patch: https://github.com/ceph/ceph/blob/master/SubmittingPatches
You can register a bug on tracker.ceph.com/projects/ceph/issues
On Wed, Apr 30, 2014 at 4:30 PM, You, Ji ji@intel.com wrote:
Hi,
A simple question: how do I submit a patch for Ceph? I just found the steps
Looks like ceph pg dump all -f json = ceph pg dump summary.
On Fri, May 16, 2014 at 1:54 PM, Cao, Buddy buddy@intel.com wrote:
Hi there,
"ceph pg dump summary -f json" does not return as much data as "ceph pg
dump summary"; are there any ways to get the full JSON-format data for
"ceph
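For reference, the JSON variants of these commands are spelled roughly like this (a sketch; exact sections vary by release):

```shell
ceph pg dump all -f json-pretty      # full PG dump as readable JSON
ceph pg dump summary -f json-pretty  # just the summary section
```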
thanks! Hadn't noticed there were a couple of Suse openings there.
On Tue, Mar 4, 2014 at 1:35 PM, Loic Dachary l...@dachary.org wrote:
Hi,
http://ceph.com/community/careers/
Has non-Inktank Ceph jobs ;-)
Cheers
On 04/03/2014 19:06, Ivo Jimenez wrote:
Is there a listing of Ceph Jobs
On 7 Aug 2013 at 10:20, Da Chun ng...@qq.com wrote:
On Ubuntu, we can start/stop ceph daemons separately as below:
start ceph-mon id=ceph0
stop ceph-mon id=ceph0
How to do this on Centos or rhel? Thanks!
I think this should work:
$ service ceph stop mon.ceph0
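On systemd-based CentOS/RHEL releases, the per-daemon equivalent would be along these lines (unit names assume the default cluster name and a monitor id of `ceph0`):

```shell
systemctl stop ceph-mon@ceph0     # stop one monitor
systemctl start ceph-mon@ceph0    # start it again
systemctl status ceph-mon@ceph0   # check its state
```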
Hi,
I accidentally forced push to ceph master at 11pm CEST 8 december 2013 the
following:
https://github.com/ceph/ceph/commit/17e0a7b2942899e3f4307bf3e9c41bcb4304619d
https://github.com/ceph/ceph/commit/d5e44cf8b233e517d8f7f26cd556ad0ac15714c1
https://github.com/ceph/ceph/commit
I see ceph-deploy mounts the ceph data drives as
/dev/sdc1 on /var/lib/ceph/osd/ceph-2 type xfs (rw,noatime)
/dev/sdd1 on /var/lib/ceph/osd/ceph-3 type xfs (rw,noatime)
How do I add a mount option (like -o inode64, etc.) with ceph-deploy in the
ceph-deploy osd create
or
ceph-deploy osd
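ceph-deploy itself has no mount-option flag for the data disks; the usual route is a ceph.conf setting that the OSD init path picks up when mounting. A sketch, assuming the historical `osd mount options xfs` option applies to your version:

```shell
# Add XFS mount options to ceph.conf before preparing/activating OSDs
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd mount options xfs = rw,noatime,inode64
EOF
```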
Any thoughts anyone?
Is it safe to perform OS version upgrade on the osd and mon servers?
Thanks
Andrei
- Original Message -
From: "Andrei Mikhailovsky" <and...@arhont.com>
To: ceph-us...@ceph.com
Sent: Tuesday, 20 October, 2015 8:05:19 PM
Subject: [
Hello, everyone!
I just tried to create a new Ceph cluster, using 3 LXC containers as
monitors, and the 'ceph-deploy mon create-initial' command fails for each
of the monitors with a 'initctl: Event failed' error, when running the
following command:
[ceph-mon-01][INFO ] Running command: sudo
This may see more traction in ceph-users and ceph-devel.
Most people don't usually subscribe to ceph-community.
Cheers!
-Joao
On 09/08/2015 11:44 AM, Robert Sander wrote:
> Hi,
>
> the next meetup in Berlin takes place on September 28 at 18:00 CEST.
>
> Please RSVP at http:/
Hi, experts
While doing the command
ceph-fuse /home/ceph/cephfs
I got the following error :
ceph-fuse[28460]: starting ceph client
2015-09-17 16:03:33.385602 7fabf999b780 -1 init, newargv = 0x2c730c0 newargc=11
ceph-fuse[28460]: ceph mount failed with (110) Connection timed out
ceph
Can you share the output with us?
Rgds,
Shinobu
- Original Message -
From: "Wade Holler" <wade.hol...@gmail.com>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Friday, January 8, 2016 7:29:07 AM
Subject: [ceph-users] ceph osd tree output
Sometimes
You should test out cephfs exported as an NFS target.
- Original Message -
From: "david" <wan...@neunn.com>
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 4:36:17 AM
Subject: [ceph-users] Ceph and NFS
Hello All.
Does anyone provide Ceph rbd/rgw/ce
I think the biggest change is systemd?
It works fine with Debian Jessie, so I think it should be trivial to make it
run on Ubuntu 16.04.
- Original Message -
From: "Robertz C." <robe...@riseup.net>
To: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Wed
Hello, everyone!
We are trying to create a custom cluster name using the latest ceph-deploy
version (1.5.39), but we keep getting the error:
*'ceph-deploy new: error: subnet must have at least 4 numbers separated by
dots like x.x.x.x/xx, but got: cluster_name'*
We tried to run the new command
Hello,
Since ceph-disk is now deprecated, it would be great to update the
documentation to also cover the corresponding ceph-volume processes.
for example :
add-or-rm-osds =>
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
bluestore-migration =>
http://docs.ceph.com/docs/
On Fri, May 18, 2018 at 9:55 AM, Marc Roos <m.r...@f1-outsourcing.eu> wrote:
>
> Should ceph osd status not be stdout?
Oops, that's a bug.
http://tracker.ceph.com/issues/24175
https://github.com/ceph/ceph/pull/22089
John
> So I can do something like this
>
> [@ ~]# ceph
401 - 500 of 186529 matches