ceph-volume lvm zap --destroy $DEVICE
From: ceph-users on behalf of Vadim Bulst
Sent: Tuesday, 12 June 2018 4:46:44 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Filestore -> Bluestore
Thanks Sergey.
Could you elaborate on your answer a bit more? When
Hi Pardhiv,
Thanks for sharing!
MJ
On 11-6-2018 22:30, Pardhiv Karri wrote:
Hi MJ,
Here are the links to the script and config file. Modify the config file
as you wish; values in the config file can be changed while the script
is running. The script can be run from any monitor or
no change:
root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
--> Zapping: /dev/dm-0
Running command: /sbin/cryptsetup status /dev/mapper/
stdout: /dev/mapper/ is inactive.
--> Skipping --destroy because no associated physical volumes are found
for /dev/dm-0
Running comman
Jason Dillaman wrote:
One more question: how should I set the 'rbd-read-only' profile properly?
I tried to set it for 'client.iso' on both the 'iso' and 'jerasure21' pools,
and this did not work. Setting the profile on both pools to 'rbd' worked, but I
don't want my iso images to be accidentally m
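For what it's worth, the cap-profile form documented for read-only RBD access looks roughly like this (client and pool names taken from the question; treat the exact syntax as an assumption to verify against your release's docs):

ceph auth caps client.iso \
    mon 'profile rbd' \
    osd 'profile rbd-read-only pool=iso, profile rbd-read-only pool=jerasure21'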
Hi Guys,
I've inherited a CephFS cluster and I'm fairly new to CephFS.
The cluster was down and I somehow managed to bring it up again.
But now there are some problems that I can't fix that easily.
This is what 'ceph -s' gives me as info:
[root@pcl241 ceph]# ceph -s
cluster cde1487e-f930-417a
I have completed the installation of iSCSI.
The documentation is wrong in several parts.
Is there any way to contribute and update it with the right commands?
I found only
https://github.com/ceph/ceph/tree/master/doc
which is inside the main Ceph project.
Should I use this to fix the docs?
On 11/
Hi Herbert,
could you please run "ceph osd df"?
Cheers,
Vadim
On 12.06.2018 11:06, Steininger, Herbert wrote:
Hi Guys,
I've inherited a CephFS cluster and I'm fairly new to CephFS.
The cluster was down and I somehow managed to bring it up again.
But now there are some problems that I can't fix
Figure out which OSDs are too full:
ceph osd df tree
Then you can either reduce their weight:
ceph osd reweight <osd-id> 0.9
Or increase the threshold above which an OSD is considered too full for
backfills.
How this is configured depends on the version; I think in your version it
is still
ceph p
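A minimal sketch of the reweight path described above, with a hypothetical OSD id and weight:

ceph osd df tree          # spot the over-full OSDs
ceph osd reweight 7 0.9   # temporarily lower the weight of osd.7
ceph osd df tree          # re-check utilization once backfill settles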
Hi everyone again,
I am continuing to set up my testing Ceph cluster (1 node so far).
I changed 'chooseleaf' from 'host' to 'osd' in the CRUSH map
to make it run healthy on 1 node. For the same purpose,
I also set 'minimum_gateways = 1' for Ceph iSCSI gateway.
Hi ,
I have created a Ceph cluster with 3 OSDs and everything is running fine.
Our public network configuration parameter was set to
10.xx.xx.0/24 in the ceph.conf file, as shown below.
If I reconfigure my IP address from 10.xx.xx.xx to 192.xx.xx.xx by
changing the public network and
On 12. juni 2018 12:17, Muneendra Kumar M wrote:
conf file as shown below.
If I reconfigure my IP address from 10.xx.xx.xx to 192.xx.xx.xx by
changing the public network and mon_host field in the ceph.conf,
will my cluster work as it is?
Below are my ceph.conf file details.
Any input
On Tue, Jun 12, 2018 at 4:37 AM, Vadim Bulst wrote:
> no change:
>
>
> root@polstor01:/home/urzadmin# ceph-volume lvm zap --destroy /dev/dm-0
> --> Zapping: /dev/dm-0
This is the problem right here. Your script is using the dm device
that belongs to an LV.
What you want to do here is destroy/zap
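For illustration, ceph-volume can be pointed at the VG/LV or at the multipath mapper name rather than the bare dm-N node (the VG/LV names below are made up; the mapper name is the one that appears later in this thread):

ceph-volume lvm zap --destroy ceph-vg/osd-block-lv             # zap by vg/lv when an LV sits on the multipath device
ceph-volume lvm zap --destroy /dev/mapper/35000c500866f8947    # or zap the multipath device via its mapper name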
Hi Alfredo,
thanks for your help. Just to make this clear: /dev/dm-0 is the name of
my multipath disk:
root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-name-35000c500866f8947 ->
../../dm-0
lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm
On Tue, Jun 12, 2018 at 6:47 AM, Vadim Bulst wrote:
> Hi Alfredo,
>
> thanks for your help. Just to make this clear: /dev/dm-0 is the name of my
> multipath disk:
>
> root@polstor01:/home/urzadmin# ls -la /dev/disk/by-id/ | grep dm-0
> lrwxrwxrwx 1 root root 10 Jun 12 07:50 dm-name-35000c500866f8
I cannot release this lock! This is an expansion shelf connected with
two cables to the controller. If there were no multipath management, the
OS would see every disk at least twice. Ceph has to deal with it
somehow. I guess I'm not the only one who has a setup like this.
Best,
Vadim
On 12.0
Hi,
I am designing a new Ceph cluster and was wondering whether I should bond
the 10 Gb adapters or use one for the public network and one for the private.
The advantage of bonding is simplicity and, maybe, performance.
The catch, though, is that I cannot use jumbo frames, as most of my servers
that need to "consume" st
On 06/12/2018 02:00 PM, Steven Vacaroaia wrote:
> Hi,
>
> I am designing a new ceph cluster and was wondering whether I should
> bond the 10 GB adapters or use one for public one for private
>
> The advantage of bonding is simplicity and, maybe, performance
> The catch though is that I cannot
> On 12 Jun 2018, at 14.00, Steven Vacaroaia wrote:
>
> Hi,
>
> I am designing a new ceph cluster and was wondering whether I should bond the
> 10 GB adapters or use one for public one for private
>
> The advantage of bonding is simplicity and, maybe, performance
> The catch though is that I
On 2018-06-12 01:01, Jialin Liu wrote:
> Hello Ceph Community,
>
> I used the libradosstriper API to test the striping feature, but it doesn't
> seem to improve performance at all. Can anyone advise what's wrong with my
> settings:
>
> The rados object store testbed at my center has
> osd: 48
On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst wrote:
> I cannot release this lock! This is an expansion shelf connected with two
> cables to the controller. If there is no multipath management, the os would
> see every disk at least twice. Ceph has to deal with it somehow. I guess I'm
> not the onl
Hi all,
I have a cluster with 6 OSD nodes, each with 20 disks; all 120
disks are strictly identical (model and size).
(The cluster also includes 3 MON servers on 3 other machines.)
For design reasons, I would like to separate my cluster storage into 2
pools of 60 disks.
My idea is
Hi,
my Ceph cluster has two pools, and I reinstalled the OSDs of one complete
host. Ceph is now recovering from this.
I was expecting that setting
ceph osd pool set pool_a recovery_priority 5
ceph osd pool set pool_b recovery_priority 10
would lead to pool_a being recovered first (btw. I t
You should pass underlying device instead of DM volume to ceph-volume.
On Jun 12, 2018, 15:41 +0300, Alfredo Deza , wrote:
> On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst
> wrote:
> > I cannot release this lock! This is an expansion shelf connected with two
> > cables to the controller. If there i
Yeah I've tried it - no success.
On 12.06.2018 15:41, Sergey Malinin wrote:
You should pass underlying device instead of DM volume to ceph-volume.
On Jun 12, 2018, 15:41 +0300, Alfredo Deza , wrote:
On Tue, Jun 12, 2018 at 7:04 AM, Vadim Bulst
wrote:
I cannot release this lock! This is an exp
On Tue, Jun 12, 2018 at 4:53 AM, Wladimir Mutel wrote:
> Jason Dillaman wrote:
>
>>> One more question, how should I set profile 'rbd-read-only'
>>> properly
>>> ? I tried to set is for 'client.iso' on both 'iso' and 'jerasure21'
>>> pools,
>>> and this did not work. Set profile on both p
Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf, and
well, it is definitely not generic. Is there another way to get this
setup working? Or do I have to go back to filestore?
Cheers,
Vadim
On 12.06.2018 14:41, Alfredo Deza wrote:
On Tue, Jun 12, 2018 at 7:04 AM, Vadim B
On Tue, Jun 12, 2018 at 10:06 AM, Vadim Bulst
wrote:
> Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf and well
> it is definitely not generic. Is there a other way to get this setup
> working? Or do I have to go back to filestore?
If you have an LV on top of your multipath
Okay, I'll try that.
On 12 Jun 2018 4:24 pm, Alfredo Deza wrote:
On Tue, Jun 12, 2018 at 10:06 AM, Vadim Bulst
wrote:
> Thanks Alfredo - I can imagine why. I edited the filter in lvm.conf and well
> it is definitely not generic. Is there a other way to get this setup
> working? Or do I have to
On Tue, Jun 12, 2018 at 5:30 AM, Wladimir Mutel wrote:
> Hi everyone again,
>
> I continue set up of my testing Ceph cluster (1-node so far).
> I changed 'chooseleaf' from 'host' to 'osd' in CRUSH map
> to make it run healthy on 1 node. For the same purpose,
>
Hi,
Thanks, guys, for your answers.
'ceph osd df' gives me:
[root@pcl241 ceph]# ceph osd df
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
1 18.18999 1.0 18625G 15705G 2919G 84.32 1.04 152
0 18.18999 1.0 18625G 15945G 2680G 85.61 1.06 165
3 18.18999 1.0 18625G 14755G
On Tue, 12 June 2018 at 15:06, Hervé Ballans <herve.ball...@ias.u-psud.fr> wrote:
> Hi all,
>
> I have a cluster with 6 OSD nodes, each has 20 disks, all of the 120
> disks are strictly identical (model and size).
> (The cluster is also composed of 3 MON servers on 3 other machines)
>
> For design
On Tue, Jun 12, 2018 at 5:08 AM, Max Cuttins wrote:
> I have completed the installation of ISCSI.
> The documentation is wrong in several parts of it.
>
> Is it anyway to contribute and update with right commands?
> I found only
>
> https://github.com/ceph/ceph/tree/master/doc
>
> Which is inside
Good catch but sadly that didn't make any difference. Is there some permission
I'm missing?
On Monday, 11 June 2018, 14:10:01 CEST, Jason Dillaman wrote:
It appears like you have a typo in your cap: "rdb_children" -> "
Never mind, I had another typo. Everything is working now. Thank you!
On Monday, 11 June 2018, 14:10:01 CEST, Jason Dillaman wrote:
It appears like you have a typo in your cap: "rdb_children" -> "rbd_children"
On Sun, J
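For context, the cap string being discussed typically follows the documented pattern below (the client and pool names here are made up):

ceph auth caps client.example \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'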
Thanks Jason,
it's an honor for me to contribute to the main repo of Ceph.
Just a thought: is it wise to keep the docs within the software?
Wouldn't it be better to move the docs to a less sensitive repo?
On 12/06/2018 17:02, Jason Dillaman wrote:
On Tue, Jun 12, 2018 at 5:08 AM, Max Cuttins wrote:
I have co
I migrated my OSDs from filestore to bluestore.
Each node now has 1 SSD with the OS and the BlockDBs and 3 HDDs with
bluestore data.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 2.7T 0 disk
|-sdd2 8:50 0 2.7T 0 part
`-sdd1 8:49 0 100M 0 part /var/lib/c
Hi everybody,
I have a running iSCSI-Ceph environment that connects to XenServer 7.2.
I have some doubts and rookie questions about iSCSI.
1) Xen refused to connect to the iSCSI gateway since I didn't turn on
multipath on Xen.
To me that's OK. But is it right to say that multipath is much more than just
Hello all,
I have recently had need to make use of the S3 API on my Rados
Gateway. We've been running just the Swift API backed by OpenStack for
some time with no issues.
Upon trying to use the S3 API I discovered that our combination of
Jewel and Keystone renders AWS v4 signatures unusable. Apparent
Hello,
is there any performance impact on CephFS from using file layouts to bind a
specific directory in CephFS to a given pool? Of course, such a pool is not
the default data pool for this CephFS.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRL
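For reference, binding a directory to a non-default data pool is usually done roughly like this (filesystem, pool, and path names are hypothetical):

ceph fs add_data_pool cephfs fast-pool                        # make the pool available to the filesystem
setfattr -n ceph.dir.layout.pool -v fast-pool /mnt/cephfs/archive   # new files under the directory inherit this layout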
Great,
On 5 June 2018, 17:13:12 CEST, Robert Sander wrote:
>Hi,
>
>On 27.05.2018 01:48, c...@elchaka.de wrote:
>>
>> Very interested to the Slides/vids.
>
>Slides are now available:
>https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/
Thank you very much, Robert!
- Mehmet
>
>Regards
Hi!
Is it safe to run GFS2 on Ceph as RBD and mount it on approx. 3 to 5 VMs?
The idea is to consolidate 3 webservers which are located behind proxies. The
old infrastructure is not HA or capable of load balancing.
I would like to set up a webserver, clone the image, and mount the GFS2 disk
as shared
Is it necessary to update the CRUSH map with
class hdd
before adding SSDs to the cluster?
Well Herbert,
as Paul mentioned, you should reconfigure the threshold of your OSDs
first and reweight second. Paul has sent you some hints.
Jewel documentation:
http://docs.ceph.com/docs/jewel/rados/
osd backfill full ratio
Description: Refuse to accept backfill requests when the Ceph OS
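A minimal sketch of adjusting that Jewel-era setting, both at runtime and persistently (0.92 is only an example value):

ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.92'   # raise the threshold on all running OSDs
# to persist it, add under [osd] in ceph.conf:
#   osd backfill full ratio = 0.92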
Hello Paul,
On 5 June 2018, 22:17:15 CEST, Paul Emmerich wrote:
>Hi,
>
>If anyone wants to play around with Ceph on Debian: I just made our
>mirror
>for our
>dev/test image builds public:
>
>wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add -
>echo 'deb https://static.croit
E.g.
# rules
rule replicated_ruleset {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
To
# rules
rule replicated_ruleset {
id 0
type replicated
min_size 1
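One common way to apply such a rule change is to round-trip the CRUSH map through crushtool (file names are arbitrary; a sketch, not a prescription):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: step chooseleaf firstn 0 type osd
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new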
Hi all!
I'm having trouble creating OSDs on some boxes that once held Bluestore
OSDs. I have rolled the Ceph software back from 12.2.4 -> 10.2.9 on the
boxes, but I'm running into this error when creating OSDs.
2018-06-12 22:32:42.78 7fcaf39e2800 0 ceph version 10.2.9
(2ee413f77150c0f375ff6
Not sure if you have been helped yet, but this is a known issue if you have many
files/subfolders. It depends on what CephFS version you are running. This should
have been resolved in the Red Hat version 3 of Ceph, which is based on Luminous.
http://tracker.ceph.com/issues/19438
https://access.redhat.c
Is it necessary to update the CRUSH map with
class hdd
before adding SSDs to the cluster?
Of course, if these OSDs are in one root.
It is not necessary to manually edit the CRUSH map:
ceph osd crush rule create-replicated replicated_hosts_hdd default host hdd
ceph osd crush rule create-replicated replicat
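A minimal sketch of the class-based approach, including pointing an existing pool at the new rule (the pool name is hypothetical):

ceph osd crush rule create-replicated replicated_hosts_hdd default host hdd
ceph osd crush rule create-replicated replicated_hosts_ssd default host ssd
ceph osd pool set mypool crush_rule replicated_hosts_ssd   # move a pool onto the SSD rule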
Each node now has 1 SSD with the OS and the BlockDBs and 3 HDDs with
bluestore data.
A very, very bad idea. When your SSD/NVMe dies, you lose your Linux box.
k