On 2/16/19 12:33 AM, David Turner wrote:
The answer is probably going to come down to how big your DB partition is vs.
how big your HDD is. From your output it looks like you have a
6TB HDD with a 28GB block.db partition. Even though the DB used size
isn't currently full, I would guess that at
Yesterday I saw this one.. it puzzles me:
2019-02-15 21:00:00.000126 mon.torsk1 mon.0 10.194.132.88:6789/0 604164 :
cluster [INF] overall HEALTH_OK
2019-02-15 21:39:55.793934 mon.torsk1 mon.0 10.194.132.88:6789/0 604304 :
cluster [WRN] Health check failed: 2 slow requests are blocked > 32 sec.
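If it helps, a quick way to see which OSDs those slow requests are stuck on
(the osd id below is just a placeholder):
$ ceph health detail
$ ceph daemon osd.12 dump_ops_in_flight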
Hi,
I tried to add an "archive" storage class to our OpenStack environment by
introducing a second storage backend that offers RBD volumes with their
data in an erasure-coded pool. As I will have to specify a data pool, I
tried it as follows:
### keyring files:
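A rough sketch of the pool side of that setup, with pool, client and image
names assumed rather than taken from the message above:
$ ceph osd pool create archive-data 128 128 erasure
$ ceph osd pool set archive-data allow_ec_overwrites true
$ ceph osd pool create archive-metadata 32 32 replicated
$ rbd create --size 10G --data-pool archive-data archive-metadata/testvol
As far as I know Cinder itself doesn't pass --data-pool, so the usual
workaround is to set it for the Cinder client in ceph.conf, e.g.:
[client.cinder-archive]
    rbd default data pool = archive-data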
Hi.
I've got a bunch of "small" files moved onto CephFS as archive/bulk storage,
and now the backup (to tape) has to spool over them. A sample of the
single-threaded backup client delivers this very consistent pattern:
$ sudo strace -T -p 7307 2>&1 | grep -A 7 -B 3 open
write(111,
On Fri, Feb 15, 2019 at 1:39 AM Ilya Dryomov wrote:
> On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
> >
> > Hi Marc,
> >
> > You can see previous designs on the Ceph store:
> >
> > https://www.proforma.com/sdscommunitystore
>
> Hi Mike,
>
> This site stopped working during DevConf and
Actually I think I misread what this was doing, sorry.
Can you do a “ceph osd tree”? It’s hard to see the structure via the text
dumps.
On Wed, Feb 13, 2019 at 10:49 AM Gregory Farnum wrote:
> Your CRUSH rule for EC pools is forcing that behavior with the line
>
> step chooseleaf indep 1 type
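To double-check what the rule actually does, it's easiest to dump it (the rule
name is a placeholder):
$ ceph osd crush rule dump ec-archive-rule
$ ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt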
The answer is probably going to come down to how big your DB partition is vs.
how big your HDD is. From your output it looks like you have a 6TB HDD
with a 28GB block.db partition. Even though the DB used size isn't
currently full, I would guess that at some point since this OSD was created
that
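One way to see how much of the block.db device BlueFS is actually using, and
whether anything has spilled over to the slow device (the osd id is a placeholder):
$ ceph daemon osd.0 perf dump | grep -E '"db_total_bytes"|"db_used_bytes"|"slow_used_bytes"'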
I'm leaving the response on the CRUSH rule for Gregory, but you have
another problem you're running into that is causing more of this data to
stay on this node than you intend. Even after you `out` the OSD, it is still
contributing to the host's CRUSH weight, so the host is still set to receive
that amount
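If the goal is to drain it completely, setting the CRUSH weight to zero (rather
than only marking it out) removes it from the host's weight as well; the osd id
is a placeholder:
$ ceph osd crush reweight osd.12 0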
I have found that running a zap before all prepare/create commands with
ceph-volume helps things run more smoothly. Zap is specifically there to clear
everything on a disk away and make the disk ready to be used as an OSD.
Your wipefs command is still fine, but I would then lvm zap the disk before
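Roughly what I run before re-using a disk (the device path is a placeholder):
$ ceph-volume lvm zap /dev/sdX --destroy
$ ceph-volume lvm create --data /dev/sdX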
Hi,
I want to add a second radosgw to my existing Ceph cluster (Mimic)
on another server. Should I create it like the first one, with
'ceph-deploy rgw create'?
I don't want to mess with the existing rgw system pools.
Thanks.
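For what it's worth, a minimal sketch of that, assuming the new host is called
rgw2; it should simply reuse the existing rgw pools rather than create new ones:
$ ceph-deploy rgw create rgw2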
On 2/15/19 2:54 PM, Alexandre DERUMIER wrote:
>>> Just wanted to chime in, I've seen this with Luminous+BlueStore+NVMe
>>> OSDs as well. Over time their latency increased until we started to
>>> notice I/O-wait inside VMs.
>
> I also notice it in the VMs. BTW, what is your NVMe disk size?
>>Just wanted to chime in, I've seen this with Luminous+BlueStore+NVMe
>>OSDs as well. Over time their latency increased until we started to
>>notice I/O-wait inside VMs.
I also notice it in the VMs. BTW, what is your NVMe disk size?
>>A restart fixed it. We also increased memory target
Hi!
I've created a mount.ceph.c replacement in Python which also uses the
kernel keyring and does name resolution.
You can mount a CephFS without installing Ceph that way (and without using the
legacy secret= mount option).
https://github.com/SFTtech/ceph-mount
When you place the script
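For comparison, the legacy form that the helper avoids passes the key directly
on the command line (monitor address, name and key below are placeholders):
$ mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=AQD...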
Bingo! Changing the disk to SCSI and the adapter to virtio is working perfectly.
Thank you Mark!
Regards,
Gesiel
On Fri, Feb 15, 2019 at 10:21 AM, Marc Roos wrote:
>
> Use scsi disk and virtio adapter? I think that is recommended also for
> use with ceph rbd.
>
>
>
On 2/15/19 2:31 PM, Alexandre DERUMIER wrote:
> Thanks Igor.
>
> I'll try to create multiple OSDs per NVMe disk (6TB) to see if the behaviour is
> different.
>
> I have other clusters (same ceph.conf), but with 1.6TB drives, and I don't
> see this latency problem.
>
>
Just wanted to chime in,
Is there any way to find out which files are stored in a CephFS data pool? I
know you can reference the extended attributes, but those are only relevant for
files created after ceph.dir.layout.pool or ceph.file.layout.pool attributes
are set - I need to know about all the files in a pool.
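One approach, assuming the pool is called cephfs_archive and the filesystem is
mounted at /mnt/cephfs: list the objects in the pool, take the hex inode prefix
of each object name, and look that inode up in the mounted tree. It is slow on
a large tree, but it covers every file regardless of when the layout was set:
rados -p cephfs_archive ls | cut -d. -f1 | sort -u | while read ino; do
    find /mnt/cephfs -inum $((16#$ino)) 2>/dev/null
done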
Thanks Igor.
I'll try to create multiple OSDs per NVMe disk (6TB) to see if the behaviour is
different.
I have other clusters (same ceph.conf), but with 1.6TB drives, and I don't see
this latency problem.
----- Original Message -----
From: "Igor Fedotov"
Cc: "ceph-users" , "ceph-devel"
Sent:
Yeah.
I've been monitoring such issue reports for a while and it looks like
something is definitely wrong with response times under certain
circumstances. Not sure if all these reports have the same root cause
though.
Scrubbing seems to be one of the triggers.
Perhaps we need more low-level
Hi Alexander,
I've read through your reports, nothing obvious so far.
I can only see a several-fold average latency increase for OSD write ops
(in seconds):
0.002040060 (first hour) vs.
0.002483516 (last 24 hours) vs.
0.008382087 (last hour)
subop_w_latency:
0.000478934 (first hour) vs.
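Numbers like these usually come from the OSD perf counters; a rough way to peek
at the current averages, if jq is available (the osd id is a placeholder):
$ ceph daemon osd.0 perf dump | jq '.osd.op_w_latency, .osd.subop_w_latency'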
Hi Igor,
Thanks for your reply.
I can verify, discard is disabled in our cluster:
10:03 root@node106b [fra]:~# ceph daemon osd.417 config show | grep discard
"bdev_async_discard": "false",
"bdev_enable_discard": "false",
[...]
So there must be something else causing the problems.
Use a SCSI disk and the virtio adapter? I think that is also recommended
for use with Ceph RBD.
-Original Message-
From: Gesiel Galvão Bernardes [mailto:gesiel.bernar...@gmail.com]
Sent: 15 February 2019 13:16
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Online disk resize with
Hi Marc,
I tried this and the problem continues :-(
On Fri, Feb 15, 2019 at 10:04 AM, Marc Roos wrote:
>
>
> And then in the windows vm
> cmd
> diskpart
> Rescan
>
> Linux vm
> echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan (sda)
> echo 1 >
And then in the windows vm
cmd
diskpart
Rescan
Linux vm
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan (sda)
echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan (sdd)
I have this too, and I have to do this too:
virsh qemu-monitor-command vps-test2 --hmp "info block"
virsh
I have this too, and I have to do this too:
virsh qemu-monitor-command vps-test2 --hmp "info block"
virsh qemu-monitor-command vps-test2 --hmp "block_resize
drive-scsi0-0-0-0 12G"
-Original Message-
From: Gesiel Galvão Bernardes [mailto:gesiel.bernar...@gmail.com]
Sent: 15 February 2019
Hi,
I'm setting up an environment for VMs with qemu/kvm and Ceph using RBD, and
I have the following problem: the guest VM does not recognize a disk resize
(increase). The scenario is:
Host:
Centos 7.6
Libvirt 4.5
Ceph 13.2.4
I follow these steps to increase the disk (e.g. from 10GB to 20GB):
# rbd
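The sequence I'd expect here, with the pool, image and domain names assumed:
$ rbd resize --size 20480 rbd/vm-disk1        # size in MB, i.e. 20GB
$ virsh blockresize vm-test sda 20G           # or rescan the device inside the guest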
Hi Denny,
I don't remember exactly when discards appeared in BlueStore, but they are
disabled by default:
see the bdev_enable_discard option.
Thanks,
Igor
On 2/15/2019 2:12 PM, Denny Kreische wrote:
Hi,
two weeks ago we upgraded one of our ceph clusters from luminous 12.2.8 to
mimic 13.2.4,
Hi,
two weeks ago we upgraded one of our ceph clusters from luminous 12.2.8 to
mimic 13.2.4, cluster is SSD-only, bluestore-only, 68 nodes, 408 OSDs.
Since then we have seen strange behaviour: single OSDs seem to block for
around 5 minutes and this causes the whole cluster and connected
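When one of them blocks again, the admin socket on that OSD should show what
the stuck ops are waiting on (the osd id is a placeholder):
$ ceph daemon osd.123 dump_blocked_ops
$ ceph daemon osd.123 dump_historic_ops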
On Fri, Feb 15, 2019 at 12:01 PM Willem Jan Withagen wrote:
>
> On 15/02/2019 11:56, Dan van der Ster wrote:
> > On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen
> > wrote:
> >>
> >> On 15/02/2019 10:39, Ilya Dryomov wrote:
> >>> On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
>
>
On 15/02/2019 11:56, Dan van der Ster wrote:
On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen wrote:
On 15/02/2019 10:39, Ilya Dryomov wrote:
On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
Hi Marc,
You can see previous designs on the Ceph store:
I have no issues opening that site from Germany.
Quoting Dan van der Ster:
On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen wrote:
On 15/02/2019 10:39, Ilya Dryomov wrote:
> On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
>>
>> Hi Marc,
>>
>> You can see previous designs on the
On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen wrote:
>
> On 15/02/2019 10:39, Ilya Dryomov wrote:
> > On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
> >>
> >> Hi Marc,
> >>
> >> You can see previous designs on the Ceph store:
> >>
> >> https://www.proforma.com/sdscommunitystore
> >
>
On 15/02/2019 10:39, Ilya Dryomov wrote:
On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
Hi Marc,
You can see previous designs on the Ceph store:
https://www.proforma.com/sdscommunitystore
Hi Mike,
This site stopped working during DevConf and hasn't been working since.
I think Greg
Today I again hit the warning with 30G as well...
On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote:
>
> On Thu, 7 Feb 2019, Dan van der Ster wrote:
> > On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy
> > wrote:
> > >
> > > Hi Dan,
> > > >During backfilling scenarios, the mons keep old maps and
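If it's the mon store growing during backfill, checking its size on disk and
compacting usually helps; the path and mon id below are placeholders:
$ du -sh /var/lib/ceph/mon/ceph-a/store.db
$ ceph tell mon.a compact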
On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
>
> Hi Marc,
>
> You can see previous designs on the Ceph store:
>
> https://www.proforma.com/sdscommunitystore
Hi Mike,
This site stopped working during DevConf and hasn't been working since.
I think Greg has contacted some folks about this,
On Fri, 2019-02-15 at 15:34 +0800, Marvin Zhang wrote:
> Thanks Jeff.
> If I set Attr_Expiration_Time to zero in the conf, does it mean the timeout
> is zero? If so, every client will see the change immediately. Will it
> hurt performance badly?
> It seems that the GlusterFS FSAL uses UPCALL to