Would this be superior to setting osd_recovery_sleep?
oot
echo Sleeping 10 minutes
sleep 600
done
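The loop above arrives truncated; a minimal sketch of such a periodic
maintenance loop, assuming the cut-off body was refreshing the noout flag
(an assumption - only the sleep portion survives in the original):

while true; do
    ceph osd set noout    # assumed body; the original line is truncated
    echo Sleeping 10 minutes
    sleep 600
done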
--
Alex Gorbachev
Intelligent Systems Services Inc.
On Wed, Jun 5, 2019 at 2:30 PM Sameh wrote:
>
> Le (On) Wed, Jun 05, 2019 at 01:57:52PM -0400, Alex Gorbachev écrivit (wrote):
> Cheers,
I get this in a lab sometimes, and do

ceph osd set noout

and reboot the node with the stuck PG.
In production, we remove OSDs one by one.
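For reference, a minimal sketch of that cycle (the unset step is an
assumption; only the set is shown above):

ceph osd set noout     # keep the cluster from marking the node's OSDs out
# ... reboot the node with the stuck PG, wait for its OSDs to rejoin ...
ceph osd unset noout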
--
Alex Gorbachev
Intelligent Systems Services Inc.
On Tue, Jun 4, 2019 at 3:32 PM Sage Weil wrote:
>
> [pruning CC list]
>
> On Tue, 4 Jun 2019, Alex Gorbachev wrote:
Late question, but I am noticing
ceph-volume: automatic VDO detection
Does this mean that the OSD layer will at some point support
deployment with VDO?
Or that one could build on top of VDO devices and Ceph would detect
this and report somewhere?
Best,
--
Alex Gorbachev
ISS Storcium
Oh you are so close David, but I have to go to Tampa to a client site,
otherwise I'd hop on a flight to Boston to say hi.
Hope you are doing well. Are you going to the Cephalocon in Barcelona?
--
Alex Gorbachev
Storcium
On Sun, Feb 24, 2019 at 10:40 AM David Turner wrote:
Proxmox supports Ceph integrated with their clusters (we like that
technology more and more, due to its very good, well-thought-through
design and quality).
If you provide more information on the specific use cases, it would be helpful.
--
Alex Gorbachev
Storcium
> they would
> need to have the design ready for print on demand through the ceph
> store
>
> https://www.proforma.com/sdscommunitystore
>
> --
> Mike Perez (thingee)
I am nowhere near being an artist, but would the reference to Jules
Verne's trilogy be relevant at all, this release being Nautilus?
Proxmox,
OpenStack etc, whatever you are using) to be aware of data consistency
- Export and import Ceph images
It all depends on the applications, the RTO and RPO requirements, how
much data, what distance, and what network bandwidth is available.
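As a minimal sketch of the export/import step mentioned above (pool,
image, and conf names are placeholders):

rbd export rbd/vm-disk-1 /tmp/vm-disk-1.img
rbd -c /etc/ceph/remote.conf import /tmp/vm-disk-1.img rbd/vm-disk-1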
--
Alex Gorbachev
Storcium
On Thu, Dec 13, 2018 at 10:48 AM Dietmar Rieder
wrote:
>
> Hi Cephers,
>
> one of our OSD nodes is experiencing a Disk controller problem/failure
> (frequent resetting), so the OSDs on this controller are flapping
> (up/down in/out).
>
> I will hopefully get the replacement part soon.
>
> I have
What am I doing wrong? Any help is appreciated.
Hi Uwe,
If these are Proxmox images, could you move them simply using the Move
Disk function on the VM's Hardware tab in Proxmox? I have had good
results with that.
--
Alex Gorbachev
Storcium
>
> Thanks,
>
> Uwe
>
>
>
>> > >>> >> > > > With regards to spam, my bet is that gmail's known
>> > >>> >> > > > dislike for attachments is the cause of these bounces
>> > >>> >> > > > and that setting is beyond your contr
s and disks) for all nodes should be very telling, collecting and
>> graphing this data might work, too.
>>
>> My suspects would be deep scrubs and/or high IOPS spikes when this is
>> happening, starving out OSD processes (CPU wise, RAM should be fine one
>> suppo
On Mon, Aug 13, 2018 at 10:25 AM, Ilya Dryomov wrote:
> On Mon, Aug 6, 2018 at 8:17 PM Ilya Dryomov wrote:
>>
>> On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote:
>> >
>> > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
>> > wrote:
>>
-- Forwarded message --
From: Matt.Brown
Can you please add me to the ceph-storage slack channel? Thanks!
Me too, please
--
Alex Gorbachev
Storcium
- Matt Brown | Lead Engineer | Infrastructure Services – Cloud &
Compute | Target | 7000 Target Pkwy N., NCE-0706 | Broo
On Fri, Jul 27, 2018 at 9:33 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev
> wrote:
>>
>> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
>> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
>> > wrote:
>> >
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
>> > wrote:
>>
On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
>> > wrote:
>>
On Thu, Jul 26, 2018 at 9:21 AM, Ilya Dryomov wrote:
> On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
> wrote:
>>
>> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
>> wrote:
>> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
>> > wrote:
On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
wrote:
> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> wrote:
>> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>>>
>>>
>>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
>>> wrote:
On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>
>
> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
> wrote:
>>
>> I am not sure this is related to RBD, but in case it is, this would be an
>> important bug to fix.
>>
>> Running LVM on top of RB
in pagecache.
Dropping the caches syncs the block device up with what's on "disk" and
everything is fine.
Working on simple steps to reproduce; ceph is Luminous 12.2.7, the RHEL
client is Jewel 10.2.10-17.el7cp.
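For reference, the cache drop in question is the standard procfs knob,
roughly:

sync                                # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes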
--
Alex Gorbachev
Storcium
iki/ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool
I like the fact that a failed drive will be reported to the OS, which
is not always the case with hardware RAID controllers.
--
Alex Gorbachev
Storcium
the
limitations and challenges). WAL/DB on SSD or NVMe is a must. We use
EnhanceIO to overcome some read bottlenecks. Currently running a
petabyte of storage with three ESXi clusters.
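For example, placing the DB (and with it the WAL) on NVMe at OSD
creation time might look like this - a sketch, device paths are
placeholders:

ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1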
Regards,
Alex Gorbachev
Storcium
>
> Thanks
> Steven
>
all IO is pretty hard
on RBD devices, considering there is also the filesystem overhead that
serves NFS. When taking into account the single or multiple streams
(Ceph is great at multiple streams, but single stream performance will
take a good deal of tuning), and the IO size, t
, with repackaging kRBD
with XFS. I tried rbd-nbd as well, but performance is not good when
running sync.
--
Alex Gorbachev
Storcium
>
> Thanks
> Steven
>
On Thu, May 3, 2018 at 6:54 AM, Nick Fisk <n...@fisk.me.uk> wrote:
> -----Original Message-----
> From: Alex Gorbachev <a...@iss-integration.com>
> Sent: 02 May 2018 22:05
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
>
this behavior even on Areca 1883, which does buffer HDD writes.
The way out was to put WAL and DB on NVMe drives and that solved
performance problems.
--
Alex Gorbachev
Storcium
>
>
>
> Thanks,
>
> Nick
>
>
We sometimes have to restart the RBD client VM for it to recognize the
new caps. Sorry, no experience with libvirt on this, but the caps
process seems to work well.
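A sketch of the caps update being discussed (client name and pool are
placeholders; the rbd cap profiles exist from Luminous on):

ceph auth caps client.vmhost mon 'profile rbd' osd 'profile rbd pool=rbd'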
--
Alex Gorbachev
Storcium
>
>
> Kind Regards
>
> Sven
>
>
>
2 sec (REQUEST_SLOW)
> 2018-04-17 15:47:04.611307 mon.osd01 [WRN] Health check update: 2 slow
> requests are blocked > 32 sec (REQUEST_SLOW)
> 2018-04-17 15:47:10.102803 mon.osd01 [WRN] Health check update: 13 slow
> requests are blocked > 32 sec (REQUEST_SLOW)
>
On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman <jdill...@redhat.com> wrote:
>> If you run "partprobe" after you resize in your second example, is the
>> change visible in "
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  2147MB  2147MB  xfs
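The sequence under discussion is roughly the following (image spec and
nbd device are placeholders):

rbd resize --size 4096 rbd/testimg   # grow the image (size in MB)
partprobe /dev/nbd0                  # ask the kernel to re-read the device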
>
> On Wed, Apr 11, 2018 at 11:01 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Wed, Apr 11, 2018 a
"bd_set_size" within
>> "nbd_size_update", which seems suspicious to me at initial glance.
>>
>> [1]
>> https://github.com/torvalds/linux/commit/29eaadc0364943b6352e8994158febcb699c9f9b#diff-bc9273bcb259fef182ae607a1d06a142L180
>>
>> On Wed, Apr 1
>>
>> On Wed, Apr 11, 2018 at 11:09 AM, Alex Gorbachev <a...@iss-integration.com>
>> wrote:
>>>> On Wed, Apr 11, 2018 at 10:27 AM, Alex G
> On Wed, Apr 11, 2018 at 10:27 AM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Wed, Apr 11, 2018 at 2:43 AM, Mykola Golub <to.my.troc...@gmail.com>
>> wrote:
>>> On Tue, Apr 10, 2018 at 11:14:58PM -0400, Alex Gorbachev wrote:
>>>
On Wed, Apr 11, 2018 at 2:43 AM, Mykola Golub <to.my.troc...@gmail.com> wrote:
> On Tue, Apr 10, 2018 at 11:14:58PM -0400, Alex Gorbachev wrote:
>
>> So Josef fixed the one issue that enables e.g. lsblk and sysfs size to
>> reflect the correct size on change. However, partp
On Sun, Mar 11, 2018 at 3:50 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Sun, Mar 11, 2018 at 4:23 AM, Mykola Golub <to.my.troc...@gmail.com> wrote:
>> On Sat, Mar 10, 2018 at 08:25:15PM -0500, Alex Gorbachev wrote:
>>> I am running into the problem des
eue, but not sure if it would do similar
things. Any ideas?
https://vendor2.nginfotpdx.net/gitlab/upstream/ceph/commit/8aa159befa58cd9059ad99c752146f3a5dbfcb8b
--
Alex Gorbachev
Storcium
to find
from dump historic ops what is the bottleneck.
It seems that integrating the timings into some sort of a debug flag for
rbd bench or fio would help a lot of us locate bottlenecks faster.
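For reference, the historic ops dump referred to above is pulled from
the OSD admin socket on its host (OSD id is a placeholder):

ceph daemon osd.12 dump_historic_ops   # recent slowest ops with per-event timestamps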
Thanks,
Alex
--
--
Alex Gorbachev
Storcium
On Thu, Mar 29, 2018 at 5:09 PM, Ronny Aasen <ronny+ceph-us...@aasen.cx> wrote:
> On 29.03.2018 20:02, Alex Gorbachev wrote:
>>
>> w Luminous 12.2.4 cluster with Bluestore, I see a good deal
>> of scrub and deep scrub operations. Tried to find a reference,
On the new Luminous 12.2.4 cluster with Bluestore, I see a good deal
of scrub and deep scrub operations. Tried to find a reference, but
nothing obvious out there - wasn't it supposed to no longer need
scrubbing, due to CRC checks?
Thanks for any clarification.
--
Alex Gorbachev
Storcium
z confirm.
>
I turn the writeback mode off for the SSDs, as this seems to make the
controller cache a bottleneck.
--
Alex Gorbachev
Storcium
>
> Thanks
>
> 2018-03-26 23:00 GMT+07:00 Sam Huracan <nowitzki.sa...@gmail.com>:
>
>> Thanks for your information.
issue
> is still happening?
>
Thank you Igor, reducing to 3GB now and will advise. I did not
realize there is additional memory overhead on top of the 90 GB; the
nodes each have 128 GB.
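Presumably via the Luminous-era cache option, along these lines (an
assumption; the value is in bytes):

[osd]
bluestore_cache_size = 3221225472   # cap BlueStore cache at 3 GiB per OSD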
--
Alex Gorbachev
Storcium
>
> Thanks,
>
> Igor
>
>
>
> On 3/26/2018 5:09 PM, Alex Gorbac
= 0.5 immediately relieved the problem. I
tried the value of 1, but it slowed recovery too much.
This seems like a very important operational parameter to note.
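The parameter name is truncated above; given the recovery context it
was plausibly osd_recovery_sleep (an assumption, not confirmed by this
fragment), which would be set like this:

ceph tell osd.* injectargs '--osd_recovery_sleep 0.5'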
--
Alex Gorbachev
Storcium
On Mon, Mar 12, 2018 at 12:21 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Mon, Mar 12, 2018 at 7:53 AM, Дробышевский, Владимир <v...@itgorod.ru>
> wrote:
>>
>> I was following this conversation on tracker and got the same question. I've
>> got a
> ...because I knew I had upgraded the Mellanox
> drivers on one host, and just decided to check the IB config (and the
> root cause was there: the adapter had switched into datagram mode). But
> if that hadn't been the reason, I would have been really lost.
>
> On 12 Mar 2018 at 9:39, "Alex Gorbachev
On Mon, Mar 5, 2018 at 11:20 PM, Brad Hubbard <bhubb...@redhat.com> wrote:
> On Fri, Mar 2, 2018 at 3:54 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Thu, Mar 1, 2018 at 10:57 PM, David Turner <drakonst...@gmail.com> wrote:
>>> Blocked
On Sun, Mar 11, 2018 at 4:23 AM, Mykola Golub <to.my.troc...@gmail.com> wrote:
> On Sat, Mar 10, 2018 at 08:25:15PM -0500, Alex Gorbachev wrote:
>> I am running into the problem described in
>> https://lkml.org/lkml/2018/2/19/565 and
>> https://tracker.ceph.com/issu
in lsblk and /sys/block/nbdX/size, but not
in parted for a mounted filesystem.
Unmapping and remapping the NBD device shows the size in parted.
Thank you for any help
--
Alex Gorbachev
Storcium
On Wed, Mar 7, 2018 at 8:37 PM, Alex Gorbachev <a...@iss-integration.com> wrote:
> On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius
> <cassi...@tips.com.br> wrote:
>> Hi all, this issue already have been discussed in older threads and I've
>> already tried
lumc1 [ERR] overall HEALTH_ERR 1 scrub
errors; Possible data damage: 1 pg inconsistent
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)
--
Alex Gorbachev
Storcium
On Mon, Mar 5, 2018 at 2:17 PM, Gregory Farnum wrote:
> On Thu, Mar 1, 2018 at 9:21 AM Max Cuttins wrote:
>>
>> I think this is a good question for everybody: how hard should it be
>> to delete a pool?
>>
>> We ask for the pool name twice.
>> We ask to add
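For reference, the safeguards being discussed amount to the following
(pool name is a placeholder; the name-twice requirement and the
confirmation flag are the actual CLI behavior, and the mon option must
be enabled first):

ceph tell mon.* injectargs '--mon_allow_pool_delete=true'
ceph osd pool rm mypool mypool --yes-i-really-really-mean-it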
On Fri, Mar 2, 2018 at 9:56 AM, Alex Gorbachev <a...@iss-integration.com> wrote:
>
> On Fri, Mar 2, 2018 at 4:17 AM Maged Mokhtar <mmokh...@petasan.org> wrote:
>>
>> On 2018-03-02 07:54, Alex Gorbachev wrote:
>>
>> On Thu, Mar 1, 2018 at 10:57 PM, Dav
On Fri, Mar 2, 2018 at 4:17 AM Maged Mokhtar <mmokh...@petasan.org> wrote:
> On 2018-03-02 07:54, Alex Gorbachev wrote:
>
> On Thu, Mar 1, 2018 at 10:57 PM, David Turner <drakonst...@gmail.com>
> wrote:
>
> Blocked requests and slow requests are synonyms in ceph. Th
On Thu, Mar 1, 2018 at 10:57 PM, David Turner <drakonst...@gmail.com> wrote:
> Blocked requests and slow requests are synonyms in ceph. They are 2 names
> for the exact same thing.
>
>
> On Thu, Mar 1, 2018, 10:21 PM Alex Gorbachev <a...@iss-integration.com> wrote:
>
to a compression setting
on the pool, nothing in OSD logs.
I replied to another compression thread. This makes sense since
compression is new, and in the past all such issues were reflected in
OSD logs and related to either network or OSD hardware.
Regards,
Alex
>
> On Thu, Mar 1, 2018 at 2:
OSD logs if anyone is interested.
This does not occur when compression is unset.
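For context, pool compression is toggled with the standard pool option
(pool name is a placeholder):

ceph osd pool set mypool compression_mode aggressive   # enable
ceph osd pool set mypool compression_mode none         # unset, as above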
--
Alex Gorbachev
Storcium
>
> Subhachandra
>
> On Thu, Mar 1, 2018 at 6:18 AM, David Turner <drakonst...@gmail.com> wrote:
>>
>> With default memory settings, the general rule is 1GB ram/1TB
32 sec
(REQUEST_SLOW)
2018-02-28 18:09:53.794882 7f6de8551700 0
mon.roc-vm-sc3c234@0(leader) e1 handle_command mon_command({"prefix":
"status", "format": "json"} v 0) v1
--
Alex Gorbachev
Storcium
any recent kernel updates with nbd.c
--
Alex Gorbachev
Storcium
--
--
Alex Gorbachev
Storcium
ty are
> definitely more important.
I would avoid both bcache and tiering to simplify the configuration,
and would seriously consider larger nodes if possible, with more OSD drives.
HTH,
--
Alex Gorbachev
Storcium
>
> Thanks in advance for your advice!
>
> Best,
> Ean
>
>
>
>
On Tue, Jan 16, 2018 at 2:17 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev <a...@iss-integration.com>
> wrote:
>>
>> I found a few WAN RBD cluster design discussions, but not a local one,
>> so was
ion: in case of a permanent failure of the main site (with two
replicas), how to manually force the other site (with one replica and
MON) to provide storage? I would think a CRUSH map change and
modifying ceph.conf to include just one MON, then build two more MONs
locally and add?
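The usual surviving-site monitor surgery is along these lines (a
sketch; mon ids and paths are placeholders, run with the mon daemon
stopped):

ceph-mon -i mon-c --extract-monmap /tmp/monmap
monmaptool /tmp/monmap --rm mon-a --rm mon-b   # drop the failed site's mons
ceph-mon -i mon-c --inject-monmap /tmp/monmap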
--
Alex Gorbachev
Storc
and simple
use cases, which could be automated this way.
--
Alex Gorbachev
Storcium
their
web sites, as well as hardware that supports SGPIO (most enterprise
JBODs and drives do). There's likely similar options to other HBAs.
Areca:
UID on:
cli64 curctrl=1 set password=
cli64 curctrl= disk identify drv=
UID OFF:
cli64 curctrl=1 set password=
cli64 curctrl= disk identify drv=0
ay to modify the device-class using crushtool,
> is that correct?
This is how we do it in Storcium based on
http://docs.ceph.com/docs/master/rados/operations/crush-map/
ceph osd crush rm-device-class
ceph osd crush set-device-class
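With the arguments filled in (osd id and class are placeholders), a
minimal sketch:

ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class nvme osd.7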
--
Best regards,
Alex Gorbachev
Storcium
>
> Wido
> On Thu, Oct 5, 2017 at 7:45 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> I am testing rbd mirroring, and have two existing clusters named ceph
>> in their ceph.conf. Each cluster has a separate fsid. On one
>> cluster, I renamed
such configuration work?
Thank you,
--
Alex Gorbachev
Storcium
Hi Mark, great to hear from you!
On Tue, Oct 3, 2017 at 9:16 AM Mark Nelson <mnel...@redhat.com> wrote:
>
>
> On 10/03/2017 07:59 AM, Alex Gorbachev wrote:
> > Hi Sam,
> >
> > On Mon, Oct 2, 2017 at 6:01 PM Sam Huracan <nowitzki.sa...@gmail.com
> >
I have
not yet put a serious effort into tuning it, and it does seem stable.
Hth, Alex
>>
--
--
Alex Gorbachev
Storcium
In Jewel and prior there was a health status for MONs in ceph -s JSON
output, this seems to be gone now. Is there a place where a status of
a given monitor is shown in Luminous?
Thank you
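A couple of commands that might still expose per-mon state in Luminous
(a suggestion, not from the original thread):

ceph quorum_status -f json-pretty   # quorum membership and per-mon details
ceph mon stat                       # one-line mon summary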
--
Alex Gorbachev
Storcium
never run the odd releases as too risky.
A good deal of functionality comes in updates, and usually the Ceph team
brings it in gently, with the more experimental features off by default.
I suspect the 9 month even cycle will also make it easier to perform more
i
using PCIe journals (e.g. Intel P3700 or even the
older 910 series) in front of such SSDs?
Thanks for any info you can share.
--
Alex Gorbachev
Storcium
dm-crypt as well.
Regards,
Alex
> Any suggestions?
>
>
--
--
Alex Gorbachev
Storcium
about things,
> small people talk ... about other people.
--
--
Alex Gorbachev
Storcium
On Fri, Jun 30, 2017 at 8:12 AM Nick Fisk <n...@fisk.me.uk> wrote:
> *From:* Alex Gorbachev [mailto:a...@iss-integration.com]
> *Sent:* 30 June 2017 03:54
> *To:* Ceph Users <ceph-users@lists.ceph.com>; n...@fisk.me.uk
>
>
> *Subject:* Re: [ceph-users
--
--
Alex Gorbachev
Storcium
the ceph.log for any anomalies?
Any occurrences on OSD nodes, anything in their OSD logs or syslogs?
Any odd page cache settings on the clients?
Alex
>
> Thanks,
> Nick
>
On Mon, Jun 19, 2017 at 3:12 AM Wido den Hollander <w...@42on.com> wrote:
>
> > Op 19 juni 2017 om 5:15 schreef Alex Gorbachev <a...@iss-integration.com>:
> >
> >
> > Has anyone run into such config where a single client consumes storage
> from
> >
We are seeing the same problem as http://tracker.ceph.com/issues/18945
where OSDs are not activating, with the lockbox error as below.
--
Alex Gorbachev
Storcium
Jun 19 17:11:56 roc03r-sca070 ceph-osd[6804]: starting osd.75 at :/0
osd_data /var/lib/ceph/osd/ceph-75 /var/lib/ceph/osd/ceph-75/journal
Has anyone run into such config where a single client consumes storage from
several ceph clusters, unrelated to each other (different MONs and OSDs,
and keys)?
We have a Hammer and a Jewel cluster now, and this may be a way to have
very clean migrations.
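A minimal sketch of addressing two unrelated clusters from one client
(file names are placeholders; each conf carries its own mons and fsid):

rbd -c /etc/ceph/hammer.conf --keyring /etc/ceph/hammer.client.admin.keyring ls
rbd -c /etc/ceph/jewel.conf --keyring /etc/ceph/jewel.client.admin.keyring ls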
Best regards,
Alex
Storcium
--
--
Alex
lients
running kernel NFS servers in a Pacemaker/Corosync cluster. We
utilize NFS ACLs to restrict access and consume RBD as XFS
filesystems.
Best regards,
Alex Gorbachev
Storcium
>
> Thank you.
>
> Regards,
> Ossi
>
>
>
-sca040 kernel: [7126712.363404] mpt2sas_cm0:
log_info(0x30030101): originator(IOP), code(0x03), sub_code(0x0101)
root@roc02r-sca040:/var/log#
--
Alex Gorbachev
Storcium
--
--
Alex Gorbachev
Storcium
On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
>>> On Wed,
On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> Hi Ilya,
>>
>> On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov <idryo...@gmail.com> wrote:
>
--
--
Alex Gorbachev
Storcium
Hi Ilya,
On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov <idryo...@gmail.com> wrote:
> On Tue, Apr 11, 2017 at 3:10 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
> > Hi Ilya,
> >
> > On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov <idryo...@gmail.com>
Hi Ilya,
On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev <a...@iss-integration.com>
>>
Hi Piotr,
On Tue, Apr 11, 2017 at 2:41 AM, Piotr Dałek <piotr.da...@corp.ovh.com> wrote:
> On 04/10/2017 08:16 PM, Alex Gorbachev wrote:
>>
>> I am trying to understand the cause of a problem we started
>> encountering a few weeks ago. There are 30 or so per hour messa
On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> I am trying to understand the cause of a problem we started
> encountering a few weeks ago. There are 30 or so per hour messages on
> OSD nodes of type:
>
> ceph-osd.33.log:2017-04-10 13:42:3
troubleshooting here - dump historic ops on
OSD, wireshark the links, or anything else?
3. Christian, if you are looking at this, what would be your red flags in atop?
Thank you.
--
Alex Gorbachev
Storcium
(e.g. areca), all SSD OSDs whenever these can be
affordable, or start experimenting with cache pools. Does not seem like
SSDs are getting any cheaper, just new technologies like 3DXP showing up.
>
> On 03/21/17 23:22, Alex Gorbachev wrote:
>
> I wanted to share the recent experience, in
you very much !
>
> --
> Alejandrito
--
--
Alex Gorbachev
Storcium
Regards,
Alex
Storcium
--
--
Alex Gorbachev
Storcium
--
--
Alex Gorbachev
Storcium
On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas wrote:
> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster
> wrote:
>>> I'm sorry, I may have worded that in a manner that's easy to
>>> misunderstand. I generally *never* suggest that people use CFQ on
>>>
about the time
when journals fail
Any other solutions?
Thank you for sharing.
--
Alex Gorbachev
Storcium
On Sat, Dec 31, 2016 at 5:38 PM Tyler Bishop
wrote:
> Enjoy the leap second guys.. lol your cluster gonna be skewed.
>
> Yep, pager went off right at dinner :)
>