Hello, we are seeing issues with OSDs stalling and error messages such as:
2015-06-04 06:48:17.119618 7fc932d59700 0 -- 10.80.4.15:6820/3501 >> 10.80.4.30:6811/3003603 pipe(0xb6b4000 sd=19 :33085 s=1 pgs=311 cs=4 l=0 c=0x915c6e0).connect claims to be 10.80.4.30:6811/4106 not 10.80.4.30:6811/3003603 -
I wonder if your issue is related to:
http://tracker.ceph.com/issues/5195
I had to add the new monitor to the local ceph.conf file, push that to all
cluster hosts with ceph-deploy --overwrite-conf config push <host>, and issue
ceph mon add <host> <ip> on one of the existing cluster monitors.
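For example, roughly like this (hostnames, IPs and the mon id are placeholders):
  # after adding the new mon to mon_initial_members / mon_host in ceph.conf:
  ceph-deploy --overwrite-conf config push node1 node2 node3
  # then, from a host with admin access to the existing cluster:
  ceph mon add <mon-id> <mon-ip>:6789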
Jan
On 03 Jul 2015, at 10:16, Alex Gorbachev a...@iss-integration.com wrote:
Hello, we are experiencing severe OSD timeouts; the OSDs are not taken out, and
we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9.
Thank you for any advice.
Alex
Jul 3 03:42:06
Nick
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alex Gorbachev
Sent: 27 June 2015 19:02
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Redundant networks in Ceph
The current network design in Ceph
(http://ceph.com/docs
.
What about https://github.com/Frontier314/EnhanceIO? Last commit 2
months ago, but no external contributors :(
The nice thing about EnhanceIO is that there is no need to change the device
name, unlike bcache, flashcache, etc.
Best regards,
Alex
On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz
I.e., should we be focusing on IOPS? Latency? Finding a way to avoid journal
overhead for large writes? Are there specific use cases where we should
specifically be focusing attention? General iSCSI? S3? Databases directly
on RBD? etc. There are tons of different areas that we can work on
Hello, this is an issue we have been suffering from and researching,
along with a good number of other Ceph users, as evidenced by the
recent posts. In our specific case, these issues manifest themselves
in an RBD - iSCSI LIO - ESXi configuration, but the problem is more
general.
When there is an
on this currently, so
hopefully in the future there will be a direct RBD interface into LIO and it
will all work much better.
Either tgt or SCST seems to be pretty stable in testing.
Nick
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alex
cluster-wide IO to a crawl. I am trying to envision this situation in
production and how one would find out what is slowing everything down
without guessing.
Regards,
Alex
Jan
On 24 Aug 2015, at 18:26, Alex Gorbachev a...@iss-integration.com wrote:
This can be tuned in the iSCSI initiation
with it...
So I haven't tested it heavily.
Bcache should be the obvious choice if you are in control of the environment.
At least you can cry on LKML's shoulder when you lose data :-)
Jan
On 18 Aug 2015, at 01:49, Alex Gorbachev a...@iss-integration.com wrote:
What about https://github.com
Hi Nick,
On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk n...@fisk.me.uk wrote:
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Nick Fisk
Sent: 13 August 2015 18:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to improve single
Just to update the mailing list, we ended up going back to the default
ceph.conf with no additional settings beyond what is mandatory. We are
now reaching speeds we never reached before, both in recovery and in
regular usage. There was definitely something we set in the ceph.conf
bogging
...@schermer.cz wrote:
What’s the value of /proc/sys/vm/min_free_kbytes on your system? Increase
it to 256M (better to do it if there’s lots of free memory) and see if it
helps. It can also be set too high; it is hard to find any formula for
setting it correctly...
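For example (256 MB expressed in kB; the exact value is illustrative):
  sysctl -w vm.min_free_kbytes=262144
  # persist across reboots
  echo "vm.min_free_kbytes = 262144" >> /etc/sysctl.conf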
Jan
On 03 Jul 2015, at 10:16, Alex
May I suggest also checking the error counters on your network switch?
Check speed and duplex. Is bonding in use? Is flow control on? Can you
swap the network cable? Can you swap a NIC with another node, and does the
problem follow?
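On the Linux side, for example (interface names are placeholders):
  ethtool eth0                  # negotiated speed and duplex
  ethtool -S eth0               # NIC error and drop counters
  ethtool -a eth0               # pause (flow control) settings
  cat /proc/net/bonding/bond0   # bonding status, if in use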
Hth, Alex
On Friday, July 17, 2015, Steve Thompson
to be helping a lot; it could just be the superior response of a higher-end
switch.
The blk-mq scheduler has been reported to improve performance on random IO.
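As a sketch, on kernels of that era blk-mq for SCSI devices can be enabled
via a kernel boot parameter (GRUB file location assumed for Ubuntu; the
existing parameters shown are placeholders):
  # /etc/default/grub -- append to the existing parameters
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=1"
  update-grub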
Good luck!
--
Alex Gorbachev
Storcium
On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets <nefelim...@gmail.com>
wrote:
Hi Jiwan,
On Sat, Jul 11, 2015 at 4:44 PM, Jiwan N jiwan.ningle...@gmail.com wrote:
Hi Ceph-Users,
I am quite new to Ceph Storage (and storage tech in general). I have been
investigating Ceph to understand the precise process clearly.
Q: What actually happens when I create a block image of
FWIW, based on the excellent research by Mark Nelson (
http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/)
we have dropped SSD journals altogether and went instead with a battery-protected
controller writeback cache.
Benefits:
- No negative force
Hi Patrick,
On Thu, Aug 27, 2015 at 12:00 PM, Patrick McGarry pmcga...@redhat.com
wrote:
Just a reminder that our Performance Ceph Tech Talk with Mark Nelson
will be starting in 1 hour.
If you are unable to attend there will be a recording posted on the
Ceph YouTube channel and linked from
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger
<n...@linux-iscsi.org> wrote:
> (RESENDING)
>
> On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote:
>> We have experienced a repeatable issue when performing the following:
>>
>> Ceph backend with
>
>> On 03 Sep 2015, at 03:14, Alex Gorbachev <a...@iss-integration.com> wrote:
>>
>> We have experienced a repeatable issue when performing the following:
>>
>> Ceph backend with no issues, we can repeat any time at will in lab and
>> production. Clonin
Hello,
We have run into an OSD crash this weekend with the following dump. Please
advise what this could be.
Best regards,
Alex
2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0
We have experienced a repeatable issue when performing the following:
The Ceph backend has no issues; we can repeat this at will, in the lab and in
production. Cloning an ESXi VM to another VM on the same datastore on
which the original VM resides: practically instantly, the LIO machine
becomes
We had multiple issues with 4TB drives and delays. Here is the
configuration that works for us fairly well on Ubuntu (but we are about to
significantly increase the IO load so this may change).
NTP: always use NTP and make sure it is working - Ceph is very sensitive to
time being precise
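A couple of quick checks (assuming ntpd; chrony would differ):
  ntpq -p              # peers, offsets and jitter
  ceph health detail   # flags "clock skew detected" on affected mons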
Hi Brad,
This occurred on a system under moderate load - it has not happened since, and
I do not know how to reproduce it.
Thank you,
Alex
On Tue, Sep 22, 2015 at 7:29 PM, Brad Hubbard <bhubb...@redhat.com> wrote:
> - Original Message -
>
> > From: "Alex Gorbachev&quo
Please review http://docs.ceph.com/docs/master/rados/operations/crush-map/
regarding weights
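For example (the OSD id and weight are illustrative; the weight is
conventionally the device size in TiB):
  ceph osd tree                            # show current CRUSH weights
  ceph osd crush reweight osd.3 1.81869    # adjust the weight of osd.3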
Best regards,
Alex
On Wed, Sep 23, 2015 at 3:08 AM, wikison wrote:
> Hi,
> I have four storage machines to build a ceph storage cluster as
> storage nodes. Each of them is
Hi Josh,
On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin <jdur...@redhat.com> wrote:
> On 12/07/2015 03:29 PM, Alex Gorbachev wrote:
>
>> When trying to merge two results of rbd export-diff, the following error
>> occurs:
>>
>> iss@lab2-b1:~$ rbd export-diff --fro
found this link
http://tracker.ceph.com/issues/12911 but am not sure whether the patch should
already be in Hammer or how to get it.
System: ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
Ubuntu 14.04.3 kernel 4.2.1-040201-generic
Thank you
--
Alex Gorbachev
Storcium
Great, thanks Josh! merge-diff via stdin/stdout is working. Thank you
for looking into this.
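For the record, the workaround is to chain merges through stdin/stdout using
'-' as the diff argument, e.g. (file names are placeholders):
  rbd merge-diff diff1.bin diff2.bin - | rbd merge-diff - diff3.bin merged.bin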
--
Alex Gorbachev
Storcium
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin <jdur...@redhat.com> wrote:
> This is the problem:
>
> http://tracker.ceph.com/issues/14030
>
> As a wor
rt04.bck
Merging image diff: 13% complete...failed.
rbd: merge-diff error
I am not sure how to run gdb in such a scenario with stdin/stdout
Thanks,
Alex
>
>
> Josh
>
>
> On 12/08/2015 11:11 PM, Josh Durgin wrote:
>
>> On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
More oddity: retrying several times with the same source files, merge-diff
sometimes works and sometimes does not.
On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> Hi Josh, looks like I celebrated too soon:
>
> On Wed, Dec 9, 2015 at 2:25
Is there any way to obtain the snapshot creation time? rbd snap ls does not
list it.
Thanks!
--
Alex Gorbachev
Storcium
, 479 MB/s
--
Alex Gorbachev
Storcium
h as http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors
?
Our cluster has 8 racks right now, and I would love to place a MON at the
top of the rack (maybe on SDN switches in the future - why not?). Thank
you for helping answer these questions.
--
Alex Gorbachev
Sorry, one last comment on issue #1 (slow with SCST iSCSI but fast qla2xxx
FC with Ceph RBD):
> tly work fine in combination with SCST, so I'd recommend continuing
>>> to test with a recent kernel. I have been running kernel 4.3.0 for some
>>> time myself on my laptop and development workstation.
>>
e should
be helpful as well to add robustness to the Ceph networking backend.
Best regards, Alex
>
> Thanks for the feedback and regards. Götz
>
>
>
--
--
Alex Gorbachev
Storcium
).
Is this the best way to determine snapshots and are letters "s" and "t"
going to stay the same?
Best regards,
--
Alex Gorbachev
Storcium
ket name be the same with hostname.
> >
> >
> > Or does the host bucket name not matter?
> >
> >
> >
> > Best regards,
> >
> > Xiucai
>
> --
> Christian Balzer    Network/Systems Engineer
> ch...@gol.com
estore.
Best regards,
Alex
--
--
Alex Gorbachev
Storcium
On Tue, Jan 12, 2016 at 12:09 PM, Josh Durgin <jdur...@redhat.com> wrote:
> On 01/12/2016 06:10 AM, Alex Gorbachev wrote:
>
>> Good day! I am working on a robust backup script for RBD and ran into a
>> need to reliably determine start and end snapshots for differential
but maybe the following links will
help you make some progress:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17820.html
https://ceph.com/community/incomplete-pgs-oh-my/
Good
is quite successfully. Ubuntu with 4.1+ kernel seems
to work really well for all types of bonding and multiple bonds.
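For illustration, an LACP bond on Ubuntu with ifenslave looks roughly like
this (interface names and addresses are placeholders):
  # /etc/network/interfaces
  auto bond0
  iface bond0 inet static
      address 10.0.0.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100
      bond-lacp-rate fast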
HTH, Alex
--
--
Alex Gorbachev
Storcium
Jan, I believe the block device (vs. filesystem) OSD layout is addressed in
the Newstore/Bluestore:
http://tracker.ceph.com/projects/ceph/wiki/NewStore_(new_osd_backend)
--
Alex Gorbachev
Storcium
On Thu, Jan 28, 2016 at 4:32 PM, Jan Schermer <j...@schermer.cz> wrote:
> You can't run
speed links >10Gb and also with multiple bonds
>
>
> On 26 Jan 2016, at 06:32, Alex Gorbachev <a...@iss-integration.com> wrote:
>
>
>
> On Saturday, January 23, 2016, 名花 <louisfang2...@gmail
Reviving an old thread:
On Sunday, July 12, 2015, Lionel Bouton <lionel+c...@bouton.name> wrote:
> On 07/12/15 05:55, Alex Gorbachev wrote:
> > FWIW. Based on the excellent research by Mark Nelson
> > (
> http://ceph.com/community/ceph-performance-part-2-write-through
by clients' IO load.
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTLogicalUnit
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTTarget
https://github.com/akurz/resource-agents/blob/SCST/heartbeat/iscsi-scstd
--
Alex Gorbachev
http://www.iss-integration.com
--
--
Alex Gorbachev
Storcium
> On Friday, May 13, 2016, Mike Jacobacci wrote:
> Hello,
>
> I have a quick and probably dumb question… We would like to use Ceph
> for our storage. I was thinking of a cluster with 3 monitor and OSD
> nodes. I was wondering if it was a bad idea to
eing the below output - this means
that discard is being sent to the backing (RBD) device, correct?
Including the ceph-users list to see if there is a reason RBD is not
processing this discard/unmap.
Thank you,
--
Alex Gorbachev
Storcium
Jul 26 08:23:38 e1 kernel: [ 858.324715] [20426]: scst:
scs
with UNMAP)
- blkdiscard does release the space
--
Alex Gorbachev
Storcium
On Wed, Jul 27, 2016 at 11:55 AM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> Hi Vlad,
>
> On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>> Hi,
>>
>
> On Wednesday, July 27, 2016, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>>
>>
>> Alex Gorbachev wrote on 07/27/2016 10:33 AM:
>> > One other experiment: just running blkdiscard against the RBD block
>> > device completely clears it, to th
Hi Vlad,
On Wednesday, July 27, 2016, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>
> Alex Gorbachev wrote on 07/27/2016 10:33 AM:
> > One other experiment: just running blkdiscard against the RBD block
> > device completely clears it, to the point where the rbd-diff met
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>>> Alex Gorbache
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>> Hi Ilya,
>>
>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov <idryo...@gmail.com> wrote:
>>> On Mon, Aug 1, 2016 at 7:55 PM
root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END {
print SUM/1024 " KB" }'
819200 KB
root@e1:/var/log# blkdiscard -o 0 -l 4096 /dev/rbd28
root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END {
print SUM/1024 " KB" }'
782336 KB
--
Alex Gorbache
On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev <a...@iss-integration.com> wrote:
> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
>>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo
On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>>> I'm confused. How can a 4M discard not free anything? It's either
>>> going to hit an e
erm kernel:
https://lkml.org/lkml/2016/7/12/919
https://lkml.org/lkml/2016/7/12/297
--
Alex Gorbachev
Storcium
>
> 2016-07-05 11:47 GMT+03:00 Nick Fisk <n...@fisk.me.uk>:
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] O
ts are open source, there are several options for those.
Currently running 3 VMware clusters with 15 hosts total, and things are
quite decent.
Regards,
Alex Gorbachev
Storcium
>
> Thank you !
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Inter
turned off CFQ and blk-mq/scsi-mq and are using
just the noop scheduler.
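For reference, the scheduler can be inspected and changed per device like
this (device name is a placeholder; with blk-mq enabled the legacy schedulers
do not appear here):
  cat /sys/block/sdb/queue/scheduler         # e.g. noop [deadline] cfq
  echo noop > /sys/block/sdb/queue/scheduler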
Does the ceph kernel code somehow use the fair scheduler code block?
Thanks
--
Alex Gorbachev
Storcium
Jun 28 09:46:41 roc04r-sca090 kernel: [137912.684974] CPU: 30 PID: 10403 Comm: ceph-osd Not tainted 4.4.13-040413
016 at 17:59, Tim Bishop <tim-li...@bishnet.net> wrote:
>
> Yes - I noticed this today on Ubuntu 16.04 with the default kernel. No
> useful information to add other than it's not just you.
>
> Tim.
>
> On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote:
>
Hi Nick,
On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk wrote:
> However, there are a number of pain points with iSCSI + ESXi + RBD and they
> all mainly centre on write latency. It seems VMFS was designed around the
> fact that Enterprise storage arrays service writes in
project, which offers excellent deduplication.
HTH,
Alex
>
>
> Any advice is greatly appreciated.
>
> Thanks,
> Brendan
>
--
--
Alex Gorbachev
Storcium
the default kernel. No
>>>> useful information to add other than it's not just you.
>>>>
>>>> Tim.
>>>>
>>>> On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote:
>>>>
>>>> After upgrading to kernel 4.4
On Wed, Aug 3, 2016 at 10:54 AM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
>>> Alex
next thing.
Thank you for your input; it is very practical and helpful long term.
Alex
>
>
--
--
Alex Gorbachev
Storcium
> I'm confused. How can a 4M discard not free anything? It's either
> going to hit an entire object or two adjacent objects, truncating the
> tail of one and zeroing the head of another. Using rbd diff:
>
> $ rbd diff test | grep -A 1 25165824
> 25165824 4194304 data
> 29360128 4194304 data
>
Hi Ilya,
On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> RBD illustration showing RBD ignoring discard until a certain
>> threshold - why is that?
On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote:
> Alex Gorbachev wrote on 08/02/2016 07:56 AM:
>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
>>> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev <a..
tanding issue that's only just been
>>> resolved; another user chimed in on the LKML thread a couple of days
>>> ago, and again his trace had ceph-osd in it.
>>>
>>> https://lkml.org/lkml/headers/2016/6/21/491
>>>
>>> Campbell
about the time
when journals fail
Any other solutions?
Thank you for sharing.
--
Alex Gorbachev
Storcium
re
interesting like CephFS/Ganesha.
Thanks for your very valuable info on analysis and hw build.
Alex
>
>
>
> On 21.08.2016 at 09:31, Nick Fisk <n...@fisk.me.uk> wrote:
>
> >> -Original Message-
> >> From: Alex Gorbachev [mailt
On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов <anga...@gmail.com> wrote:
>> Guys,
>>
>> This bug is hitting me constantly, may be once per several days. Does
>> anyone kn
Hi Nick,
On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote:
>> -Original Message-
>> From: w...@globe.de [mailto:w...@globe.de]
>> Sent: 21 July 2016 13:23
>> To: n...@fisk.me.uk; 'Horace Ng'
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users]
is the best it’s been for a long time and I’m reluctant to fiddle any
> further.
>
>
>
> But as mentioned above, thick VMDKs with VAAI might be a really good fit.
>
Any chance thin vs. thick difference could be related to discards? I saw
zillions of them in recent testing.
>
> underneath an RBD device. We needed high sequential write and read performance
> on those RBD devices since we were storing large files there.
>
> Different approach, kind of similar result.
Question: what scheduler were you guys using to facilitate the
readahead on the RBD client
Hi Nick,
On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 21 August 2016 15:27
> To: Wilhelm Redbrake <w...@globe.de>
> Cc: n...@fisk.me.uk; Horace Ng <hor...@hkisl.net>
On Saturday, September 3, 2016, Alex Gorbachev <a...@iss-integration.com>
wrote:
> Hi Nick,
>
> On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk <n...@fisk.me.uk> wrote:
>
>> *From:* Alex Gorbachev [mailto:a..
-recommends install -o
Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw
--
Alex Gorbachev
Storcium
On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk <n...@fisk.me.uk> wrote:
>
>
>
>
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 04 September 2016 04:45
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Wilhelm Redbrake <w...@globe.de>; Horace
--
Alex Gorbachev
Storcium
On Sun, Sep 11, 2016 at 12:54 PM, Nick Fisk <n...@fisk.me.uk> wrote:
>
>
>
>
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 11 September 2016 16:14
>
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Wilhelm R
Confirmed - the older version of ceph-deploy is working fine. Odd, as
there are a large number of Hammer users out there. Thank you for the
explanation and fix.
--
Alex Gorbachev
Storcium
On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni <vakul...@redhat.com> wrote:
> There is a known issue wi
-on-nfs-vs.html )
Alex
>
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 04 September 2016 04:45
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>;
> ceph-users <ceph-users@lists.cep
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote:
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are
On Sat, Aug 13, 2016 at 4:51 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
>>> On Sun,
continue to promote, improve, support, and
deploy open source storage and compute solutions for healthcare and
business applications.
http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san=41781=san=1=41781=0=1_interval=10=Partner=Asc
--
Alex Gorbachev
Storcium
--
--
Alex Gorbachev
Storcium
On Mon, Nov 28, 2016 at 2:59 PM Ilya Dryomov wrote:
> On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel
> wrote:
> > Hi *,
> >
> > I am currently testing different scenarios to try to optimize sequential
> > read and write speeds using Kernel RBD.
> >
--
--
Alex Gorbachev
Storcium
Hi Joakim,
On Mon, Dec 5, 2016 at 1:35 PM wrote:
> Hello,
>
> I have a question regarding whether Ceph is suitable for small-scale
> deployments.
>
> Let's say I have two machines, connected with gigabit LAN.
>
> I want to share data between them, like an ordinary NFS
> share, but with
Referencing
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is not
allowed to be an existing partition. You have to specify the entire block
device, on which the tools create a partition equal to osd
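As an illustration (host and device names are placeholders), the whole
journal device is passed rather than a partition:
  ceph-deploy osd create --dmcrypt node1:/dev/sdb:/dev/sdc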
Hi Pierre,
On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU
<pierre.blond...@unicaen.fr> wrote:
> On 05/12/2016 at 05:14, Alex Gorbachev wrote:
>> Referencing
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html
>>
>> When using --d
On Sat, Dec 31, 2016 at 5:38 PM Tyler Bishop
wrote:
> Enjoy the leap second guys.. lol your cluster gonna be skewed.
>
> Yep, pager went off right at dinner :)
>
(e.g. Areca), all-SSD OSDs whenever these are affordable, or start
experimenting with cache pools. It does not seem like SSDs are getting any
cheaper, just new technologies like 3DXP showing up.
>
> On 03/21/17 23:22, Alex Gorbachev wrote:
>
> I wanted to share the recent experience, in
you very much !
>
> --
> Alejandrito
--
--
Alex Gorbachev
Storcium
--
--
Alex Gorbachev
Storcium
.
Regards,
Alex
Storcium
--
--
Alex Gorbachev
Storcium
On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas wrote:
> On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster
> wrote:
>>> I'm sorry, I may have worded that in a manner that's easy to
>>> misunderstand. I generally *never* suggest that people use CFQ on
>>>
oubleshooting here - dump historic ops on
OSD, wireshark the links or anything else?
3. Christian, if you are looking at this, what would be your red flags in atop?
Thank you.
--
Alex Gorbachev
Storcium