On Fri, Jan 18, 2019 at 12:21:07PM, Max Krasilnikov wrote:
> Dear colleagues,
>
> we have built an L3 topology for use with Ceph, based on OSPF routing
> between loopbacks, in order to get a reliable and ECMP-enabled topology, like this:
...
> CEPH configured in the way
You have a minor misconfigu
At the risk of hijacking this thread, as I said I've run into this
problem again, and have captured a log with debug_osd=20, viewable at
https://www.dropbox.com/s/8zoos5hhvakcpc4/ceph-osd.3.log?dl=0 - any
pointers?
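For anyone who wants to reproduce that kind of capture, a minimal sketch (the OSD id and the restored debug level here are assumptions, not taken from the thread):

    # Raise the OSD's log verbosity at runtime, reproduce the problem,
    # then collect the resulting log file.
    ceph tell osd.3 injectargs '--debug_osd 20/20'
    # ... reproduce the issue ...
    less /var/log/ceph/ceph-osd.3.log
    # Return to something close to the default verbosity afterwards.
    ceph tell osd.3 injectargs '--debug_osd 1/5'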
On Tue, Jan 8, 2019 at 11:31 AM Peter Woodman wrote:
>
> For the record, in the
Hi Everyone.
Thanks for the testing everyone - I think my system works as intended.
When reading from another client and hitting the cache of the OSD hosts,
I also get down to 7-8ms.
As mentioned, this is probably as expected.
I need to figure out how to increase parallelism somewhat - or convince users
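For example, something along these lines would show whether aggregate small-file throughput scales with the number of parallel readers (a sketch; the path, size cutoff, and counts are made up):

    # Read all small files under a CephFS directory with 16 parallel readers.
    find /cephfs/share -type f -size -100k -print0 \
      | xargs -0 -n 100 -P 16 cat > /dev/null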
Hi Noah!
With an eye toward improving documentation and community, two things come to
mind:
1. I didn’t know about this meeting or I would have done my very best to enlist
my roommate, who probably could have answered these questions very quickly. I
do know there’s something to do with the met
1 PM PST / 9 PM GMT
https://bluejeans.com/908675367
On Fri, Jan 18, 2019 at 10:31 AM Noah Watkins wrote:
>
> We'll be discussing SEO for the Ceph documentation site today at the
> DocuBetter meeting. Currently when Googling or DuckDuckGoing for
> Ceph-related things you may see results from maste
We'll be discussing SEO for the Ceph documentation site today at the
DocuBetter meeting. Currently when Googling or DuckDuckGoing for
Ceph-related things you may see results from master, mimic, or what's
a dumpling? The goal is to figure out what sort of approach we can take
to make these results more
On 19/01/2019 02.24, Brian Topping wrote:
>
>
>> On Jan 18, 2019, at 4:29 AM, Hector Martin wrote:
>>
>> On 12/01/2019 15:07, Brian Topping wrote:
>>> I’m a little nervous that BlueStore assumes it owns the partition table and
>>> will not be happy that a couple of primary partitions have been
> On Jan 18, 2019, at 4:29 AM, Hector Martin wrote:
>
> On 12/01/2019 15:07, Brian Topping wrote:
>> I’m a little nervous that BlueStore assumes it owns the partition table and
>> will not be happy that a couple of primary partitions have been used. Will
>> this be a problem?
>
> You should
On 1/18/19 9:22 AM, Nils Fahldieck - Profihost AG wrote:
Hello Mark,
I'm answering on behalf of Stefan.
Am 18.01.19 um 00:22 schrieb Mark Nelson:
On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote:
Hello Mark,
after reading
http://docs.ceph.com/docs/master/rados/configuration/bluestore-c
On Fri, Jan 18, 2019 at 10:07 AM Jan Kasprzak wrote:
>
> Alfredo,
>
> Alfredo Deza wrote:
> : On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
> : > Eugen Block wrote:
> : > :
> : > : I think you're running into an issue reported a couple of times.
> : > : For the use of LVM you have t
On 1/18/19 7:26 AM, Igor Fedotov wrote:
Hi Kevin,
On 1/17/2019 10:50 PM, KEVIN MICHAEL HRPCEK wrote:
Hey,
I recall reading about this somewhere but I can't find it in the docs or list
archive and confirmation from a dev or someone who knows for sure would be
nice. What I recall is that blues
Hello Mark,
I'm answering on behalf of Stefan.
Am 18.01.19 um 00:22 schrieb Mark Nelson:
>
> On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote:
>> Hello Mark,
>>
>> after reading
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
>>
>> again I'm really confused how
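The quote is cut off here. For context, a minimal sketch of the kind of cache settings that page covers and how to check what a running OSD actually uses (option names are from the BlueStore config reference; the values are purely illustrative, not recommendations):

    # Example ceph.conf overrides:
    #   [osd]
    #   bluestore_cache_size_ssd = 3221225472    # 3 GiB per SSD-backed OSD
    #   bluestore_cache_size_hdd = 1073741824    # 1 GiB per HDD-backed OSD
    # Check the values a running OSD is actually using:
    ceph daemon osd.0 config show | grep bluestore_cache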
Alfredo,
Alfredo Deza wrote:
: On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
: > Eugen Block wrote:
: > :
: > : I think you're running into an issue reported a couple of times.
: > : For the use of LVM you have to specify the name of the Volume Group
: > : and the respective Logical
Yes, and to be sure I did the read test again from another client.
-Original Message-
From: David C [mailto:dcsysengin...@gmail.com]
Sent: 18 January 2019 16:00
To: Marc Roos
Cc: aderumier; Burkhard.Linke; ceph-users
Subject: Re: [ceph-users] CephFS - Small file - single thread - read
On Fri, 18 Jan 2019, 14:46 Marc Roos
>
> [@test]# time cat 50b.img > /dev/null
>
> real    0m0.004s
> user    0m0.000s
> sys     0m0.002s
> [@test]# time cat 50b.img > /dev/null
>
> real    0m0.002s
> user    0m0.000s
> sys     0m0.002s
> [@test]# time cat 50b.img > /dev/null
>
> real    0m0.002s
[@test]# time cat 50b.img > /dev/null
real    0m0.004s
user    0m0.000s
sys     0m0.002s
[@test]# time cat 50b.img > /dev/null
real    0m0.002s
user    0m0.000s
sys     0m0.002s
[@test]# time cat 50b.img > /dev/null
real    0m0.002s
user    0m0.000s
sys     0m0.001s
[@test]# time cat 50b.img
On Fri, Jan 18, 2019 at 2:12 PM wrote:
> Hi.
>
> We have the intention of using CephFS for some of our shares, which we'd
> like to spool to tape as part of our normal backup schedule. CephFS works nicely
> for large files, but for "small" ones (< 0.1MB) there seems to be an
> "overhead" of 20-40ms per file
Hi,
I don't have such big latencies:
# time cat 50bytesfile > /dev/null
real    0m0,002s
user    0m0,001s
sys     0m0,000s
(It's on a Ceph SSD cluster (Mimic), kernel CephFS client (4.18), 10Gb network
with low latency too; client/server have 3GHz CPUs)
- Original Message -
From: "Burkh
Hi,
On 1/18/19 3:11 PM, jes...@krogh.cc wrote:
Hi.
We have the intention of using CephFS for some of our shares, which we'd
like to spool to tape as part of our normal backup schedule. CephFS works nicely
for large files, but for "small" ones (< 0.1MB) there seems to be an
"overhead" of 20-40ms per file.
Hi.
We have the intention of using CephFS for some of our shares, which we'd
like to spool to tape as part of our normal backup schedule. CephFS works nicely
for large files, but for "small" ones (< 0.1MB) there seems to be an
"overhead" of 20-40ms per file. I tested like this:
root@abe:/nfs/home/jk# time
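The exact command is cut off in the archive, but a per-file loop of roughly this shape reproduces the measurement (the directory name is made up):

    # Time reading a directory of small files one by one over CephFS.
    cd /nfs/home/jk/testdir
    time for f in *; do cat "$f" > /dev/null; done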
On 1/16/19 4:54 PM, c...@jack.fr.eu.org wrote:
> Hi,
>
> My 2 cents:
> - do drop python2 support
I couldn't agree more. Python 2 needs to be dropped.
> - do not drop python2 support unexpectedly, aka do a deprecation phase
>
Indeed. Deprecate it at the Nautilus release and drop it after N.
Write
On 18/01/2019 22.33, Alfredo Deza wrote:
> On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote:
>>
>> On 17/01/2019 00:45, Sage Weil wrote:
>>> Hi everyone,
>>>
>>> This has come up several times before, but we need to make a final
>>> decision. Alfredo has a PR prepared that drops Python 2 suppo
On Fri, Jan 18, 2019 at 7:07 AM Hector Martin wrote:
>
> On 17/01/2019 00:45, Sage Weil wrote:
> > Hi everyone,
> >
> > This has come up several times before, but we need to make a final
> > decision. Alfredo has a PR prepared that drops Python 2 support entirely
> > in master, which will mean na
On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
>
> Eugen Block wrote:
> : Hi Jan,
> :
> : I think you're running into an issue reported a couple of times.
> : For the use of LVM you have to specify the name of the Volume Group
> : and the respective Logical Volume instead of the path, e.g.
>
Hi Kevin,
On 1/17/2019 10:50 PM, KEVIN MICHAEL HRPCEK wrote:
Hey,
I recall reading about this somewhere but I can't find it in the docs
or list archive and confirmation from a dev or someone who knows for
sure would be nice. What I recall is that bluestore has a max 4GB file
size limit based
On Fri, Jan 18, 2019 at 12:42:21PM +0100, Robert Sander wrote:
> On 18.01.19 11:48, Eugen Leitl wrote:
>
> > OSD on every node (Bluestore), journal on SSD (do I need a directory, or a
> > dedicated partition? How large, assuming 2 TB and 4 TB Bluestore HDDs?)
>
> You need a partition on the SSD
Eugen Block wrote:
: Hi Jan,
:
: I think you're running into an issue reported a couple of times.
: For the use of LVM you have to specify the name of the Volume Group
: and the respective Logical Volume instead of the path, e.g.
:
: ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --d
Dear colleagues,
we have built an L3 topology for use with Ceph, based on OSPF routing
between loopbacks, in order to get a reliable and ECMP-enabled topology, like this:
10.10.200.6 proto bird metric 64
nexthop via 10.10.15.3 dev enp97s0f1 weight 1
nexthop via 10.10.25.3 dev enp19s0f0 weight 1
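For reference, the bird-installed multipath route above corresponds to what iproute2 would express as follows (illustrative only, not part of the original configuration):

    ip route add 10.10.200.6/32 \
        nexthop via 10.10.15.3 dev enp97s0f1 weight 1 \
        nexthop via 10.10.25.3 dev enp19s0f0 weight 1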
On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote:
>
> On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> > Hi,
> >
> > We are trying to use Ceph in our products to address some of our use cases.
> > We think the Ceph block device is a good fit for us. One of the use cases is that we have a
> > numb
On 17/01/2019 00:45, Sage Weil wrote:
Hi everyone,
This has come up several times before, but we need to make a final
decision. Alfredo has a PR prepared that drops Python 2 support entirely
in master, which will mean nautilus is Python 3 only.
All of our distro targets (el7, bionic, xenial) i
Hi Jan,
I think you're running into an issue reported a couple of times.
For the use of LVM you have to specify the name of the Volume Group
and the respective Logical Volume instead of the path, e.g.
ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data /dev/sda
Regards,
Eugen
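A minimal sketch of creating the VG/LV referenced in that command beforehand (the SSD device name is hypothetical, the LV size purely illustrative):

    vgcreate ssd_vg /dev/nvme0n1          # VG on the SSD
    lvcreate -L 30G -n ssd00 ssd_vg       # LV for the block.db
    ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data /dev/sda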
On 16/01/2019 18:33, Götz Reinicke wrote:
My question is: What are your experiences with the current >=8TB SATA disks? Are there
some very bad models out there which I should avoid?
Be careful with Seagate consumer SATA drives. They are now shipping SMR
drives without mentioning that fact anywhere in
On 18.01.19 11:48, Eugen Leitl wrote:
> OSD on every node (Bluestore), journal on SSD (do I need a directory, or a
> dedicated partition? How large, assuming 2 TB and 4 TB Bluestore HDDs?)
You need a partition on the SSD for the block.db (it's not a journal
anymore with BlueStore). You should loo
Hello, Ceph users,
replying to my own post from several weeks ago:
Jan Kasprzak wrote:
: [...] I plan to add new OSD hosts,
: and I am looking for setup recommendations.
:
: Intended usage:
:
: - small-ish pool (tens of TB) for RBD volumes used by QEMU
: - large pool for object-based co
On 12/01/2019 15:07, Brian Topping wrote:
I’m a little nervous that BlueStore assumes it owns the partition table and
will not be happy that a couple of primary partitions have been used. Will this
be a problem?
You should look into using ceph-volume in LVM mode. This will allow you
to creat
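The rest of the advice is cut off, but a rough sketch of that approach, assuming a hypothetical free partition /dev/sda4 left over next to the existing primary partitions:

    pvcreate /dev/sda4                          # use only the free partition
    vgcreate ceph-block /dev/sda4
    lvcreate -l 100%FREE -n osd-data ceph-block
    ceph-volume lvm create --bluestore --data ceph-block/osd-data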
On Fri, Jan 18, 2019 at 9:25 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 1/17/19 7:27 PM, Void Star Nill wrote:
>
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases. We
> think the Ceph block device is a good fit for us. One of the use cases is that we have a number
> of jobs running
(Crossposting this from Reddit /r/ceph, since there is likely to be a more technical
audience present here).
I've scrounged up 5 old Atom Supermicro nodes and would like to run them 365/7
for limited production as RBD with Bluestore (ideally latest 13.2.4 Mimic),
triple copy redundancy. Underlying OS
On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases.
> We think the Ceph block device is a good fit for us. One of the use cases is that we have a
> number of jobs running in containers that need to have Read-Only access
Is there an overview of previous tshirts?
-Original Message-
From: Anthony D'Atri [mailto:a...@dreamsnake.net]
Sent: 18 January 2019 01:07
To: Tim Serong
Cc: Ceph Development; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Nautilus Release T-shirt Design
>> Lenz has provide
Hi,
On 1/17/19 7:27 PM, Void Star Nill wrote:
Hi,
We are trying to use Ceph in our products to address some of our use
cases. We think the Ceph block device is a good fit for us. One of the use cases is that
we have a number of jobs running in containers that need to have
Read-Only access to shared data. The d
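The reply is truncated here, but one common way to cover that use case (a sketch, not necessarily what the cut-off answer suggested; pool, image, and mount point names are invented) is to map the image read-only on each host and bind-mount it into the containers:

    rbd map --read-only rbd/shared-dataset
    mkdir -p /mnt/shared-dataset
    mount -o ro /dev/rbd0 /mnt/shared-dataset
    # Expose the data read-only inside a container, e.g. with Docker:
    docker run --rm -v /mnt/shared-dataset:/data:ro busybox ls /data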