Re: [ceph-users] qemu-img convert vs rbd import performance

2017-12-22 Thread Konstantin Shalygin

It's already in qemu 2.9

http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d


"
This patches introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults to 8). And the -W 
parameter to
allow qemu-img to write to the target out of order rather than sequential. This 
improves
performance as the writes do not have to wait for each other to complete.
"


And performance increased dramatically!

Ran it with Luminous and qemu 2.9.0 (this is the host running qemu-img; the
graph shows network bandwidth to the Ceph cluster):


http://storage6.static.itmages.ru/i/17/1223/h_1514004003_2271300_d3ee031fda.png

From 11:05 to 11:28: 35% of 100 GB. I started googling for what was new in
qemu and found this message, then appended -m 16 -W. Network interface
utilisation rose from ~150 Mbit/s to ~2500 Mbit/s (this is a convert from one
rbd pool to another).
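
For anyone who wants to reproduce this, an example invocation would look
something like the following (pool and image names are placeholders, not the
ones from my run):

    qemu-img convert -p -m 16 -W -f raw -O raw \
        rbd:pool-src/image-src rbd:pool-dst/image-dst

-p shows progress, -m raises the number of parallel coroutines, and -W allows
out-of-order writes to the target.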




k


Re: [ceph-users] Removing an OSD host server

2017-12-22 Thread David Turner
The hosts got put there because OSDs started for the first time on a server
with that name. If you name the new servers identically to the failed ones,
the new OSDs will just place themselves under the host in the CRUSH map and
everything will be fine. There shouldn't be any problems with that based on
what you've said of the situation.
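
For reference, once a host bucket is empty it can be dropped from the tree
with something like this (the hostname is a placeholder):

    ceph osd crush remove failed-host-01

This only succeeds when no OSDs remain under that bucket, so clean up the old
OSD entries first.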

On Fri, Dec 22, 2017, 9:00 PM Brent Kennedy  wrote:

> Been looking around the web and I can't find what seems to be a “clean way”
> to remove an OSD host from the “ceph osd tree” command output.  I am
> therefore hesitant to add a server with the same name, because I still see
> the removed/failed nodes in the list.  Anyone know how to do that?  I found
> an article here, but it doesn't seem to be a clean way:
> https://arvimal.blog/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/
>
>
>
> Regards,
>
> -Brent
>
>
>
> Existing Clusters:
>
> Test: Jewel with 3 osd servers, 1 mon, 1 gateway
>
> US Production: Firefly with 4 osd servers, 3 mons, 3 gateways behind
> haproxy LB
>
> UK Production: Hammer with 5 osd servers, 3 mons, 3 gateways behind
> haproxy LB
>
>
>
>


[ceph-users] Removing an OSD host server

2017-12-22 Thread Brent Kennedy
Been looking around the web and I can't find what seems to be a "clean way"
to remove an OSD host from the "ceph osd tree" command output.  I am
therefore hesitant to add a server with the same name, because I still see the
removed/failed nodes in the list.  Anyone know how to do that?  I found an
article here, but it doesn't seem to be a clean way:
https://arvimal.blog/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/

 

Regards,

-Brent

 

Existing Clusters:

Test: Jewel with 3 osd servers, 1 mon, 1 gateway

US Production: Firefly with 4 osd servers, 3 mons, 3 gateways behind haproxy
LB

UK Production: Hammer with 5 osd servers, 3 mons, 3 gateways behind haproxy
LB

 

 



[ceph-users] How to evict a client in rbd

2017-12-22 Thread Karun Josy
Hello,

I am unable to delete this abandoned image. rbd status shows a watcher IP,
the image is not mapped, and it has no snapshots.


rbd status cvm/image  --id clientuser
Watchers:
watcher=10.255.0.17:0/3495340192 client.390908
cookie=18446462598732841114

How can I evict or blacklist a watcher client so that the image can be deleted?
I see this is possible in CephFS:
http://docs.ceph.com/docs/master/cephfs/eviction/
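
A sketch of what should work for RBD, using the watcher address reported
above (untested here; double-check the address before blacklisting anything
on a production cluster):

    ceph osd blacklist add 10.255.0.17:0/3495340192

Once the watcher is blacklisted (or simply times out), the delete should go
through; the entry can later be removed again with "ceph osd blacklist rm".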



Karun


Re: [ceph-users] Proper way of removing osds

2017-12-22 Thread Karun Josy
Thank you!

Karun Josy

On Thu, Dec 21, 2017 at 3:51 PM, Konstantin Shalygin  wrote:

> Is this the correct way to remove OSDs, or am I doing something wrong?
>>
> The generic way for maintenance (e.g. a disk replacement) is to rebalance by
> changing the OSD's weight:
>
>
> ceph osd crush reweight osdid 0
>
> The cluster will then migrate the data off this OSD.
>
>
> When the cluster is HEALTH_OK you can safely remove this OSD:
>
> ceph osd out osd_id
> systemctl stop ceph-osd@osd_id
> ceph osd crush remove osd_id
> ceph auth del osd_id
> ceph osd rm osd_id
>
>
>
> k
>
>
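
For readers following along, a concrete pass of the sequence above, assuming
the OSD being retired is osd.12 (the ID is only an example):

    ceph osd crush reweight osd.12 0     # drain data off the OSD
    # wait until the cluster reports HEALTH_OK, then:
    ceph osd out osd.12
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm osd.12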


Re: [ceph-users] CEPH luminous - Centos kernel 4.14 qfull_time not supported

2017-12-22 Thread Mike Christie
On 12/20/2017 03:21 PM, Steven Vacaroaia wrote:
> Hi,
> 
> I apologise for creating a new thread (I already mentioned my issue in
> another one), but I am hoping someone will be able to provide
> clarification / instructions.
> 
> It looks like the patch for including qfull_time is missing from kernel 4.14,
> as the following error occurs when creating disks:
> 
> Could not set LIO device attribute cmd_time_out/qfull_time_out for
> device: rbd.disk1. Kernel not supported. - error(Cannot find attribute:
> qfull_time_out)
> Dec 20 11:03:34 osd03 journal: LUN alloc problem - Could not set LIO
> device attribute cmd_time_out/qfull_time_out for device: rbd.disk1.
> Kernel not supported. - error(Cannot find attribute: qfull_time_out)
> 
> When should we expect to have it included ?
> 

Basically whenever the upstream target layer maintainer comes back from
wherever he is. I know that answer sucks, and I am sorry. All I can do
is ping him every once in a while, resend my patches to keep them up to
date, and wait for him to get time.

When the target layer maintainer at least says that the kernel/user API
being added in this patchset

https://www.spinics.net/lists/target-devel/msg16372.html

is ok, I have a kernel that I can push to

https://github.com/ceph/ceph-client

with everything needed so you do not have to wait for them.

An updated tcmu-runner RPM will also be needed, but I am the maintainer and
that is just waiting for the kernel patches to get the OK.


[ceph-users] Luminous RGW Metadata Search

2017-12-22 Thread Youzhong Yang
I followed the exact steps of the following page:

http://ceph.com/rgw/new-luminous-rgw-metadata-search/

"us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue,
the service runs successfully.

"us-east-es" zone is serviced by host "ceph-rgw2" on port 8002, the service
was unable to start:

# /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-rgw2 --setuser ceph --setgroup ceph
2017-12-22 16:35:48.513912 7fc54e98ee80 -1 Couldn't init storage provider (RADOS)

It's this mysterious error message, "Couldn't init storage provider (RADOS)";
there is no clue about what is wrong, what is misconfigured, or anything like
that.

Yes, I have Elasticsearch installed and running on host 'ceph-rgw2'. Is
there any additional configuration required for Elasticsearch?

Did I miss anything? What is the magic to make this basic stuff work?
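
One way to get more detail, not mentioned in the thread, is to rerun with
higher debug levels (the values are only illustrative):

    /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-rgw2 \
        --setuser ceph --setgroup ceph --debug-rgw=20 --debug-ms=1

The extra logging should show which step (authentication, pool access,
zone/period lookup) fails before the generic "Couldn't init storage provider
(RADOS)" message.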

Thanks,

--Youzhong


Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander



On 12/22/2017 02:40 PM, Dan van der Ster wrote:

Hi Wido,

We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
couple of years.
The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
enclosures.



Yes, I see. I was looking for a solution without a JBOD: about 12x 3.5" or
~20x 2.5" drives in 1U, with a decent CPU to run OSDs on.



Other than that, I have nothing particularly interesting to say about
these. Our data centre procurement team have also moved on with
standard racked equipment, so I suppose they also found these
uninteresting.



It really depends. When properly deployed, OCP can seriously lower power
costs for numerous reasons and thus lower the TCO of a Ceph cluster.


But I dislike machines with a lot of disks for Ceph; I prefer smaller
machines.


Hopefully somebody knows a vendor who makes such OCP machines.

Wido


Cheers, Dan

[1] http://www.wiwynn.com/english/product/type/details/32?ptype=28


On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander  wrote:

Hi,

I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
I'm looking for.

First of all, the geek in me loves OCP and the design :-) Now I'm trying to
match it with Ceph.

Looking at wiwynn [1] they offer a few OCP servers:

- 3 nodes in 2U with a single 3.5" disk [2]
- 2U node with 30 disks and an Atom C2000 [3]
- 2U JBOD with 12G SAS [4]

For Ceph I would want:

- 1U node / 12x 3.5" / Fast CPU
- 1U node / 24x 2.5" / Fast CPU

They don't seem to exist yet when looking for OCP servers.

Although 30 drives per node is fine, it would become a very large Ceph cluster
when building with something like that.

Has anybody built Ceph clusters using OCP hardware yet? If so, which vendor
and what are your experiences?

Thanks!

Wido

[0]: http://www.opencompute.org/
[1]: http://www.wiwynn.com/
[2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
[3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
[4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28


Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander



On 12/22/2017 03:27 PM, Luis Periquito wrote:

Hi Wido,

what are you trying to optimise? Space? Power? Are you tied to OCP?



A lot of things. I'm not tied to OCP, but OCP has a lot of advantages over
regular 19" servers, and thus I'm investigating Ceph + OCP.


- Less power loss due to only one AC->DC conversion (in rack)
- Distributed UPS by having UPS in each rack
- Machine maintenance without tools

And the list goes on. But I don't want to make this an OCP thread or a
commercial.


Just wondering if there are people who have OCP servers running with Ceph
and, if so, which models.


Wido


I remember Ciara had some interesting designs like this
http://www.ciaratech.com/product.php?id_prod=539=en_cat1=1_cat2=67
though I don't believe they are OCP.

I also had a look and Supermicro has a few that may fill your requirements
(https://www.supermicro.nl/products/system/1U/6019/SSG-6019P-ACR12L.cfm)

On Fri, Dec 22, 2017 at 1:40 PM, Dan van der Ster  wrote:

Hi Wido,

We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
couple of years.
The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
enclosures.

Other than that, I have nothing particularly interesting to say about
these. Our data centre procurement team have also moved on with
standard racked equipment, so I suppose they also found these
uninteresting.

Cheers, Dan

[1] http://www.wiwynn.com/english/product/type/details/32?ptype=28


On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander  wrote:

Hi,

I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
I'm looking for.

First of all, the geek in me loves OCP and the design :-) Now I'm trying to
match it with Ceph.

Looking at wiwynn [1] they offer a few OCP servers:

- 3 nodes in 2U with a single 3.5" disk [2]
- 2U node with 30 disks and an Atom C2000 [3]
- 2U JBOD with 12G SAS [4]

For Ceph I would want:

- 1U node / 12x 3.5" / Fast CPU
- 1U node / 24x 2.5" / Fast CPU

They don't seem to exist yet when looking for OCP servers.

Although 30 drives per node is fine, it would become a very large Ceph cluster
when building with something like that.

Has anybody built Ceph clusters using OCP hardware yet? If so, which vendor
and what are your experiences?

Thanks!

Wido

[0]: http://www.opencompute.org/
[1]: http://www.wiwynn.com/
[2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
[3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
[4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28


Re: [ceph-users] MDS behind on trimming

2017-12-22 Thread Stefan Kooman
Quoting Stefan Kooman (ste...@bit.nl):
> Quoting Dan van der Ster (d...@vanderster.com):
> > Hi,
> > 
> > We've used double the defaults for around 6 months now and haven't had any
> > behind on trimming errors in that time.
> > 
> >mds log max segments = 60
> >mds log max expiring = 40
> > 
> > Should be simple to try.
> Yup, and works like a charm:
> 
> ceph tell mds.* injectargs '--mds_log_max_segments=60'
> ceph tell mds.* injectargs '--mds_log_max_expiring=40'


^^ I have bumped these again to "--mds_log_max_segments=120" and
"--mds_log_max_expiring=80" because, while doing 2K objects/sec (client IO
of 120 MB/s, ~12000 IOPS), the MDS was behind on trimming again.
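
To keep the higher values across MDS restarts, the same settings can also go
into ceph.conf on the MDS hosts, e.g.:

    [mds]
    mds log max segments = 120
    mds log max expiring = 80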

FYI,

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Luis Periquito
Hi Wido,

what are you trying to optimise? Space? Power? Are you tied to OCP?

I remember Ciara had some interesting designs like this
http://www.ciaratech.com/product.php?id_prod=539=en_cat1=1_cat2=67
though I don't believe they are OCP.

I also had a look and Supermicro has a few that may fill your requirements
(https://www.supermicro.nl/products/system/1U/6019/SSG-6019P-ACR12L.cfm)



On Fri, Dec 22, 2017 at 1:40 PM, Dan van der Ster  wrote:
> Hi Wido,
>
> We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
> couple of years.
> The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
> enclosures.
>
> Other than that, I have nothing particularly interesting to say about
> these. Our data centre procurement team have also moved on with
> standard racked equipment, so I suppose they also found these
> uninteresting.
>
> Cheers, Dan
>
> [1] http://www.wiwynn.com/english/product/type/details/32?ptype=28
>
>
> On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander  wrote:
>> Hi,
>>
>> I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
>> I'm looking for.
>>
>> First of all, the geek in me loves OCP and the design :-) Now I'm trying to
>> match it with Ceph.
>>
>> Looking at wiwynn [1] they offer a few OCP servers:
>>
>> - 3 nodes in 2U with a single 3.5" disk [2]
>> - 2U node with 30 disks and an Atom C2000 [3]
>> - 2U JBOD with 12G SAS [4]
>>
>> For Ceph I would want:
>>
>> - 1U node / 12x 3.5" / Fast CPU
>> - 1U node / 24x 2.5" / Fast CPU
>>
>> They don't seem to exist yet when looking for OCP servers.
>>
>> Although 30 drives per node is fine, it would become a very large Ceph
>> cluster when building with something like that.
>>
>> Has anybody built Ceph clusters using OCP hardware yet? If so, which vendor
>> and what are your experiences?
>>
>> Thanks!
>>
>> Wido
>>
>> [0]: http://www.opencompute.org/
>> [1]: http://www.wiwynn.com/
>> [2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
>> [3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
>> [4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28


Re: [ceph-users] How to use vfs_ceph

2017-12-22 Thread David Disseldorp
On Fri, 22 Dec 2017 12:10:18 +0100, Felix Stolte wrote:

> I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working 
> now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very 
> unhappy with that).

The ceph:user_id smb.conf functionality was first shipped with
Samba 4.7.0 (via commit ec788bead311), so your version is likely
lacking this functionality.

> Which Samba version & Linux distribution are you using?

With SES5 and openSUSE 42.3 we ship Samba 4.6.9, but it includes a
backport of ec788bead311.

> Are you using quotas on subdirectories, and are they applied when you
> export the subdirectory via Samba?

I've not used them personally, but given that Samba uses the standard
libcephfs API, max_bytes and max_files quotas should be enforced.
However, keep in mind that they don't in any way map to SMB quotas.
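
For reference, CephFS directory quotas are set via extended attributes on the
directory itself, along these lines (path and size are placeholders):

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/share-dir   # 100 GiB

Enforcement is done client-side by libcephfs/ceph-fuse, which is also what
vfs_ceph builds on.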

Cheers, David


Re: [ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Dan van der Ster
Hi Wido,

We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
couple of years.
The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
enclosures.

Other than that, I have nothing particularly interesting to say about
these. Our data centre procurement team have also moved on with
standard racked equipment, so I suppose they also found these
uninteresting.

Cheers, Dan

[1] http://www.wiwynn.com/english/product/type/details/32?ptype=28


On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander  wrote:
> Hi,
>
> I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
> I'm looking for.
>
> First of all, the geek in me loves OCP and the design :-) Now I'm trying to
> match it with Ceph.
>
> Looking at wiwynn [1] they offer a few OCP servers:
>
> - 3 nodes in 2U with a single 3.5" disk [2]
> - 2U node with 30 disks and an Atom C2000 [3]
> - 2U JBOD with 12G SAS [4]
>
> For Ceph I would want:
>
> - 1U node / 12x 3.5" / Fast CPU
> - 1U node / 24x 2.5" / Fast CPU
>
> They don't seem to exist yet when looking for OCP servers.
>
> Although 30 drives per node is fine, it would become a very large Ceph
> cluster when building with something like that.
>
> Has anybody built Ceph clusters using OCP hardware yet? If so, which vendor
> and what are your experiences?
>
> Thanks!
>
> Wido
>
> [0]: http://www.opencompute.org/
> [1]: http://www.wiwynn.com/
> [2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
> [3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
> [4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28


Re: [ceph-users] Cephfs limits

2017-12-22 Thread Yan, Zheng
On Fri, Dec 22, 2017 at 3:23 PM, nigel davies  wrote:
> Right, OK, I'll take a look. Can you do that after the pool/CephFS has been
> set up?
>
yes, see http://docs.ceph.com/docs/jewel/rados/operations/pools/
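
A sketch matching the sizes from the original question quoted below (the pool
names are placeholders; check yours with "ceph osd pool ls"):

    ceph osd pool set-quota cephfs_data max_bytes 21990232555520              # ~20 TB
    ceph osd pool set-quota default.rgw.buckets.data max_bytes 4398046511104  # ~4 TB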


>
> On 21 Dec 2017 12:25 pm, "Yan, Zheng"  wrote:
>>
>> On Thu, Dec 21, 2017 at 6:18 PM, nigel davies  wrote:
>> > Hey all, is it possible to set CephFS to have a space limit?
>> > E.g. I'd like to set my CephFS to a limit of 20 TB
>> > and my S3 storage to 4 TB, for example.
>> >
>>
>> You can set a pool quota on the CephFS data pools.
>>
>> > thanks
>> >


Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem(possibly tcmalloc related)

2017-12-22 Thread Webert de Souza Lima
On Thu, Dec 21, 2017 at 12:52 PM, shadow_lin  wrote:
>
> After 18:00 the write throughput suddenly dropped and the OSD latency
> increased. TCMalloc started reclaiming the page heap freelist much more
> frequently. All of this happened very fast and every OSD had the identical
> pattern.
>
Could that be caused by OSD scrub?  Check your "osd_scrub_begin_hour"

  ceph daemon osd.$ID config show | grep osd_scrub
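
If scrubbing turns out to be the culprit, it can be pushed into off-peak
hours, for example (the hours are only illustrative):

    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'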


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


Re: [ceph-users] MDS locations

2017-12-22 Thread Webert de Souza Lima
It depends on how you use it. For me, it runs fine on the OSD hosts, but the
MDS server consumes loads of RAM, so be aware of that.
If the system load average goes too high due to OSD disk utilization, the
MDS might run into trouble too, as a delayed response from the host
could cause the MDS to be marked as down.
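
One way to bound that RAM usage on Luminous is the MDS cache memory limit
(the value below is only an example), set in ceph.conf on the MDS hosts:

    [mds]
    mds cache memory limit = 4294967296   # ~4 GiB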


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Fri, Dec 22, 2017 at 5:24 AM, nigel davies  wrote:

> Hey all,
>
> Is it OK to set up MDS on the same servers that host the OSDs, or should
> they be on different servers?
>


Re: [ceph-users] cephfs mds millions of caps

2017-12-22 Thread Webert de Souza Lima
On Fri, Dec 22, 2017 at 3:20 AM, Yan, Zheng  wrote:

> idle client shouldn't hold so many caps.
>

I'll try to make it reproducible for you to test.


yes. For now, it's better to run "echo 3 >/proc/sys/vm/drop_caches"
> after the cronjob finishes


Thanks. I'll adopt that for now.


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*


Re: [ceph-users] How to use vfs_ceph

2017-12-22 Thread Felix Stolte

Hi David,

I am using Samba 4.6.7 (shipped with Ubuntu 17.10). I've got it working
now by copying the ceph.client.admin.keyring to /etc/ceph (I'm very
unhappy with that). Which Samba version & Linux distribution are you using?
Are you using quotas on subdirectories, and are they applied when you
export the subdirectory via Samba?


Regards Felix


On 12/21/2017 06:19 PM, David Disseldorp wrote:

Hi Felix,

On Thu, 21 Dec 2017 14:16:23 +0100, Felix Stolte wrote:


Hello folks,

is anybody using the vfs_ceph module for exporting cephfs as samba
shares?

Yes, alongside Luminous.
Which version of Samba are you using? Make sure it includes the fix for
https://bugzilla.samba.org/show_bug.cgi?id=12911 .


We are running Ceph Jewel with cephx enabled. The manpage of
vfs_ceph only references the option ceph:config_file. How do I need to
configure my share (or maybe ceph.conf)?

log.smbd:  '/' does not exist or permission denied when connecting to
[vfs] Error was Transport endpoint is not connected

I have a user ctdb with keyring file /etc/ceph/ceph.client.ctdb.keyring
with permissions:

      caps: [mds] allow rw
      caps: [mon] allow r
      caps: [osd] allow rwx pool=cephfs_metadata,allow rwx pool=cephfs_data

I can mount cephfs with cephf-fuse using the id ctdb and its keyfile.

My share definition is:

[vfs]
      comment = vfs
      path = /
      read only = No
      vfs objects = acl_xattr ceph
      ceph:user_id = ctdb
      ceph:config_file = /etc/ceph/ceph.conf

Your configuration looks fine - can you confirm that the *mapped* Samba
user is permitted access to the root of the CephFS filesystem? Have you
tried using ceph-fuse as the mapped user? If you're still running into
problems, feel free to raise a ticket at bugzilla.samba.org and assign
it to me.

Cheers, David


--
Felix Stolte
IT-Services
Tel.: +49 2461 61-9243
Email: f.sto...@fz-juelich.de

Forschungszentrum Jülich GmbH
52425 Jülich
Sitz der Gesellschaft: Jülich
Eingetragen im Handelsregister des Amtsgerichts Düren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir. Dr. Karl Eugen Huthmacher
Geschäftsführung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt






[ceph-users] Open Compute (OCP) servers for Ceph

2017-12-22 Thread Wido den Hollander

Hi,

I'm looking at OCP [0] servers for Ceph and I'm not able to find yet 
what I'm looking for.


First of all, the geek in me loves OCP and the design :-) Now I'm trying 
to match it with Ceph.


Looking at wiwynn [1] they offer a few OCP servers:

- 3 nodes in 2U with a single 3.5" disk [2]
- 2U node with 30 disks and an Atom C2000 [3]
- 2U JBOD with 12G SAS [4]

For Ceph I would want:

- 1U node / 12x 3.5" / Fast CPU
- 1U node / 24x 2.5" / Fast CPU

They don't seem to exist yet when looking for OCP servers.

Although 30 drives per node is fine, it would become a very large Ceph cluster
when building with something like that.


Has anybody built Ceph clusters using OCP hardware yet? If so, which
vendor and what are your experiences?


Thanks!

Wido

[0]: http://www.opencompute.org/
[1]: http://www.wiwynn.com/
[2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
[3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
[4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28


Re: [ceph-users] Permissions for mon status command

2017-12-22 Thread Andreas Calminder
Thanks!
I completely missed that, adding name='client.something' did the trick.
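
For completeness, a sketch of how such a restricted client can be created in
the first place (the client name and read-only mon cap mirror the ones used
in this thread):

    ceph auth get-or-create client.python mon 'allow r' \
        -o /etc/ceph/ceph.client.python.keyring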

/andreas

On 22 December 2017 at 02:22, David Turner  wrote:
> You aren't specifying your cluster user, only the keyring.  So the
> connection command is still trying to use the default client.admin instead
> of client.python.  Here's the connect line I use in my scripts.
>
> rados.Rados(conffile='/etc/ceph/ceph.conf', conf=dict(keyring =
> '/etc/ceph/ceph.client.python.keyring'), name='client.python')
>
> On Thu, Dec 21, 2017 at 6:55 PM Alvaro Soto  wrote:
>>
>> Hi Andreas,
>> I believe it is not a problem of caps; I have tested using the same cap on
>> mon and I have the same problem. Still looking into it.
>>
>> [client.python]
>>
>> key = AQDORjxaYHG9JxAA0qiZC0Rmf3qulsO3P/bZgw==
>>
>> caps mon = "allow r"
>>
>>
>>
>> # ceph -n client.python --keyring ceph.client.python.keyring health
>>
>> HEALTH_OK
>>
>>
>> but if I run the python script that contains a connect command to the
>> cluster.
>>
>>
>> # python health.py
>>
>> Traceback (most recent call last):
>>
>>   File "health.py", line 13, in <module>
>>
>> r.connect()
>>
>>   File "/usr/lib/python2.7/dist-packages/rados.py", line 429, in connect
>>
>> raise make_ex(ret, "error connecting to the cluster")
>>
>> rados.Error: error connecting to the cluster: errno EINVAL
>>
>>
>> ** PYTHON SCRIPT 
>>
>> #!/usr/bin/env python
>>
>>
>> import rados
>>
>> import json
>>
>>
>> def get_cluster_health(r):
>>
>> cmd = {"prefix":"status", "format":"json"}
>>
>> ret, buf, errs = r.mon_command(json.dumps(cmd), b'', timeout=5)
>>
>> result = json.loads(buf)
>>
>> return result['health']['overall_status']
>>
>>
>> r = rados.Rados(conffile = '/etc/ceph/ceph.conf', conf = dict (keyring =
>> '/etc/ceph/ceph.client.python.keyring'))
>>
>> r.connect()
>>
>>
>> print("{0}".format(get_cluster_health(r)))
>>
>>
>> if r is not None:
>>
>> r.shutdown()
>>
>> *
>>
>>
>>
>>
>> On Thu, Dec 21, 2017 at 4:15 PM, Andreas Calminder
>>  wrote:
>>>
>>> Hi,
>>> I'm writing a small Python script using librados to display cluster
>>> health (the same info as "ceph health detail" shows). It works fine, but I'd
>>> rather not use the admin keyring for something like this. However, I have no
>>> clue what kind of caps I should or can set. I was kind of hoping that "mon
>>> allow r" would do it, but that didn't work, and I'm unable to find any
>>> documentation that covers this. Any pointers would be appreciated.
>>>
>>> Thanks,
>>> Andreas
>>>
>>>
>>
>>
>>
>> --
>>
>> ATTE. Alvaro Soto Escobar
>>
>> --
>> Great people talk about ideas,
>> average people talk about things,
>> small people talk ... about other people.