Re: [ceph-users] ACL support in Jewel using fuse and SAMBA

2016-05-07 Thread Eric Eastman
On Fri, May 6, 2016 at 2:14 PM, Eric Eastman
 wrote:

> As it should be working, I will increase the logging level in my
> smb.conf file and see what info I can get out of the logs, and report back.

Setting log level = 20 in my smb.conf file and then trying to add an
additional user to a directory from the Windows 2012 server, which has
mounted a share backed by a fuse-mounted Ceph file system, shows the
error "Operation not supported" in the smbd log file:

[2016/05/07 23:41:19.213997, 10, pid=2823630, effective(2000501,
2000514), real(2000501, 0)]
../source3/modules/vfs_posixacl.c:92(posixacl_sys_acl_set_file)
  Calling acl_set_file: New folder (4), 0
[2016/05/07 23:41:19.214170, 10, pid=2823630, effective(2000501,
2000514), real(2000501, 0)]
../source3/modules/vfs_posixacl.c:111(posixacl_sys_acl_set_file)
  acl_set_file failed: Operation not supported

A simple test of setting an ACL from the command line on a fuse-mounted
Ceph file system also fails:
# mkdir /cephfsFUSE/x
# setfacl -m d:o:rw /cephfsFUSE/x
setfacl: /cephfsFUSE/x: Operation not supported

The same test against the same Ceph file system succeeds when it is
mounted with the kernel client.
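
A way to separate plain xattr handling from POSIX ACL handling is a test
like the following (untested here; the names are just placeholders).
setfacl ultimately stores the ACL in the system.posix_acl_access /
system.posix_acl_default xattrs, while Samba's user.SAMBA_PAI and
security.NTACL entries are ordinary xattr writes:

# setfattr -n user.test -v hello /cephfsFUSE/x
# getfattr -d /cephfsFUSE/x
# setfattr -n security.NTACL -v test /cephfsFUSE/x

If those succeed while setfacl still fails, the xattr path itself is fine
and the problem is specifically that the FUSE client does not advertise
POSIX ACL support; if they fail too, the xattr writes are being rejected
as well.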

Is there some option in my ceph.conf file or on the mount line that
needs to be set to support ACLs on a fuse-mounted Ceph file
system?
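
In case it is relevant: newer Ceph client config references describe two
options for this, but I am not sure the Jewel 10.2.0 ceph-fuse honors
them, which I suppose is part of the question. As an untested sketch:

[client]
    fuse default permissions = false
    client acl type = posix_acl

followed by unmounting and re-running ceph-fuse so the options take
effect.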

-Eric


>
> On Fri, May 6, 2016 at 12:53 PM, Gregory Farnum  wrote:
>> On Fri, May 6, 2016 at 9:53 AM, Eric Eastman
>>  wrote:
>>> I was doing some SAMBA testing and noticed that a kernel mounted share
>>> acted differently than a fuse mounted share with Windows security on
>>> my Windows client. I cut my test down to as simple as possible, and I
>>> am seeing the kernel mounted Ceph file system working as expected with
>>> SAMBA and the fuse mounted file system not creating all the SAMBA
>>> ACLs. Is there some option that needs to be turned on to have the fuse
>>> mount support ACLs in the same way the kernel mount does?
>>>
...
>>> The file created by SAMBA using the fuse mount is missing the
>>> user.SAMBA_PAI and security.NTACL ACLs.  This prevents SAMBA from
>>> properly supporting fuse mounted file systems in an AD setup.
>>
>> This is odd — the Client library quite explicitly supports "user",
>> "security", "trusted", and "ceph" xattr namespaces. And I think this
>> is tested by other things.
>>
>> Presumably you can get some logs out of Samba indicating that the
>> xattr writes failed?
>>
>> Also, it looks like you've noted Samba's CephFS VFS — is there some
>> reason you don't want to just use that? :)
>> -Greg
>>
>>>
>>> Test setup info:
>>> ceph -v
>>> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>>>
>>> Ubuntu version is 14.04 with the 4.6rc4 PPA kernel:
>>> uname -a
>>> Linux ede-c1-gw04 4.6.0-040600rc4-generic #201604172330 SMP Mon Apr 18
>>> 03:32:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>>>
>>> Samba version 4.4.2
>>>
>>> Ceph file system mount info:
>>> grep ceph /proc/mounts
>>> 10.14.2.11,10.14.2.12,10.14.2.13:/ /cephfs ceph
>>> rw,noatime,name=cephfs,secret=,acl 0 0
>>> ceph-fuse /cephfsFUSE fuse.ceph-fuse
>>> rw,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
>>>
>>> I have put instructions on how I built SAMBA, the smb.conf file,
>>> /etc/fstab, and the ceph.conf file in pastebin at:
>>> http://pastebin.com/hv7PEqNm
>>>
>>> Best regards,
>>> Eric
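
Regarding Greg's suggestion above about Samba's bundled CephFS VFS
module: a minimal share definition using it would look roughly like the
following (parameter names are from the vfs_ceph manpage, the user_id is
whichever cephx client you use; I have not tried this against this setup):

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no
    read only = no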
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Migrating from Ubuntu to CentOS / RHEL

2016-05-07 Thread Tu Holmes
Hey Cephers.

So I have been thinking about migrating my Ceph cluster from Ubuntu to
CentOS.

I have a lot more experience with CentOS and RHEL.

What would be the best path to do this in your opinions?

My overall thought is to rebuild my mons on CentOS, upgrade the kernel,
and finally move to Jewel, but I'm thinking I could run into some
conflicts.

Obviously my mons need to be on Jewel before my OSD nodes, but do you think I
could do it all at the same time, or should I go ahead and upgrade the kernels
everywhere first, then upgrade to Jewel, and then move the monitors to CentOS,
followed by my OSD nodes one at a time?

If that's the case, do I need to upgrade the kernels before Jewel, or will
the stock kernel be "ok" enough?

Thoughts?

Any tips greatly appreciated.

//Tu Holmes
//Doer of Things and Stuff
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to avoid kernel conflicts

2016-05-07 Thread ceph
As the tip says, you should not use RBD via the kernel module on an OSD host.

However, using it from userspace code (librbd etc., as in KVM) is fine.

Generally, on the same host you should not mix:
- a "server" (OSD) in userspace
- a "client" (RBD / CephFS) in kernelspace
because under memory pressure the kernel client may need to flush dirty
pages through the local OSD, which can deadlock.


On 07/05/2016 22:13, K.C. Wong wrote:
> Hi,
> 
> I saw this tip in the troubleshooting section:
> 
> DO NOT mount kernel clients directly on the same node as your Ceph Storage Cluster,
> because kernel conflicts can arise. However, you can mount kernel clients within
> virtual machines (VMs) on a single node.
> 
> Does this mean having a converged deployment is
> a bad idea? Do I really need dedicated storage
> nodes?
> 
> By converged, I mean every node hosting an OSD.
> At the same time, workload on the node may mount
> RBD volumes or access CephFS. Do I have to isolate
> the OSD daemon in its own VM?
> 
> Any advice would be appreciated.
> 
> -kc
> 
> K.C. Wong
> kcw...@verseon.com
> 4096R/B8995EDE  E527 CBE8 023E 79EA 8BBB  5C77 23A6 92E9 B899 5EDE
> hkps://hkps.pool.sks-keyservers.net
> 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to avoid kernel conflicts

2016-05-07 Thread K.C. Wong
Hi,

I saw this tip in the troubleshooting section:

DO NOT mount kernel clients directly on the same node as your Ceph Storage Cluster,
because kernel conflicts can arise. However, you can mount kernel clients within
virtual machines (VMs) on a single node.

Does this mean having a converged deployment is
a bad idea? Do I really need dedicated storage
nodes?

By converged, I mean every node hosting an OSD.
At the same time, workload on the node may mount
RBD volumes or access CephFS. Do I have to isolate
the OSD daemon in its own VM?

Any advice would be appreciated.

-kc

K.C. Wong
kcw...@verseon.com
4096R/B8995EDE  E527 CBE8 023E 79EA 8BBB  5C77 23A6 92E9 B899 5EDE
hkps://hkps.pool.sks-keyservers.net



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Read/Write Speed

2016-05-07 Thread Mark Nelson
Interesting, we've seen some issues with aio_submit and NVMe cards with 
3.10, but haven't seen any issues with spinning disks.


Mark

On 05/07/2016 01:00 PM, Roozbeh Shafiee wrote:

Thank you Mark for your response.

The problem was caused by kernel issues. I installed the Jewel release on
CentOS 7 with the 3.10 kernel, and it seems 3.10 is too old for Ceph Jewel,
so after upgrading to kernel 4.5.2 everything was fixed and works perfectly.

Regards,
Roozbeh

On May 3, 2016 21:13, "Mark Nelson" wrote:

Hi Roozbeh,

There isn't nearly enough information here regarding your benchmark
and test parameters to be able to tell why you are seeing
performance swings.  It could be anything from network hiccups, to
throttling in the ceph stack, to unlucky randomness in object
distribution, to vibrations in the rack causing your disk heads to
resync, to fragmentation of the underlying filesystem (especially
important for sequential reads).

Generally speaking if you want to try to isolate the source of the
problem, it's best to find a way to make the issue repeatable on
demand, then setup your tests so you can record system metrics
(device queue/service times, throughput stalls, network oddities,
etc) and start systematically tracking down when and why slowdowns
occur.  Sometimes you might even be able to reproduce issues outside
of Ceph (Network problems are often a common source).
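
For example, something as simple as a rados bench run with iostat running
alongside it on the OSD nodes can make a slowdown reproducible and
visible (the pool name below is just a placeholder):

rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
iostat -x 1    # on each OSD node while the bench runs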

It might also be worth looking at your PG and data distribution.  IE
if you have some clumpiness you might see variation in performance
as some OSDs starve for IOs while others are overloaded.
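
A quick way to check that is to compare per-OSD utilization and PG counts,
e.g. with:

ceph osd df tree

Large spreads in %USE or PGs per OSD tend to show up as exactly this kind
of throughput variation.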

Good luck!

Mark

On 05/03/2016 11:16 AM, Roozbeh Shafiee wrote:

Hi,

I have a test Ceph cluster in my lab which will be a storage backend for
one of my projects.
This cluster is my first experience with CentOS 7, but recently I have
had some use cases on Ubuntu 14.04 too.

Actually everything works fine and I get good functionality from this
cluster, but the main problem is the read and write performance.
I see too much swing in read and write throughput, ranging between
60 KB/s and 70 MB/s, especially on reads.
How can I tune this cluster to be a stable storage backend for my use case?

More information:

Number of OSDs: 5 physical servers, each with 4x 4 TB drives - 16 GB of RAM - Core i7 CPU
Number of Monitors: 1 virtual machine with 180 GB on SSD - 16 GB of RAM - on a KVM virtualization host
All Operating Systems: CentOS 7.2 with default kernel 3.10
All File Systems: XFS
Ceph Version: 10.2 Jewel
Switch for Private Networking: D-Link DGS-1008D Gigabit 8-port
NICs: 2x Gb/s NICs per server
Block Device on Client Server: Linux kernel RBD module


Thank you
Roozbeh





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Read/Write Speed

2016-05-07 Thread Roozbeh Shafiee
Thank you Mark for your response.

The problem was caused by kernel issues. I installed the Jewel release on
CentOS 7 with the 3.10 kernel, and it seems 3.10 is too old for Ceph Jewel,
so after upgrading to kernel 4.5.2 everything was fixed and works perfectly.

Regards,
Roozbeh
On May 3, 2016 21:13, "Mark Nelson"  wrote:

> Hi Roozbeh,
>
> There isn't nearly enough information here regarding your benchmark and
> test parameters to be able to tell why you are seeing performance swings.
> It could be anything from network hiccups, to throttling in the ceph stack,
> to unlucky randomness in object distribution, to vibrations in the rack
> causing your disk heads to resync, to fragmentation of the underlying
> filesystem (especially important for sequential reads).
>
> Generally speaking if you want to try to isolate the source of the
> problem, it's best to find a way to make the issue repeatable on demand,
> then setup your tests so you can record system metrics (device
> queue/service times, throughput stalls, network oddities, etc) and start
> systematically tracking down when and why slowdowns occur.  Sometimes you
> might even be able to reproduce issues outside of Ceph (Network problems
> are often a common source).
>
> It might also be worth looking at your PG and data distribution.  IE if
> you have some clumpiness you might see variation in performance as some
> OSDs starve for IOs while others are overloaded.
>
> Good luck!
>
> Mark
>
> On 05/03/2016 11:16 AM, Roozbeh Shafiee wrote:
>
>> Hi,
>>
>> I have a test Ceph cluster in my lab which will be a storage backend for
>> one of my projects.
>> This cluster is my first experience with CentOS 7, but recently I have had
>> some use cases on Ubuntu 14.04 too.
>>
>> Actually everything works fine and I get good functionality from this
>> cluster, but the main problem is the read and write performance.
>> I see too much swing in read and write throughput, ranging between
>> 60 KB/s and 70 MB/s, especially on reads.
>> How can I tune this cluster to be a stable storage backend for my use case?
>>
>> More information:
>>
>> Number of OSDs: 5 physical servers, each with 4x 4 TB drives - 16 GB of RAM - Core i7 CPU
>> Number of Monitors: 1 virtual machine with 180 GB on SSD - 16 GB of RAM - on a KVM virtualization host
>> All Operating Systems: CentOS 7.2 with default kernel 3.10
>> All File Systems: XFS
>> Ceph Version: 10.2 Jewel
>> Switch for Private Networking: D-Link DGS-1008D Gigabit 8-port
>> NICs: 2x Gb/s NICs per server
>> Block Device on Client Server: Linux kernel RBD module
>>
>>
>> Thank you
>> Roozbeh
>>
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CfP 11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16) (deadline extended May 20th)

2016-05-07 Thread VHPC 16
CfP 11th Workshop on Virtualization in High-Performance Cloud
Computing (VHPC '16)




CALL FOR PAPERS



11th Workshop on Virtualization in High­-Performance Cloud Computing
(VHPC '16) held in conjunction with the International Supercomputing
Conference - High Performance (ISC), June 19-23, 2016, Frankfurt,
Germany.





Date: June 23, 2016

Workshop URL: http://vhpc.org


Paper Submission Deadline: May 20th (extended)



Call for Papers


Virtualization technologies constitute a key enabling factor for
flexible resource management in modern data centers, and particularly
in cloud environments.  Cloud providers need to manage complex
infrastructures in a seamless fashion to support the highly dynamic
and heterogeneous workloads and hosted applications customers deploy.
Similarly, HPC environments have been increasingly adopting techniques
that enable flexible management of vast computing and networking
resources, close to marginal provisioning cost, which is unprecedented
in the history of scientific and commercial computing.


Various virtualization technologies contribute to the overall picture
in different ways: machine virtualization, with its capability to
enable consolidation of multiple under­utilized servers with
heterogeneous software and operating systems (OSes), and its
capability to live­-migrate a fully operating virtual machine (VM)
with a very short downtime, enables novel and dynamic ways to manage
physical servers; OS-­level virtualization (i.e., containerization),
with its capability to isolate multiple user­-space environments and
to allow for their co­existence within the same OS kernel, promises to
provide many of the advantages of machine virtualization with high
levels of responsiveness and performance; I/O Virtualization allows
physical NICs/HBAs to take traffic from multiple VMs or containers;
network virtualization, with its capability to create logical network
overlays that are independent of the underlying physical topology and
IP addressing, provides the fundamental ground on top of which evolved
network services can be realized with an unprecedented level of
dynamicity and flexibility; the increasingly adopted paradigm of
Software-­Defined Networking (SDN) promises to extend this flexibility
to the control and data planes of network paths.



Topics of Interest


The VHPC program committee solicits original, high-quality submissions
related to virtualization across the entire software stack with a
special focus on the intersection of HPC and the cloud. Topics
include, but are not limited to:


- Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids
- OS-level virtualization including container runtimes (Docker, rkt et al.)
- Lightweight compute node operating systems/VMMs
- Optimizations of virtual machine monitor platforms, hypervisors
- QoS and SLA in hypervisors and network virtualization
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Virtual per job / on-demand clusters and cloud bursting
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Programming models for virtualized environments
- Virtualization in data intensive computing and Big Data processing
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in the cloud
- Topology management and optimization for distributed virtualized applications
- Adaptation of emerging HPC technologies (high performance networks, RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization



The Workshop on Virtualization in High­-Performance Cloud Computing
(VHPC) aims to bring together researchers and industrial practitioners
facing the challenges posed by virtualization in order to foster
discussion, collaboration, mutual exchange of knowledge and
experience, enabling research to ultimately provide novel solutions
for virtualized computing systems of tomorrow.


The workshop will be one day in length, composed of 20 min paper
presentations, each followed by 10 min discussion sections, plus
lightning talks that are limited to 5 minutes.  Presentations may be
accompanied by interactive demonstrations.


Important Dates


May 20, 2016 - Paper submission deadline

May 30, 2016 - Acceptance notification

June 23, 2016 - Workshop Day

July 25, 2016 - Camera-ready version due



Chair


Michael Alexander (chair), TU Wien, Austria


[ceph-users] OSD - single drive RAID 0 or JBOD?

2016-05-07 Thread Tim Bishop
Hi all,

I've got servers (Dell R730xd) with a number of drives in connected to a
Dell H730 RAID controller. I'm trying to make a decision about whether I
should put the drives in "Non-RAID" mode, or if I should make individual
RAID 0 arrays for each drive.

Going for the RAID 0 approach would mean that the cache on the
controller will be used for the drives, giving some increased
performance. But it comes at the expense of less direct access to the
disks for the operating system and Ceph, and I have had one oddity[1]
with a drive that went away when it was switched from RAID 0 to Non-RAID.

There are currently no SSDs being used as journals in the machines, so
the increased performance is beneficial, but data integrity is obviously
paramount.

I've seen recommendations both ways on this list, so I'm hoping for
feedback from people who've made the same decision, possibly with
evidence (positive or negative) about either approach.

Thanks!

Tim.

[1] The OS was getting I/O errors when reading certain files which gave
scrub errors. No problems were shown in the RAID controller, and a check
of the disk didn't reveal any issues. Switching to Non-RAID mode and
recreating the OSD fixed the problem and there haven't been any issues
since.
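
For anyone hitting something similar: the scrub side can be chased down
with the usual commands before deciding to recreate the OSD (the PG id
below is just a placeholder):

ceph health detail                   # lists the inconsistent PGs
rados list-inconsistent-obj 2.5f     # Jewel and later, I believe
ceph pg repair 2.5f

though in my case the real fix was switching to Non-RAID and recreating
the OSD, as described above.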

-- 
Tim Bishop
http://www.bishnet.net/tim/
PGP Key: 0x6C226B37FDF38D55

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] removing 'rados cppool' command

2016-05-07 Thread Mykola Golub
On Fri, May 06, 2016 at 03:41:34PM -0400, Sage Weil wrote:
> This PR
> 
>   https://github.com/ceph/ceph/pull/8975
> 
> removes the 'rados cppool' command.  The main problem is that the command 
> does not make a faithful copy of all data because it doesn't preserve the 
> snapshots (and snapshot related metadata).  That means if you copy an RBD 
> pool it will render the images somewhat broken (snaps won't be present and 
> won't work properly).  It also doesn't preserve the user_version field 
> that some librados users may rely on.
> 
> Since it's obscure and of limited use, this PR just removes it.

Copying a pool is sometimes useful, even with those limitations.

Until there is an alternative way to do the same thing I would not remove
this. A better approach, to me, would be to move this functionality into
something like `ceph_radostool` (people use such tools only when
facing extraordinary situations, so they are more careful and expect
limitations).
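
The snapshot problem is easy to demonstrate, by the way; a quick sketch
(pool/image names made up):

rbd create rbd/img --size 1024
rbd snap create rbd/img@snap1
ceph osd pool create rbdcopy 64
rados cppool rbd rbdcopy
rbd -p rbdcopy snap ls img     # per Sage's description, the snapshot
                               # will be missing/broken in the copy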

-- 
Mykola Golub
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com