[ceph-users] Infernalis

2016-01-08 Thread HEWLETT, Paul (Paul)
Hi Cephers

Just fired up first Infernalis cluster on RHEL7.1.

The following:

[root@citrus ~]# systemctl status ceph-osd@0.service
ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled)
   Active: active (running) since Fri 2016-01-08 15:57:11 GMT; 1h 8min ago
 Main PID: 7578 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
   └─7578 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph 
--setgroup ceph

Jan 08 15:57:10 citrus.arch.velocix.com systemd[1]: Starting Ceph object 
storage daemon...
Jan 08 15:57:10 citrus.arch.velocix.com ceph-osd-prestart.sh[7520]: getopt: 
unrecognized option '--setuser'
Jan 08 15:57:10 citrus.arch.velocix.com ceph-osd-prestart.sh[7520]: getopt: 
unrecognized option '--setgroup'
Jan 08 15:57:11 citrus.arch.velocix.com ceph-osd-prestart.sh[7520]: 
create-or-move updating item name 'osd.0' weight 0.2678 at location 
{host=citrus,root=default} to crush map
Jan 08 15:57:11 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.
Jan 08 15:57:11 citrus.arch.velocix.com ceph-osd[7578]: starting osd.0 at :/0 
osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 08 15:57:11 citrus.arch.velocix.com ceph-osd[7578]: 2016-01-08 
15:57:11.743134 7f61ee37e900 -1 osd.0 0 log_to_monitors {default=true}
Jan 08 15:57:11 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.
Jan 08 15:57:12 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.
Jan 08 15:57:12 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.
Jan 08 15:57:12 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.
Jan 08 15:57:14 citrus.arch.velocix.com systemd[1]: Started Ceph object storage 
daemon.

This shows some warnings:

   - setuser unrecognised option (and setgroup) - is this an error?
   - why five messages about starting the Ceph object storage daemon? Is this also 
an error of some kind?

Paul
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] bug 12200

2016-01-04 Thread HEWLETT, Paul (Paul)
Thanks...




On 23/12/2015, 21:33, "Gregory Farnum" <gfar...@redhat.com> wrote:

>On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul)
><paul.hewl...@alcatel-lucent.com> wrote:
>> Seasons Greetings Cephers..
>>
>> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in
>> Infernalis?
>>
>> Any chance that it can be backported to Hammer? (I don’t see it planned)
>>
>> We are hitting this bug more frequently than desired so would be keen to see
>> it fixed in Hammer.
>
>David tells me the fix was fairly complicated, involved some encoding
>changes, and doesn't backport cleanly. So I guess it's not likely to
>happen.
>-Greg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] lttng and Infernalis

2016-01-04 Thread HEWLETT, Paul (Paul)
Hi Cephers and Happy New Year

I am under the impression that ceph was refactored to allow dynamic enabling of 
lttng in Infernalis.

Is there any documentation on how to enable lttng in Infernalis? (I cannot 
find anything…)

Regards
Paul
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] bug 12200

2015-12-23 Thread HEWLETT, Paul (Paul)
Seasons Greetings Cephers..

Can I assume that http://tracker.ceph.com/issues/12200 is fixed in Infernalis?

Any chance that it can be backported to Hammer? (I don’t see it planned)

We are hitting this bug more frequently than desired so would be keen to see it 
fixed in Hammer.

Regards
Paul
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
When installing Hammer on RHEL7.1 we regularly got the message that partprobe 
failed to inform the kernel. We are using the ceph-disk command from ansible to 
prepare the disks. The partprobe failure seems harmless and our OSDs always 
activated successfully.

If the Infernalis version of ceph-disk is going to trap an error from partprobe 
then we will be unable to prepare our OSDs and this becomes a bug.
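
For anyone wanting to double-check that the failure really is cosmetic, a rough 
sketch (the device name is just an example from our logs) is to ask the kernel to 
re-read the partition table by hand and then confirm both partitions are visible:

  partprobe /dev/sdg
  partx -u /dev/sdg        # or: blockdev --rereadpt /dev/sdg
  lsblk /dev/sdg

If lsblk shows the data and journal partitions and the OSD activates, the warning 
can apparently be ignored.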

Regards
Paul



On 16/12/2015, 06:17, "ceph-users on behalf of Matt Taylor" 
 wrote:

>Hi all,
>
>After recently upgrading to CentOS 7.2 and installing a new Ceph cluster 
>using Infernalis v9.2.0, I have noticed that disks are failing to prepare.
>
>I have observed the same behaviour over multiple Ceph servers when 
>preparing disks. All the servers are identical.
>
>Disks are zapping fine, however when running 'ceph-deploy disk 
>prepare', we're encountering the following error:
>
>> [ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
>> kvsrv02:/dev/sdr
>> [ceph_deploy.cli][INFO ] ceph-deploy options:
>> [ceph_deploy.cli][INFO ] username : None
>> [ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
>> [ceph_deploy.cli][INFO ] dmcrypt : False
>> [ceph_deploy.cli][INFO ] verbose : False
>> [ceph_deploy.cli][INFO ] overwrite_conf : False
>> [ceph_deploy.cli][INFO ] subcommand : prepare
>> [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO ] quiet : False
>> [ceph_deploy.cli][INFO ] cd_conf : > instance at 0x7f1d54a4a7a0>
>> [ceph_deploy.cli][INFO ] cluster : ceph
>> [ceph_deploy.cli][INFO ] fs_type : xfs
>> [ceph_deploy.cli][INFO ] func : 
>> [ceph_deploy.cli][INFO ] ceph_conf : None
>> [ceph_deploy.cli][INFO ] default_release : False
>> [ceph_deploy.cli][INFO ] zap_disk : False
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
>> [kvsrv02][DEBUG ] connection detected need for sudo
>> [kvsrv02][DEBUG ] connected to host: kvsrv02
>> [kvsrv02][DEBUG ] detect platform information from remote host
>> [kvsrv02][DEBUG ] detect machine type
>> [kvsrv02][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
>> [ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
>> [kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
>> activate False
>> [kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
>> --fs-type xfs -- /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-allows-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-wants-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-needs-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=fsid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=osd_journal_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdr
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 
>> on /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: 

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
Hi Loic

You are correct – it is partx – sorry for the confusion

ansible.stderr:partx: specified range <1:0> does not make sense
ansible.stderr:partx: /dev/sdg: error adding partition 2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2


Regards
Paul

On 16/12/2015, 09:36, "Loic Dachary" 
<l...@dachary.org> wrote:

Hi Paul,

On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote:
When installing Hammer on RHEL7.1 we regularly got the message that partprobe 
failed to inform the kernel. We are using the ceph-disk command from ansible to 
prepare the disks. The partprobe failure seems harmless and our OSDs always 
activated successfully.

Do you have a copy of those errors by any chance. ceph-disk hammer on RHEL 
should use partx, not partprobe.

If the Infernalis version of ceph-disk is going to trap an error from partprobe 
then we will be unable to prepare our OSDs and this becomes a bug.

Agreed.

Cheers

Regards
Paul
On 16/12/2015, 06:17, "ceph-users on behalf of Matt Taylor" 
<ceph-users-boun...@lists.ceph.com on 
behalf of mtay...@mty.net.au> wrote:
Hi all,

After recently upgrading to CentOS 7.2 and installing a new Ceph cluster
using Infernalis v9.2.0, I have noticed that disks are failing to prepare.

I have observed the same behaviour over multiple Ceph servers when
preparing disks. All the servers are identical.

Disks are zapping fine, however when running 'ceph-deploy disk
prepare', we're encountering the following error:

[ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
kvsrv02:/dev/sdr
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : 
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : 
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
[kvsrv02][DEBUG ] connection detected need for sudo
[kvsrv02][DEBUG ] connected to host: kvsrv02
[kvsrv02][DEBUG ] detect platform information from remote host
[kvsrv02][DEBUG ] detect machine type
[kvsrv02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
[kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
activate False
[kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
--fs-type xfs -- /dev/sdr
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_

Re: [ceph-users] Flapping OSDs, Large meta directories in OSDs

2015-12-01 Thread HEWLETT, Paul (Paul)
I believe that ‘filestore xattr use omap’ is no longer used in Ceph – can 
anybody confirm this?
I could not find any usage in the Ceph source code except that the value is set 
in some of the test software…
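
One quick way to check this on a running cluster (a rough sketch, assuming the admin 
socket is available and osd.0 is a valid id) is to ask an OSD whether it still 
recognises the option, and to grep a source checkout:

  ceph daemon osd.0 config show | grep filestore_xattr
  git grep filestore_xattr_use_omap      # in a ceph source checkout

If 'config show' no longer lists it, the value in ceph.conf is presumably just 
ignored.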

Paul


From: ceph-users on behalf of Tom Christensen
Date: Monday, 30 November 2015 at 23:20
To: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Flapping OSDs, Large meta directories in OSDs

What counts as ancient?  Concurrent with our hammer upgrade we went from 
3.16->3.19 on ubuntu 14.04.  We are looking to revert to the 3.16 kernel we'd 
been running because we're also seeing an intermittent (it's happened twice in 2 
weeks) massive load spike that completely hangs the osd node (we're talking 
about load averages that hit 20k+ before the box becomes completely 
unresponsive).  We saw a similar behavior on a 3.13 kernel, which we resolved by 
moving to the 3.16 kernel we had before.  I'll try to catch one with debug_ms=1 
and see whether we're hitting a similar hang.

To your comment about omap, we do have filestore xattr use omap = true in our 
conf... which we believe was placed there by ceph-deploy (which we used to 
deploy this cluster).  We are on xfs, but we do take tons of RBD snapshots.  If 
either of these use cases causes a lot of osdmap growth, then we may just be 
exceeding the limit on the number of rbd snapshots ceph can handle (we take 
about 4-5000/day, 1 per RBD in the cluster).

An interesting note, we had an OSD flap earlier this morning, and when it did, 
immediately after it came back I checked its meta directory size with du -sh, 
this returned immediately, and showed a size of 107GB.  The fact that it 
returned immediately indicated to me that something had just recently read 
through that whole directory and it was all cached in the FS cache.  Normally a 
du -sh on the meta directory takes a good 5 minutes to return.  Anyway, since 
it dropped this morning its meta directory size continues to shrink and is down 
to 93GB.  So it feels like something happens that makes the OSD read all of its 
historical maps, which results in the OSD hanging because there are a ton of them, 
and then it wakes up and realizes it can delete a bunch of them...
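
A rough way to watch this (a sketch; osd.12 and the default filestore paths are just 
examples) is to compare the map range the OSD reports against what is actually on 
disk:

  ceph daemon osd.12 status      # reports oldest_map and newest_map
  du -sh /var/lib/ceph/osd/ceph-12/current/meta
  find /var/lib/ceph/osd/ceph-12/current/meta -name 'osdmap*' | wc -l

If newest_map - oldest_map is far more than a few hundred, the OSD is holding on to 
old maps rather than trimming them.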

On Mon, Nov 30, 2015 at 2:11 PM, Dan van der Ster wrote:

The trick with debugging heartbeat problems is to grep back through the log to 
find the last thing the affected thread was doing, e.g. is 0x7f5affe72700 stuck 
in messaging, writing to the disk, reading through the omap, etc..

I agree this doesn't look to be network related, but if you want to rule it out 
you should use debug_ms=1.

Last week we upgraded a 1200 osd cluster from firefly to 0.94.5 and similarly 
started getting slow requests. To make a long story short, our issue turned out 
to be sendmsg blocking (very rarely), probably due to an ancient el6 kernel 
(these osd servers had ~800 days' uptime). The signature of this was 900s of 
slow requests, then an ms log showing "initiating reconnect". Until we got the 
kernel upgraded everywhere, we used a workaround of ms tcp read timeout = 60.
So, check your kernels, and upgrade if they're ancient. Latest el6 kernels work 
for us.

Otherwise, those huge osd leveldb's don't look right. (Unless you're using tons 
and tons of omap...) And it kinda reminds me of the other problem we hit after 
the hammer upgrade, namely the return of the ever growing mon leveldb issue. 
The solution was to recreate the mons one by one. Perhaps you've hit something 
similar with the OSDs. debug_osd=10 might be good enough to see what the osd is 
doing, maybe you need debug_filestore=10 also. If that doesn't show the 
problem, bump those up to 20.
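
For reference, the debug levels can be bumped on a single OSD without a restart, 
something like the following (osd.123 is just an example id):

  ceph tell osd.123 injectargs '--debug_ms 1'
  ceph tell osd.123 injectargs '--debug_osd 10 --debug_filestore 10'
  # and turn them back down again once you have caught the event
  ceph tell osd.123 injectargs '--debug_ms 0 --debug_osd 0 --debug_filestore 0'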

Good luck,

Dan

On 30 Nov 2015 20:56, "Tom Christensen" wrote:
>
> We recently upgraded to 0.94.3 from firefly and now for the last week have 
> had intermittent slow requests and flapping OSDs.  We have been unable to 
> nail down the cause, but it's feeling like it may be related to our osdmaps 
> not getting deleted properly.  Most of our osds are now storing over 100GB of 
> data in the meta directory, almost all of that is historical osd maps going 
> back over 7 days old.
>
> We did do a small cluster change (We added 35 OSDs to a 1445 OSD cluster), 
> the rebalance took about 36 hours, and it completed 10 days ago.  Since that 
> time the cluster has been HEALTH_OK and all pgs have been active+clean except 
> for when we have an OSD flap.
>
> When the OSDs flap they do not crash and restart, they just go unresponsive 
> for 1-3 minutes, and then come back alive all on their own.  They get marked 
> down by peers, and cause some peering and then they just come back rejoin the 
> 

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread HEWLETT, Paul (Paul)
Flushing a GPT partition table using dd does not work, as the table is 
duplicated at the end of the disk as well.

Use the 'sgdisk -Z' command.
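
For example (a sketch -- double-check the device name before running it):

  sgdisk --zap-all /dev/sdf    # same as -Z: destroys both the primary GPT and the
                               # backup copy at the end of the disk

A dd over the first few MB of the disk only clears the primary table; the tools can 
still find the backup GPT at the end.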

Paul

From: ceph-users on behalf of Mykola
Date: Thursday, 19 November 2015 at 18:43
To: German Anders
Cc: ceph-users
Subject: Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

I believe the error message says that there is no space left on the device for 
the second partition to be created. Perhaps try to flush the GPT with good old dd.

Sent from Outlook Mail for 
Windows 10 phone


From: German Anders
Sent: Thursday, November 19, 2015 7:25 PM
To: Mykola Dvornik
Cc: ceph-users
Subject: Re: ceph osd prepare cmd on infernalis 9.2.0

I've already tried that, with no luck at all


On Thursday, 19 November 2015, Mykola Dvornik wrote:
'Could not create partition 2 from 10485761 to 10485760'.

Perhaps try to zap the disks first?

On 19 November 2015 at 16:22, German Anders wrote:
Hi cephers,
I had some issues while running the prepare osd command:
ceph version: infernalis 9.2.0
disk: /dev/sdf (745.2G)
  /dev/sdf1 740.2G
  /dev/sdf2 5G

# parted /dev/sdf
GNU Parted 2.3
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA INTEL SSDSC2BB80 (scsi)
Disk /dev/sdf: 800GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End SizeFile system  Name  Flags
 2  1049kB  5369MB  5368MB   ceph journal
 1  5370MB  800GB   795GB   btrfsceph data

cibn05:


$ ceph-deploy osd prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/local/bin/ceph-deploy osd 
prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  disk  : [('cibn05', 
'/dev/sdf', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   : 

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : btrfs
[ceph_deploy.cli][INFO  ]  func  : 
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cibn05:/dev/sdf:
[cibn05][DEBUG ] connection detected need for sudo
[cibn05][DEBUG ] connected to host: cibn05
[cibn05][DEBUG ] detect platform information from remote host
[cibn05][DEBUG ] detect machine type
[cibn05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cibn05
[cibn05][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cibn05][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host cibn05 disk /dev/sdf journal None 
activate False
[cibn05][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph 
--fs-type btrfs -- /dev/sdf
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is 
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is 
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is 
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf1 uuid path is 
/sys/dev/block/8:81/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf2 uuid path is 
/sys/dev/block/8:82/dm/uuid
[cibn05][WARNIN] 

Re: [ceph-users] maximum object size

2015-09-09 Thread HEWLETT, Paul (Paul)
By setting the parameter osd_max_write_size to 2047…
This normally defaults to 90 (MB).

Setting it to 2048 exposes a bug in Ceph where signed overflow occurs...
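
For the record, this is just the OSD option, set in ceph.conf on the OSD hosts (the 
value is in MB; 2047 is what we use, since anything >= 2048 trips the overflow 
mentioned above):

  [osd]
  osd max write size = 2047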

Part of the problem is my expectations. Ilya pointed out that one can use
libradosstriper to stripe a large object over many OSDs. I expected this
to happen automatically for any object > osd_max_write_size (=90MB) but it
does not. Instead one has to set special attributes to trigger striping.

Additionally interaction with erasure coding is unclear - apparently the
error is reached when the total file size exceeds the limit - if EC is
enabled then maybe a better solution would be to test the size of the
chunk written to the OSD which will be only part of the total file size.
Or do I have that wrong?

If EC is being used then would the individual chunks after splitting the
file then be erasure coded ? I.e if we decide to split a large file into 5
striped chunks does ceph then EC the individual chunks?

Striping is not really documented…

Paul

On 08/09/2015 17:53, "Somnath Roy" <somnath@sandisk.com> wrote:

>I think the limit is 90 MB from OSD side, isn't it ?
>If so, how are you able to write object till 1.99 GB ?
>Am I missing anything ?
>
>Thanks & Regards
>Somnath
>
>-Original Message-
>From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>HEWLETT, Paul (Paul)
>Sent: Tuesday, September 08, 2015 8:55 AM
>To: ceph-users@lists.ceph.com
>Subject: [ceph-users] maximum object size
>
>Hi All
>
>We have recently encountered a problem on Hammer (0.94.2) whereby we
>cannot write objects > 2GB in size to the rados backend.
>(NB not RadosGW, CephFS or RBD)
>
>I found the following issue
>https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados
>which seems to address this but no progress reported.
>
>What are the implications of writing such large objects to RADOS? What
>impact is expected on the XFS backend particularly regarding the size and
>location of the journal?
>
>Any prospect of progressing the issue reported in the enclosed link?
>
>Interestingly I could not find anywhere in the ceph documentation that
>describes the 2GB limitation. The implication of most of the website docs
>is that there is no limit on objects stored in Ceph. The only hint is
>that osd_max_write_size is a 32 bit signed integer.
>
>If we use erasure coding will this reduce the impact? I.e. 4+1 EC will
>only write 500MB to each OSD and then this value will be tested against
>the chunk size instead of the total file size?
>
>The relevant code in Ceph is:
>
>src/FileJournal.cc:
>
>  needed_space = ((int64_t)g_conf->osd_max_write_size) << 20;
>  needed_space += (2 * sizeof(entry_header_t)) + get_top();
>  if (header.max_size - header.start < needed_space) {
>derr << "FileJournal::create: OSD journal is not large enough to hold
>"
><< "osd_max_write_size bytes!" << dendl;
>ret = -ENOSPC;
>goto free_buf;
>  }
>
>src/osd/OSD.cc:
>
>// too big?
>if (cct->_conf->osd_max_write_size &&
>m->get_data_len() > cct->_conf->osd_max_write_size << 20) {
>// journal can't hold commit!
> derr << "handle_op msg data len " << m->get_data_len()
> << " > osd_max_write_size " << (cct->_conf->osd_max_write_size << 20)
> << " on " << *m << dendl;
>service.reply_op_error(op, -OSD_WRITETOOBIG);
>return;
>  }
>
>Interestingly the code in OSD.cc looks like a bug - the max_write value
>should be cast to an int64_t before shifting left 20 bits (which is done
>correctly in FileJournal.cc). Otherwise overflow may occur and negative
>values generated.
>
>
>Any comments welcome - any help appreciated.
>
>Regards
>Paul
>
>
>___
>ceph-users mailing list
>ceph-users@lists.ceph.com
>http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>PLEASE NOTE: The information contained in this electronic mail message is
>intended only for the use of the designated recipient(s) named above. If
>the reader of this message is not the intended recipient, you are hereby
>notified that you have received this message in error and that any
>review, dissemination, distribution, or copying of this message is
>strictly prohibited. If you have received this communication in error,
>please notify the sender by telephone or e-mail (as shown above)
>immediately and destroy any and all copies of this message in your
>possession (whether hard copies or electronically stored copies).
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] jemalloc and transparent hugepage

2015-09-09 Thread HEWLETT, Paul (Paul)
Hi Jan

If I can suggest that you look at:

http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases


where LinkedIn ended up disabling some of the new kernel features to
prevent memory thrashing.
Search for Transparent Huge Pages..

RHEL7 has these now disabled by default - LinkedIn are using GraphDB which
is a log-structured system.
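
For what it's worth, checking and changing the setting at runtime is just (a sketch; 
this does not persist across reboots -- that needs a boot script or a tuned profile):

  cat /sys/kernel/mm/transparent_hugepage/enabled
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
  echo never > /sys/kernel/mm/transparent_hugepage/defrag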

Paul

On 09/09/2015 10:54, "ceph-devel-ow...@vger.kernel.org on behalf of Jan
Schermer" 
wrote:

>I looked at THP before. It comes enabled on RHEL6 and on our KVM hosts it
>merges a lot (~300GB hugepages on a 400GB KVM footprint).
>I am probably going to disable it and see if it introduces any problems
>for me - the most important gain here is better processor memory lookup
>table (cache) utilization where it considerably lowers the number of
>entries. Not sure how it affects different workloads - HPC guys should
>have a good idea? I can only evaluate the effect on OSDs and KVM, but the
>problem is that going over the cache limit even by a tiny bit can have
>huge impact - theoretically...
>
>This issue sounds strange, though. THP should kick in and defrag/remerge
>the pages that are part-empty. Maybe it's just not aggressive enough?
>Does the "free" memory show as used (part of RSS of the process using the
>page)? I guess not because there might be more processes with memory in
>the same hugepage.
>
>This might actually partially explain the pagecache problem I mentioned
>there about a week ago (slow OSD startup), maybe kswapd is what has to do
>the work and defrag the pages when memory pressure is high!
>
>I'll try to test it somehow, hopefully then there will be cake.
>
>Jan
>
>> On 09 Sep 2015, at 07:08, Alexandre DERUMIER 
>>wrote:
>> 
>> They are a tracker here
>> 
>> https://github.com/jemalloc/jemalloc/issues/243
>> "Improve interaction with transparent huge pages"
>> 
>> 
>> 
>> - Mail original -
>> De: "aderumier" 
>> À: "Sage Weil" 
>> Cc: "ceph-devel" , "ceph-users"
>>
>> Envoyé: Mercredi 9 Septembre 2015 06:37:22
>> Objet: Re: [ceph-users] jemalloc and transparent hugepage
>> 
 Is this something we can set with mallctl[1] at startup?
>> 
>> I don't think it's possible.
>> 
>> TP hugepage are managed by kernel, not jemalloc.
>> 
>> (but a simple "echo never >
>>/sys/kernel/mm/transparent_hugepage/enabled" in init script is enough)
>> 
>> - Mail original -
>> De: "Sage Weil" 
>> À: "aderumier" 
>> Cc: "Mark Nelson" , "ceph-devel"
>>, "ceph-users" ,
>>"Somnath Roy" 
>> Envoyé: Mercredi 9 Septembre 2015 04:07:59
>> Objet: Re: [ceph-users] jemalloc and transparent hugepage
>> 
>> On Wed, 9 Sep 2015, Alexandre DERUMIER wrote:
> Have you noticed any performance difference with tp=never?
>>> 
>>> No difference. 
>>> 
>>> I think hugepage could speedup big memory sets like 100-200GB, but for
>>> 1-2GB they are no noticable difference.
>> 
>> Is this something we can set with mallctl[1] at startup?
>> 
>> sage 
>> 
>> [1] 
>>http://www.canonware.com/download/jemalloc/jemalloc-latest/doc/jemalloc.html
>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> - Mail original -
>>> De: "Mark Nelson" 
>>> À: "aderumier" , "ceph-devel"
>>>, "ceph-users" 
>>> Cc: "Somnath Roy" 
>>> Envoyé: Mercredi 9 Septembre 2015 01:49:35
>>> Objet: Re: [ceph-users] jemalloc and transparent hugepage
>>> 
>>> Excellent investigation Alexandre! Have you noticed any performance
>>> difference with tp=never?
>>> 
>>> Mark 
>>> 
>>> On 09/08/2015 06:33 PM, Alexandre DERUMIER wrote:
 I have done small benchmark with tcmalloc and jemalloc, transparent
hugepage=always|never.
 
 for tcmalloc, they are no difference.
 but for jemalloc, the difference is huge (around 25% lower with
tp=never). 
 
 jemmaloc 4.6.0+tp=never vs tcmalloc use 10% more RSS memory
 
 jemmaloc 4.0+tp=never almost use same RSS memory than tcmalloc !
 
 
 I don't have monitored memory usage in recovery, but I think it
should help too.
 
 
 
 
 tcmalloc 2.1 tp=always
 ---
 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
 
 root 67746 120 1.0 1531220 671152 ? Ssl 01:18 0:43 /usr/bin/ceph-osd
--cluster=ceph -i 0 -f
 root 67764 144 1.0 1570256 711232 ? Ssl 01:18 0:51 /usr/bin/ceph-osd
--cluster=ceph -i 1 -f
 
 root 68363 220 0.9 1522292 655888 ? Ssl 01:19 0:46 /usr/bin/ceph-osd
--cluster=ceph -i 0 -f
 root 68381 261 1.0 1563396 702500 ? Ssl 01:19 0:55 /usr/bin/ceph-osd
--cluster=ceph 

[ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
Hi All

We have recently encountered a problem on Hammer (0.94.2) whereby we
cannot write objects > 2GB in size to the rados backend.
(NB not RadosGW, CephFS or RBD)

I found the following issue
https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados
which seems to address this but no progress reported.

What are the implications of writing such large objects to RADOS? What
impact is expected on the XFS backend particularly regarding the size and
location of the journal?

Any prospect of progressing the issue reported in the enclosed link?

Interestingly I could not find anywhere in the ceph documentation that
describes the 2GB limitation. The implication of most of the website docs
is that there is no limit on objects stored in Ceph. The only hint is that
osd_max_write_size is a 32 bit signed integer.

If we use erasure coding will this reduce the impact? I.e. 4+1 EC will
only write 500MB to each OSD and then this value will be tested against
the chunk size instead of the total file size?

The relevant code in Ceph is:

src/FileJournal.cc:

  needed_space = ((int64_t)g_conf->osd_max_write_size) << 20;
  needed_space += (2 * sizeof(entry_header_t)) + get_top();
  if (header.max_size - header.start < needed_space) {
    derr << "FileJournal::create: OSD journal is not large enough to hold "
         << "osd_max_write_size bytes!" << dendl;
    ret = -ENOSPC;
    goto free_buf;
  }

src/osd/OSD.cc:

  // too big?
  if (cct->_conf->osd_max_write_size &&
      m->get_data_len() > cct->_conf->osd_max_write_size << 20) {
    // journal can't hold commit!
    derr << "handle_op msg data len " << m->get_data_len()
         << " > osd_max_write_size " << (cct->_conf->osd_max_write_size << 20)
         << " on " << *m << dendl;
    service.reply_op_error(op, -OSD_WRITETOOBIG);
    return;
  }

Interestingly, the code in OSD.cc looks like a bug - the osd_max_write_size value
should be cast to an int64_t before shifting left 20 bits (as is done
correctly in FileJournal.cc). Otherwise overflow may occur and negative
values may be generated.


Any comments welcome - any help appreciated.

Regards
Paul


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
I found the description in the source code. Apparently one sets attributes
on the object to force striping.

Regards
Paul

On 08/09/2015 17:39, "Ilya Dryomov" <idryo...@gmail.com> wrote:

>On Tue, Sep 8, 2015 at 7:30 PM, HEWLETT, Paul (Paul)
><paul.hewl...@alcatel-lucent.com> wrote:
>> Hi Ilya
>>
>> Thanks for that - libradosstriper is what we need - any notes available
>>on
>> usage?
>
>No, I'm afraid not.  include/radosstriper/libradosstriper.h and
>libradosstriper.hpp should be enough to get you started - there is
>a fair amount of detail in the comments.
>
>Thanks,
>
>Ilya

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

2015-06-30 Thread HEWLETT, Paul (Paul)
We are using Ceph (Hammer) on Centos7 and RHEL7.1 successfully.

One secret is to ensure that the disk is cleaned prior to running the ceph-disk
command. Because GPT tables are used, one must use the 'sgdisk -Z' command
to purge the disk of all partition tables. We usually issue this command
in the RedHat kickstart file.

The second trick is not to use the mount command explicitly (as shown in
your post below).

The 'ceph-disk prepare' command should automatically start the OSD.
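
So the whole flow on our nodes is roughly (a sketch -- /dev/sdc and the fsid are 
placeholders):

  sgdisk -Z /dev/sdc          # run from the kickstart file in our case
  ceph-disk prepare --cluster ceph --cluster-uuid <fsid> --fs-type xfs /dev/sdc
  ceph-disk list              # the partitions should show up as prepared/active
  ceph osd tree               # the new OSD should appear up and in

with no manual mkdir/mount/ceph-osd --mkfs steps in between.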

Paul

On 29/06/2015 20:19, Bruce McFarland bruce.mcfarl...@taec.toshiba.com
wrote:

Do these issues occur in Centos 7 also?

 -Original Message-
 From: Bruce McFarland
 Sent: Monday, June 29, 2015 12:06 PM
 To: 'Loic Dachary'; 'ceph-users@lists.ceph.com'
 Subject: RE: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with
ver
 0.94.2
 
 Using the manual method of creating an OSD on RHEL 7.1 with Ceph 94.2
 turns up an issue with the ondisk fsid of the journal device. From a
quick
 web search I've found reference to this exact same issue from earlier
this
 year. Is there a version of Ceph that works with RHEL 7.1???
 
 [root@ceph0 ceph]# ceph-disk-prepare --cluster ceph --cluster-uuid
 b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs /dev/sdc /dev/sdb1
 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
 same device as the osd data The operation has completed successfully.
 partx: /dev/sdc: error adding partition 1
 meta-data=/dev/sdc1  isize=2048   agcount=4,
agsize=244188597
 blks
  =   sectsz=512   attr=2, projid32bit=1
  =   crc=0finobt=0
 data =   bsize=4096   blocks=976754385,
imaxpct=5
  =   sunit=0  swidth=0 blks
 naming   =version 2  bsize=4096   ascii-ci=0 ftype=0
 log  =internal log   bsize=4096   blocks=476930, version=2
  =   sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none   extsz=4096   blocks=0, rtextents=0
 The operation has completed successfully.
 partx: /dev/sdc: error adding partition 1
 [root@ceph0 ceph]# mkdir /var/lib/ceph/osd/ceph-0
 [root@ceph0 ceph]# ll /var/lib/ceph/osd/ total 0 drwxr-xr-x. 2 root
root 6
 Jun 29 12:01 ceph-0
 [root@ceph0 ceph]# mount -t xfs /dev/sdc1 /var/lib/ceph/osd/ceph-0/
 [root@ceph0 ceph]# mount
 proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) sysfs on /sys
type
 sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
 devtmpfs on /dev type devtmpfs
 (rw,nosuid,seclabel,size=57648336k,nr_inodes=14412084,mode=755)
 securityfs on /sys/kernel/security type securityfs
 (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs
 (rw,nosuid,nodev,seclabel) devpts on /dev/pts type devpts
 (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
 tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
 tmpfs on /sys/fs/cgroup type tmpfs
 (rw,nosuid,nodev,noexec,seclabel,mode=755)
 cgroup on /sys/fs/cgroup/systemd type cgroup
 
(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/sys
 temd-cgroups-agent,name=systemd)
 pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
 cgroup on /sys/fs/cgroup/cpuset type cgroup
 (rw,nosuid,nodev,noexec,relatime,cpuset)
 cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
 (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
 cgroup on /sys/fs/cgroup/memory type cgroup
 (rw,nosuid,nodev,noexec,relatime,memory)
 cgroup on /sys/fs/cgroup/devices type cgroup
 (rw,nosuid,nodev,noexec,relatime,devices)
 cgroup on /sys/fs/cgroup/freezer type cgroup
 (rw,nosuid,nodev,noexec,relatime,freezer)
 cgroup on /sys/fs/cgroup/net_cls type cgroup
 (rw,nosuid,nodev,noexec,relatime,net_cls)
 cgroup on /sys/fs/cgroup/blkio type cgroup
 (rw,nosuid,nodev,noexec,relatime,blkio)
 cgroup on /sys/fs/cgroup/perf_event type cgroup
 (rw,nosuid,nodev,noexec,relatime,perf_event)
 cgroup on /sys/fs/cgroup/hugetlb type cgroup
 (rw,nosuid,nodev,noexec,relatime,hugetlb)
 configfs on /sys/kernel/config type configfs (rw,relatime)
 /dev/mapper/rhel_ceph0-root on / type xfs
 (rw,relatime,seclabel,attr2,inode64,noquota)
 selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
 systemd-1 on /proc/sys/fs/binfmt_misc type autofs
 (rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
 debugfs on /sys/kernel/debug type debugfs (rw,relatime) mqueue on
 /dev/mqueue type mqueue (rw,relatime,seclabel) hugetlbfs on
 /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
 /dev/mapper/rhel_ceph0-home on /home type xfs
 (rw,relatime,seclabel,attr2,inode64,noquota)
 /dev/sda2 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
 binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
 fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
 /dev/sdc1 on /var/lib/ceph/osd/ceph-0 type xfs
 (rw,relatime,seclabel,attr2,inode64,noquota)
 [root@ceph0 ceph]# ceph-osd -i=0 --mkfs
 2015-06-29 

Re: [ceph-users] Ceph on RHEL7.0

2015-06-02 Thread HEWLETT, Paul (Paul)
Hi Ken

Are these packages compatible with Giant or Hammer?

We are currently running Hammer - can we use the RBD kernel module from
RH7.1 and is the elrepo version of cephFS compatible with Hammer?

Regards
Paul

On 01/06/2015 17:57, Ken Dreyer kdre...@redhat.com wrote:

For the sake of providing more clarity regarding the Ceph kernel module
situation on RHEL 7.0, I've removed all the files at
https://github.com/ceph/ceph-kmod-rpm and updated the README there.

The summary is that if you want to use Ceph's RBD kernel module on RHEL
7, you should use RHEL 7.1 or later. And if you want to use the kernel
CephFS client on RHEL 7, you should use the latest upstream kernel
packages from ELRepo.

Hope that clarifies things from a RHEL 7 kernel perspective.

- Ken


On 05/28/2015 09:16 PM, Luke Kao wrote:
 Hi Bruce,
 The RHEL7.0 kernel has many issues in filesystem submodules, and most of
 them are fixed only in RHEL7.1.
 So you should consider going to RHEL7.1 directly and upgrading to at least
 kernel 3.10.0-229.1.2.
 
 
 BR,
 Luke
 
 
 *From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of
 Bruce McFarland [bruce.mcfarl...@taec.toshiba.com]
 *Sent:* Friday, May 29, 2015 5:13 AM
 *To:* ceph-users@lists.ceph.com
 *Subject:* [ceph-users] Ceph on RHEL7.0
 
 We're planning on moving from Centos6.5 to RHEL7.0 for Ceph storage and
 monitor nodes. Are there any known issues using RHEL7.0?
 
 Thanks
 
 
 
 
 This electronic message contains information from Mycom which may be
 privileged or confidential. The information is intended to be for the
 use of the individual(s) or entity named above. If you are not the
 intended recipient, be aware that any disclosure, copying, distribution
 or any other use of the contents of this information is prohibited. If
 you have received this electronic message in error, please notify us by
 post or telephone (to the numbers or correspondence address above) or by
 email (at the email address above) immediately.
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] systemd unit files and multiple daemons

2015-04-23 Thread HEWLETT, Paul (Paul)** CTR **
What about running multiple clusters on the same host?

There is a separate mail thread about being able to run clusters with different 
conf files on the same host.
Will the new systemd service scripts cope with this?

Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Gregory 
Farnum [g...@gregs42.com]
Sent: 22 April 2015 23:26
To: Ken Dreyer
Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] systemd unit files and multiple daemons

On Wed, Apr 22, 2015 at 2:57 PM, Ken Dreyer kdre...@redhat.com wrote:
 I could really use some eyes on the systemd change proposed here:
 http://tracker.ceph.com/issues/11344

 Specifically, on bullet #4 there, should we have a single
 ceph-mon.service (implying that users should only run one monitor
 daemon per server) or if we should support multiple ceph-mon@ services
 (implying that users will need to specify additional information when
 starting the service(s)). The version in our tree is ceph-mon@. James'
 work for Ubuntu Vivid is only ceph-mon [2]. Same thing for ceph-mds vs
 ceph-mds@.

 I'd prefer to keep Ubuntu downstream the same as Ceph upstream.

 What do we want to do for this?

 How common is it to run multiple monitor daemons or mds daemons on a
 single host?

For a real deployment, you shouldn't be running multiple monitors on a
single node in the general case. I'm not sure if we want to prohibit
it by policy, but I'd be okay with the idea.
For testing purposes (in ceph-qa-suite or using vstart as a developer)
it's pretty common though, and we probably don't want to have to
rewrite all our tests to change that. I'm not sure that vstart ever
uses the regular init system, but teuthology/ceph-qa-suite obviously
do!

For MDSes, it's probably appropriate/correct to support multiple
daemons on the same host. This can be either a fault tolerance thing,
or just a way of better using multiple cores if you're living on the
(very dangerous) edge.
-Greg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy : systemd unit files not deployed to a centos7 nodes

2015-04-17 Thread HEWLETT, Paul (Paul)** CTR **
I would be very keen for this to be implemented in Hammer and am willing to 
help test it...


Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Ken Dreyer 
[kdre...@redhat.com]
Sent: 17 April 2015 14:45
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy : systemd unit files not deployed to a
centos7 nodes

As you've seen, a set of systemd unit files has been committed to git,
but the packages do not yet use them.

There is an open ticket for this task,
http://tracker.ceph.com/issues/11344 . Feel free to add yourself as a
watcher on that if you are interested in the progress.

- Ken

On 04/17/2015 06:22 AM, Alexandre DERUMIER wrote:
 Oh,

 I didn't see that a sysvinit file was also deployed.

 works fine with /etc/init.d/ceph


 - Mail original -
 De: aderumier aderum...@odiso.com
 À: ceph-users ceph-users@lists.ceph.com
 Envoyé: Vendredi 17 Avril 2015 14:11:45
 Objet: [ceph-users] ceph-deploy : systemd unit files not deployed to a
 centos7 nodes

 Hi,

 I'm currently trying to deploy a new ceph test cluster on centos7 (hammer)

 from ceph-deploy (on a debian wheezy),

 and it seems that the systemd unit files are not deployed.

 The ceph git tree does have systemd unit files:
 https://github.com/ceph/ceph/tree/hammer/systemd

 I haven't looked inside the rpm package.


 (This is my first install on centos, so I don't know if it works with
 previous releases)


 I have deployed with:

 ceph-deploy install --release hammer ceph1-{1,2,3}
 ceph-deploy new ceph1-{1,2,3}


 Is it normal ?
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-disk command raises partx error

2015-04-13 Thread HEWLETT, Paul (Paul)** CTR **
Hi Everyone

I am using the ceph-disk command to prepare disks for an OSD.
The command is:

ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid $CLUSTERUUID 
--fs-type xfs /dev/${1}

and this consistently raises the following error on RHEL7.1 and Ceph Hammer viz:

partx: specified range 1:0 does not make sense
partx: /dev/sdb: error adding partition 2
partx: /dev/sdb: error adding partitions 1-2
partx: /dev/sdb: error adding partitions 1-2

I have had similar errors on previous versions of Ceph and RHEL. We have 
decided to stick with Hammer/7.1 and I
am interested if anybody has any comment on this.

The error seems to do no harm so is probably cosmetic, but on principle I would at 
least like to know whether I can safely ignore it.
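
For what it is worth, the way we convince ourselves it is harmless is to check that 
the partitions and the OSD actually turn up afterwards (a sketch, using /dev/sdb from 
the output above):

  partx --show /dev/sdb       # or: lsblk /dev/sdb
  ceph-disk list
  ceph osd tree

Both partitions are listed and the OSD activates, so the messages do appear to be 
cosmetic.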

Many thanks.

Regards

Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cascading Failure of OSDs

2015-04-09 Thread HEWLETT, Paul (Paul)** CTR **

I use the following:

cat /sys/class/net/em1/statistics/rx_bytes

for the em1 interface

all other stats are available

Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Carl-Johan 
Schenström [carl-johan.schenst...@gu.se]
Sent: 09 April 2015 07:34
To: Francois Lafont; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cascading Failure of OSDs

Francois Lafont wrote:

 Just in case it could be useful, I have noticed the -s option (on my
 Ubuntu) that offers output which is probably easier to parse:

 # column -t is just to make it nice for human eyes.
 ifconfig -s | column -t

Since ifconfig is deprecated, one should use iproute2 instead.

ip -s link show p2p1 | awk '/(RX|TX):/{getline; print $3;}'

However, the sysfs interface is probably a better alternative. See 
https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net-statistics
 and https://www.kernel.org/doc/Documentation/ABI/README.

--
Carl-Johan Schenström
Driftansvarig / System Administrator
Språkbanken & Svensk nationell datatjänst /
The Swedish Language Bank & Swedish National Data Service
Göteborgs universitet / University of Gothenburg
carl-johan.schenst...@gu.se / +46 709 116769
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Giant 0.87 update on CentOs 7

2015-03-23 Thread HEWLETT, Paul (Paul)** CTR **
Hi Steffen

We have recently encountered the errors described below. Initially one must set 
check_obsoletes=1 in the yum priorities.conf file.
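
(For reference, the relevant part of /etc/yum/pluginconf.d/priorities.conf then looks 
something like this -- the enabled line is the stock default:)

  [main]
  enabled = 1
  check_obsoletes = 1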

However subsequent yum updates cause problems.

The solution we use is to disable the epel repo by default:

  yum-config-manager --disable epel

and explicitly install libunwind:

 yum -y --enablerepo=epel install libunwind

Then updates occur cleanly...

yum -y update

Additionally we specify eu.ceph.com in the ceph.repo file. This all works with 
RHEL7.

If one does not do this then the incorrect librbd1, librados2 rpms are 
installed and this triggers a dependency install of the (incorrect) firefly 
rpms.

To recover, remove librbd1 and librados2:

  yum remove librbd1 librados2

HTH

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353



From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Steffen W 
Sørensen [ste...@me.com]
Sent: 22 March 2015 22:22
To: Ceph Users
Subject: Re: [ceph-users] Giant 0.87 update on CentOs 7

:) Now disabling epel, which seems to be the confusing repo above, just renders me with 
timeouts from http://ceph.com/… is ceph.com down 
currently?
http://eu.ceph.com answers currently… probably the trans-atlantic line or my 
provider :/



[root@n1 ~]# yum -y --disablerepo epel --disablerepo ceph-source update
Loaded plugins: fastestmirror, priorities
http://ceph.com/rpm-giant/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on 
http://ceph.com/rpm-giant/el7/x86_64/repodata/repomd.xml: (28, 'Connection 
timed out after 30403 milliseconds')
Trying other mirror.
http://ceph.com/rpm-giant/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on 
http://ceph.com/rpm-giant/el7/x86_64/repodata/repomd.xml: (28, 'Connection 
timed out after 30042 milliseconds')
Trying other mirror.
…

/Steffen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread HEWLETT, Paul (Paul)** CTR **
Hi Jesus

EPEL is required for the libunwind library.

If libunwind is copied to the ceph repo then EPEL would not be required.

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353



From: Jesus Chavez (jeschave) [jesch...@cisco.com]
Sent: 10 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **
Cc: Wido den Hollander; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

So EPEL is not required?


Jesus Chavez
SYSTEMS ENGINEER-C.SALES

jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255

CCIE - 44433

On Mar 9, 2015, at 8:58 AM, HEWLETT, Paul (Paul)** CTR ** 
paul.hewl...@alcatel-lucent.com wrote:

Hi Wido

It seems that your move coincided with yet another change in the EPEL repo.

For anyone who is interested, I fixed this by:

1. ensuring that check_obsoletes=1 is in 
/etc/yum/pluginconf.d/priorities.conf
2. Install libunwind explicitly:

   yum install libunwind

3. Install ceph with epel disabled:

  yum install --disablerepo=epel ceph

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

On 03/09/2015 02:27 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
When did you make the change?


Yesterday

It worked on Friday albeit with these extra lines in ceph.repo:

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
gpgcheck=0

which I removed when I discovered this no longer existed.


Ah, I think I know. The rsync script probably didn't clean up those old
directories, since they don't exist here either:
http://ceph.com/rpms/rhel7/noarch/

That caused some confusion since this machine is a fresh sync from 
ceph.com

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

On 03/09/2015 12:54 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
Hi Wido

Has something broken with this move? The following has worked for me repeatedly 
over the last 2 months:


It shouldn't have broken anything, but you never know.

The machine rsyncs the data from ceph.com directly. The 
directories you
are pointing at do exist and contain data.

Anybody else noticing something?

This a.m. I tried to install ceph using the following repo file:

[root@citrus ~]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/rhel7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

and ceph now fails to install:

msg: Error: Package: 1:ceph-0.87.1-0.el7.x86_64 (ceph)
  Requires: python-ceph = 1:0.87.1-0.el7
  Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
  python-ceph = 1:0.86-0.el7
  Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
  python-ceph = 1:0.87-0.el7
  Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
  python-ceph = 1:0.87.1-0.el7
Error: Package: 1:ceph-common-0.87.1-0.el7.x86_64 (ceph)
  Requires: python-ceph = 1:0.87.1-0.el7
  Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
  python-ceph = 1:0.86-0.el7
  Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
  python-ceph = 1:0.87-0.el7
  Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
  python-ceph = 1:0.87.1-0.el7

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: ceph-users 
[ceph-users-boun...@lists.ceph.com] 
on behalf of Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 11:15
To: ceph-users
Subject: [ceph-users] New eu.ceph.com mirror machine

Hi,

Since the recent reports of rsync

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
Hi Wido

If I disable the epel repo then the error changes:

[root@ninja ~]# yum install --disablerepo=epel ceph
Loaded plugins: langpacks, priorities, product-id, subscription-manager
10 packages excluded due to repository priority protections
Resolving Dependencies
.
--> Finished Dependency Resolution
Error: Package: gperftools-libs-2.1-1.el7.x86_64 (ceph)
   Requires: libunwind.so.8()(64bit)

So this is related to the EPEL repo breaking ceph again. I have
check_obsoletes=1 set, as recommended on this list a couple of weeks ago.

Is there any chance you could copy the libunwind repo to eu.ceph.com?
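
For anyone hitting the same missing-dependency error, a quick sanity check of where the library actually lives (a sketch using plain yum commands; nothing here is specific to this cluster):

yum provides 'libunwind.so.8()(64bit)'            # should show libunwind coming from EPEL (or a mirror of it)
yum --disablerepo=epel list available libunwind   # if this finds nothing, only EPEL carries it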

Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

On 03/09/2015 02:27 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
 When did you make the change?


Yesterday

 It worked on Friday albeit with these extra lines in ceph.repo:

 [Ceph-el7]
 name=Ceph-el7
 baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
 enabled=1
 gpgcheck=0

 which I removed when I discovered this no longer existed.


Ah, I think I know. The rsync script probably didn't clean up those old
directories, since they don't exist here either:
http://ceph.com/rpms/rhel7/noarch/

That caused some confusion since this machine is a fresh sync from ceph.com

 Regards
 Paul Hewlett
 Senior Systems Engineer
 Velocix, Cambridge
 Alcatel-Lucent
 t: +44 1223 435893 m: +44 7985327353



 
 From: Wido den Hollander [w...@42on.com]
 Sent: 09 March 2015 12:15
 To: HEWLETT, Paul (Paul)** CTR **; ceph-users
 Subject: Re: [ceph-users] New eu.ceph.com mirror machine

 On 03/09/2015 12:54 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
 Hi Wido

 Has something broken with this move? The following has worked for me 
 repeatedly over the last 2 months:


 It shouldn't have broken anything, but you never know.

 The machine rsyncs the data from ceph.com directly. The directories you
 are pointing at do exist and contain data.

 Anybody else noticing something?

 This a.m. I tried to install ceph using the following repo file:

 [root@citrus ~]# cat /etc/yum.repos.d/ceph.repo
 [ceph]
 name=Ceph packages for $basearch
 baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm-giant/rhel7/noarch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-source]
 name=Ceph source packages
 baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
 enabled=0
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 and ceph now fails to install:

 msg: Error: Package: 1:ceph-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7
 Error: Package: 1:ceph-common-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7

 Regards
 Paul Hewlett
 Senior Systems Engineer
 Velocix, Cambridge
 Alcatel-Lucent
 t: +44 1223 435893 m: +44 7985327353



 
 From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Wido den 
 Hollander [w...@42on.com]
 Sent: 09 March 2015 11:15
 To: ceph-users
 Subject: [ceph-users] New eu.ceph.com mirror machine

 Hi,

 Since the recent reports of rsync failing on eu.ceph.com I moved
 eu.ceph.com to a new machine.

 It went from physical to a KVM VM backed by RBD, so it's now running on
 Ceph.

 URLs or rsync paths haven't changed, it's still eu.ceph.com and
 available over IPv4 and IPv6.

 This Virtual Machine is dedicated for running eu.ceph.com, so hopefully
 rsync won't fail anymore.

 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
Hi Wido

Has something broken with this move? The following has worked for me repeatedly 
over the last 2 months:

This a.m. I tried to install ceph using the following repo file:

[root@citrus ~]# cat /etc/yum.repos.d/ceph.repo 
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/rhel7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

and ceph now fails to install:

msg: Error: Package: 1:ceph-0.87.1-0.el7.x86_64 (ceph)
   Requires: python-ceph = 1:0.87.1-0.el7
   Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
   python-ceph = 1:0.86-0.el7
   Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
   python-ceph = 1:0.87-0.el7
   Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
   python-ceph = 1:0.87.1-0.el7
Error: Package: 1:ceph-common-0.87.1-0.el7.x86_64 (ceph)
   Requires: python-ceph = 1:0.87.1-0.el7
   Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
   python-ceph = 1:0.86-0.el7
   Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
   python-ceph = 1:0.87-0.el7
   Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
   python-ceph = 1:0.87.1-0.el7

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Wido den 
Hollander [w...@42on.com]
Sent: 09 March 2015 11:15
To: ceph-users
Subject: [ceph-users] New eu.ceph.com mirror machine

Hi,

Since the recent reports of rsync failing on eu.ceph.com I moved
eu.ceph.com to a new machine.

It went from physical to a KVM VM backed by RBD, so it's now running on
Ceph.

URLs or rsync paths haven't changed, it's still eu.ceph.com and
available over IPv4 and IPv6.

This Virtual Machine is dedicated for running eu.ceph.com, so hopefully
rsync won't fail anymore.

--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
When did you make the change?

It worked on Friday albeit with these extra lines in ceph.repo:

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
gpgcheck=0

which I removed when I discovered this no longer existed.

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

On 03/09/2015 12:54 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
 Hi Wido

 Has something broken with this move? The following has worked for me 
 repeatedly over the last 2 months:


It shouldn't have broken anything, but you never know.

The machine rsyncs the data from ceph.com directly. The directories you
are pointing at do exist and contain data.

Anybody else noticing something?

 This a.m. I tried to install ceph using the following repo file:

 [root@citrus ~]# cat /etc/yum.repos.d/ceph.repo
 [ceph]
 name=Ceph packages for $basearch
 baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm-giant/rhel7/noarch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-source]
 name=Ceph source packages
 baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
 enabled=0
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 and ceph now fails to install:

 msg: Error: Package: 1:ceph-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7
 Error: Package: 1:ceph-common-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7

 Regards
 Paul Hewlett
 Senior Systems Engineer
 Velocix, Cambridge
 Alcatel-Lucent
 t: +44 1223 435893 m: +44 7985327353



 
 From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Wido den 
 Hollander [w...@42on.com]
 Sent: 09 March 2015 11:15
 To: ceph-users
 Subject: [ceph-users] New eu.ceph.com mirror machine

 Hi,

 Since the recent reports of rsync failing on eu.ceph.com I moved
 eu.ceph.com to a new machine.

 It went from physical to a KVM VM backed by RBD, so it's now running on
 Ceph.

 URLs or rsync paths haven't changed, it's still eu.ceph.com and
 available over IPv4 and IPv6.

 This Virtual Machine is dedicated for running eu.ceph.com, so hopefully
 rsync won't fail anymore.

 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
Hi Wido

It seems that your move coincided with yet another change in the EPEL repo.

For anyone who is interested, I fixed this with the following three steps (a consolidated sketch follows the list):

 1. Ensure that check_obsoletes=1 is set in
/etc/yum/pluginconf.d/priorities.conf
 2. Install libunwind explicitly:

yum install libunwind

 3. Install ceph with epel disabled:

   yum install --disablerepo=epel ceph
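
Putting those three steps together, roughly (a sketch only; it assumes yum-plugin-priorities is installed and the EPEL repo id is literally 'epel', as in the .repo files posted earlier in this thread):

# 1. make the priorities plugin re-check obsoletes so EPEL cannot shadow the ceph.com packages
grep -q '^check_obsoletes' /etc/yum/pluginconf.d/priorities.conf || \
    echo 'check_obsoletes=1' >> /etc/yum/pluginconf.d/priorities.conf

# 2. pull in libunwind (on this setup it is only available from EPEL)
yum install -y libunwind

# 3. install ceph with EPEL out of the picture
yum install -y --disablerepo=epel ceph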

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine

On 03/09/2015 02:27 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
 When did you make the change?


Yesterday

 It worked on Friday albeit with these extra lines in ceph.repo:

 [Ceph-el7]
 name=Ceph-el7
 baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
 enabled=1
 gpgcheck=0

 which I removed when I discovered this no longer existed.


Ah, I think I know. The rsync script probably didn't clean up those old
directories, since they don't exist here either:
http://ceph.com/rpms/rhel7/noarch/

That caused some confusion since this machine is a fresh sync from ceph.com

 Regards
 Paul Hewlett
 Senior Systems Engineer
 Velocix, Cambridge
 Alcatel-Lucent
 t: +44 1223 435893 m: +44 7985327353



 
 From: Wido den Hollander [w...@42on.com]
 Sent: 09 March 2015 12:15
 To: HEWLETT, Paul (Paul)** CTR **; ceph-users
 Subject: Re: [ceph-users] New eu.ceph.com mirror machine

 On 03/09/2015 12:54 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
 Hi Wido

 Has something broken with this move? The following has worked for me 
 repeatedly over the last 2 months:


 It shouldn't have broken anything, but you never know.

 The machine rsyncs the data from ceph.com directly. The directories you
 are pointing at do exist and contain data.

 Anybody else noticing something?

 This a.m. I tried to install ceph using the following repo file:

 [root@citrus ~]# cat /etc/yum.repos.d/ceph.repo
 [ceph]
 name=Ceph packages for $basearch
 baseurl=http://ceph.com/rpm-giant/rhel7/$basearch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm-giant/rhel7/noarch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-source]
 name=Ceph source packages
 baseurl=http://ceph.com/rpm-giant/rhel7/SRPMS
 enabled=0
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 and ceph now fails to install:

 msg: Error: Package: 1:ceph-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7
 Error: Package: 1:ceph-common-0.87.1-0.el7.x86_64 (ceph)
Requires: python-ceph = 1:0.87.1-0.el7
Available: 1:python-ceph-0.86-0.el7.x86_64 (ceph)
python-ceph = 1:0.86-0.el7
Available: 1:python-ceph-0.87-0.el7.x86_64 (ceph)
python-ceph = 1:0.87-0.el7
Available: 1:python-ceph-0.87.1-0.el7.x86_64 (ceph)
python-ceph = 1:0.87.1-0.el7

 Regards
 Paul Hewlett
 Senior Systems Engineer
 Velocix, Cambridge
 Alcatel-Lucent
 t: +44 1223 435893 m: +44 7985327353



 
 From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Wido den 
 Hollander [w...@42on.com]
 Sent: 09 March 2015 11:15
 To: ceph-users
 Subject: [ceph-users] New eu.ceph.com mirror machine

 Hi,

 Since the recent reports of rsync failing on eu.ceph.com I moved
 eu.ceph.com to a new machine.

 It went from physical to a KVM VM backed by RBD, so it's now running on
 Ceph.

 URLs or rsync paths haven't changed, it's still eu.ceph.com and
 available over IPv4 and IPv6.

 This Virtual Machine is dedicated for running eu.ceph.com, so hopefully
 rsync won't fail anymore.

 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Wido den Hollander
 42on B.V.
 Ceph trainer and consultant

 Phone: +31 (0)20 700 9902
 Skype: contact42on



--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com

Re: [ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
Thanks for that, Travis. Much appreciated.

Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Travis Rhoden [trho...@gmail.com]
Sent: 16 February 2015 15:35
To: HEWLETT, Paul (Paul)** CTR **
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Installation failure

Hi Paul,

Looking a bit closer, I do believe it is the same issue.  It looks
like python-rbd in EPEL (and others like python-rados) were updated in
EPEL on January 21st, 2015.  This update included some changes to how
dependencies were handled between EPEL and RHEL for Ceph.  See
http://pkgs.fedoraproject.org/cgit/ceph.git/commit/?h=epel7

Fedora and EPEL both split out the older python-ceph package into
smaller subsets (python-{rados,cephfs,rbd}), but these changes are not
upstream yet (from the ceph.com hosted packages).  So if repos enable
both ceph.com and EPEL, the EPEL packages will override the ceph.com
packages because the RPMs have obsoletes: python-ceph in them, even
though the EPEL packages are older.
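
A quick way to see which side is winning is to ask yum directly (a sketch; the exact output depends on the mirrors in use):

yum --showduplicates list python-rbd python-rados   # lists every candidate version and the repo it comes from
yum info python-rbd                                 # the Repo field shows whether EPEL or the ceph.com repo is selected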

It's a bit of a problematic transition period until the upstream
packaging splits in the same way.  I do believe that using
check_obsoletes=1 in /etc/yum/pluginconf.d/priorities.conf will take
care of the problem for you.  However, you may also need to set
priority=1 in the ceph .repo files that point to rpm-giant.

That's my best advice of something to try for now.
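
Concretely, that advice amounts to something like this (a sketch; the repo section names match the ceph.repo quoted elsewhere in this thread):

# /etc/yum/pluginconf.d/priorities.conf
[main]
enabled = 1
check_obsoletes = 1

# and in ceph.repo, rank the ceph.com repos above EPEL, e.g.
#   [ceph] and [ceph-noarch]:  priority=1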

Thanks,

 - Travis

On Mon, Feb 16, 2015 at 10:16 AM, HEWLETT, Paul (Paul)** CTR **
paul.hewl...@alcatel-lucent.com wrote:
 Hi Travis

 Thanks for the reply.

 My only doubt is that this was all working until this morning. Has anything 
 changed in the Ceph repository?

 I tried commenting out various repos but this did not work.
If I delete the epel repos then ceph installation fails because tcmalloc and
leveldb are not found.

 My repos are:

 [root@octopus ~]# ls -l /etc/yum.repos.d/
 total 40
 -rw-r--r-- 1 root root   700 Feb 16 12:08 ceph.repo
 -rw-r--r-- 1 root root   957 Nov 25 16:23 epel.repo
 -rw-r--r-- 1 root root  1056 Nov 25 16:23 epel-testing.repo
 -rw-r--r-- 1 root root 26533 Feb 16 11:55 redhat.repo

 and the contents of ceph.repo:

 [ceph]
 name=Ceph packages for $basearch
 baseurl=http://ceph.com/rpm-giant/el7/$basearch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-noarch]
 name=Ceph noarch packages
 baseurl=http://ceph.com/rpm-giant/el7/noarch
 enabled=1
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [ceph-source]
 name=Ceph source packages
 baseurl=http://ceph.com/rpm-giant/el7/SRPMS
 enabled=0
 priority=2
 gpgcheck=1
 type=rpm-md
 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 [Ceph-el7]
 name=Ceph-el7
 baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
 enabled=1
 priority=2
 gpgcheck=0

 [root@octopus ~]# cat /etc/yum.repos.d/epel.repo
 [epel]
 name=Extra Packages for Enterprise Linux 7 - $basearch
 #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
 failovermethod=priority
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

 [epel-debuginfo]
 name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
 #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
 failovermethod=priority
 enabled=0
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 gpgcheck=1

 [epel-source]
 name=Extra Packages for Enterprise Linux 7 - $basearch - Source
 #baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
 failovermethod=priority
 enabled=0
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 gpgcheck=1
 [root@octopus ~]# cat /etc/yum.repos.d/epel-testing.repo
 [epel-testing]
 name=Extra Packages for Enterprise Linux 7 - Testing - $basearch
 #baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-epel7&arch=$basearch
 failovermethod=priority
 enabled=0
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

 [epel-testing-debuginfo]
 name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Debug
 #baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-debug-epel7&arch=$basearch
 failovermethod=priority
 enabled=0
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 gpgcheck=1

 [epel-testing-source]
 name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Source
 #baseurl=http://download.fedoraproject.org/pub/epel/testing/7/SRPMS
 mirrorlist=https

[ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
Hi all

I have been installing ceph giant quite happily for the past 3 months on 
various systems and use
an ansible recipe to do so. The OS is RHEL7.

This morning on one of my test systems installation fails with:

[root@octopus ~]# yum install ceph ceph-deploy
Loaded plugins: langpacks, priorities, product-id, subscription-manager
Ceph-el7                                                    |  951 B  00:00:00
ceph                                                        |  951 B  00:00:00
ceph-noarch                                                 |  951 B  00:00:00
14 packages excluded due to repository priority protections
Package ceph-deploy-1.5.21-0.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 1:0.87-0.el7.centos will be installed
--> Processing Dependency: librbd1 = 1:0.87-0.el7.centos for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: ceph-common = 1:0.87-0.el7.centos for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libcephfs1 = 1:0.87-0.el7.centos for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: python-ceph = 1:0.87-0.el7.centos for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: librados2 = 1:0.87-0.el7.centos for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: python-flask for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: python-requests for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: hdparm for package: 1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libleveldb.so.1()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libcephfs.so.1()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: librados.so.2()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libboost_system-mt.so.1.53.0()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Processing Dependency: libboost_thread-mt.so.1.53.0()(64bit) for package: 
1:ceph-0.87-0.el7.centos.x86_64
--> Running transaction check
---> Package boost-system.x86_64 0:1.53.0-18.el7 will be installed
---> Package boost-thread.x86_64 0:1.53.0-18.el7 will be installed
---> Package ceph-common.x86_64 1:0.87-0.el7.centos will be installed
--> Processing Dependency: redhat-lsb-core for package: 
1:ceph-common-0.87-0.el7.centos.x86_64
---> Package gperftools-libs.x86_64 0:2.1-1.el7 will be installed
--> Processing Dependency: libunwind.so.8()(64bit) for package: 
gperftools-libs-2.1-1.el7.x86_64
---> Package hdparm.x86_64 0:9.43-5.el7 will be installed
---> Package leveldb.x86_64 0:1.12.0-5.el7 will be installed
---> Package libcephfs1.x86_64 1:0.87-0.el7.centos will be installed
---> Package librados2.x86_64 1:0.87-0.el7.centos will be installed
---> Package librbd1.x86_64 1:0.87-0.el7.centos will be installed
---> Package python-ceph-compat.x86_64 1:0.80.7-0.4.el7 will be installed
--> Processing Dependency: python-rbd = 1:0.80.7 for package: 
1:python-ceph-compat-0.80.7-0.4.el7.x86_64
--> Processing Dependency: python-rados = 1:0.80.7 for package: 
1:python-ceph-compat-0.80.7-0.4.el7.x86_64
--> Processing Dependency: python-cephfs = 1:0.80.7 for package: 
1:python-ceph-compat-0.80.7-0.4.el7.x86_64
---> Package python-flask.noarch 1:0.10.1-4.el7 will be installed
--> Processing Dependency: python-werkzeug for package: 
1:python-flask-0.10.1-4.el7.noarch
--> Processing Dependency: python-jinja2 for package: 
1:python-flask-0.10.1-4.el7.noarch
--> Processing Dependency: python-itsdangerous for package: 
1:python-flask-0.10.1-4.el7.noarch
---> Package python-requests.noarch 0:1.1.0-8.el7 will be installed
--> Processing Dependency: python-urllib3 for package: 
python-requests-1.1.0-8.el7.noarch
--> Running transaction check
---> Package libunwind.x86_64 0:1.1-3.el7 will be installed
---> Package python-cephfs.x86_64 1:0.80.7-0.4.el7 will be installed
--> Processing Dependency: libcephfs1 = 1:0.80.7 for package: 
1:python-cephfs-0.80.7-0.4.el7.x86_64
---> Package python-itsdangerous.noarch 0:0.23-2.el7 will be installed
---> Package python-jinja2.noarch 0:2.7.2-2.el7 will be installed
--> Processing Dependency: python-babel = 0.8 for package: 
python-jinja2-2.7.2-2.el7.noarch
--> Processing Dependency: python-markupsafe for package: 
python-jinja2-2.7.2-2.el7.noarch
---> Package python-rados.x86_64 1:0.80.7-0.4.el7 will be installed
--> Processing Dependency: librados2 = 1:0.80.7 for package: 
1:python-rados-0.80.7-0.4.el7.x86_64
---> Package python-rbd.x86_64 1:0.80.7-0.4.el7 will be installed
--> Processing Dependency: librbd1 = 

Re: [ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
Hi Travis

Thanks for the reply.

My only doubt is that this was all working until this morning. Has anything 
changed in the Ceph repository?

I tried commenting out various repos but this did not work.
If I delete the epel repos then ceph installation fails because tcmalloc and
leveldb are not found.

My repos are:

[root@octopus ~]# ls -l /etc/yum.repos.d/
total 40
-rw-r--r-- 1 root root   700 Feb 16 12:08 ceph.repo
-rw-r--r-- 1 root root   957 Nov 25 16:23 epel.repo
-rw-r--r-- 1 root root  1056 Nov 25 16:23 epel-testing.repo
-rw-r--r-- 1 root root 26533 Feb 16 11:55 redhat.repo

and the contents of ceph.repo:

[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-giant/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-giant/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-giant/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
priority=2
gpgcheck=0

[root@octopus ~]# cat /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
[root@octopus ~]# cat /etc/yum.repos.d/epel-testing.repo 
[epel-testing]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-testing-debuginfo]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-debug-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-testing-source]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-source-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353




From: Travis Rhoden [trho...@gmail.com]
Sent: 16 February 2015 15:00
To: HEWLETT, Paul (Paul)** CTR **
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Installation failure

Hi Paul,

Would you mind sharing/posting the contents of your .repo files for
ceph, ceph-el7, and ceph-noarch repos?

I see that python-rbd is getting pulled in from EPEL, which I don't
think is what you want.

My guess is that you need the fix documented in
http://tracker.ceph.com/issues/10476, though that was specifically
addressing Fedora downstream packaging of Ceph competing with current
upstream packaging hosted on ceph.com repos.  This may be something
similar with EPEL.

 - Travis

On Mon, Feb 16, 2015 at 7:19 AM, HEWLETT, Paul (Paul)** CTR **
paul.hewl...@alcatel-lucent.com wrote:
 Hi all

 I have been installing ceph giant quite happily for the past 3 months on
 various systems and use
 an ansible recipe to do so. The OS is RHEL7.

 This morning on one of my test systems installation fails with:

 [root@octopus ~]# yum install ceph ceph-deploy
 Loaded plugins: langpacks, priorities, product-id, subscription-manager
Ceph-el7                                                    |  951 B  00:00:00
ceph                                                        |  951 B  00:00:00
ceph-noarch                                                 |  951 B  00:00:00
 14 packages excluded due to repository priority protections
 Package ceph-deploy-1.5.21-0.noarch already installed and latest version
 Resolving Dependencies
--> Running