[ceph-users] subscribe to ceph-user list

2018-01-15 Thread German Anders

Re: [ceph-users] Recommendations for I/O (blk-mq) scheduler for HDDs and SSDs?

2017-12-11 Thread German Anders
Hi Patrick, Some thoughts about blk-mq: *(virtio-blk)* - it's activated by default on kernels >= 3.13 for the virtio-blk driver - *The blk-mq feature is currently implemented, and enabled by default, in the following drivers: virtio-blk, mtip32xx, nvme, and rbd*. (
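A quick sysfs check shows which scheduler each device is actually using (a sketch; device names are examples and the schedulers offered depend on the kernel build):
  # The scheduler in brackets is the active one
  $ cat /sys/block/sdb/queue/scheduler
  [mq-deadline] kyber bfq none
  # Fast SSD/NVMe devices often do best with 'none'; HDDs usually keep mq-deadline
  $ echo none | sudo tee /sys/block/sdb/queue/scheduler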

Re: [ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-11 Thread German Anders
d] > > KeyError: (15L,) > > > > > > > > German > > 2017-12-11 12:08 GMT-03:00 John Spray <jsp...@redhat.com>: > >> > >> On Mon, Dec 4, 2017 at 6:37 PM, German Anders <gand...@despegar.com> > >> wrote: > >> > Hi,

Re: [ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-11 Thread German Anders
status(cmd) File "/usr/lib/ceph/mgr/status/module.py", line 219, in handle_fs_status stats = pool_stats[pool_id] KeyError: (15L,) *German* 2017-12-11 12:08 GMT-03:00 John Spray <jsp...@redhat.com>: > On Mon, Dec 4, 2017 at 6:37 PM, German Anders <gand...@despegar.com&

[ceph-users] luminous 12.2.2 traceback (ceph fs status)

2017-12-04 Thread German Anders
Hi, I just upgraded a ceph cluster from version 12.2.0 (rc) to 12.2.2 (stable), and I'm getting a traceback while trying to run: *# ceph fs status* Error EINVAL: Traceback (most recent call last): File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command return
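While the status module is broken, roughly the same information can be pulled without it (a workaround sketch, not a fix for the module's KeyError):
  $ ceph mds stat                 # MDS up/active summary
  $ ceph fs get <fsname>          # MDS map and pools for one filesystem
  $ ceph df                       # per-pool usage, including the cephfs pools
  # Restarting the active mgr can also clear stale pool stats
  $ sudo systemctl restart ceph-mgr@<id>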

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-12-04 Thread German Anders
Could anyone run the tests and share some results? Thanks in advance, Best, *German* 2017-11-30 14:25 GMT-03:00 German Anders <gand...@despegar.com>: > That's correct, IPoIB for the backend (already configured the irq > affinity), and 10GbE on the frontend. I would love

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-30 Thread German Anders
test.out 2>/dev/null I'm looking for tps, qps and the 95th percentile. Could anyone with an all-NVMe cluster run the test and share the results? I would really appreciate the help :) Thanks in advance, Best, *German * 2017-11-29 19:14 GMT-03:00 Zoltan Arnold Nagy <zol...@linux.vnet.ibm.com>: >
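For anyone wanting to run the same test, a hedged sketch using sysbench 1.0 syntax (host, credentials and table sizes are placeholders):
  $ sysbench oltp_read_write --mysql-host=db01 --mysql-user=sbtest \
      --mysql-password=secret --tables=10 --table-size=1000000 prepare
  $ sysbench oltp_read_write --mysql-host=db01 --mysql-user=sbtest \
      --mysql-password=secret --tables=10 --table-size=1000000 \
      --threads=16 --time=300 --report-interval=10 run
  # The run summary reports tps, qps and the 95th percentile latency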

Re: [ceph-users] Transparent huge pages

2017-11-29 Thread German Anders
Is it possible that on Ubuntu, with kernel version 4.12.14 at least, it comes by default with the parameter set to [madvise]? *German* 2017-11-28 12:07 GMT-03:00 Nigel Williams : > Given that memory is a key resource for Ceph, this advice about switching >
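Easy to verify on any box (the sysfs path is standard; how you persist the setting varies by distro):
  $ cat /sys/kernel/mm/transparent_hugepage/enabled
  always [madvise] never
  # Force it off at runtime
  $ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
  # Or persist it on the kernel command line: transparent_hugepage=never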

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
> > > -Original Message- > From: German Anders [mailto:gand...@despegar.com] > Sent: dinsdag 28 november 2017 19:34 > To: Luis Periquito > Cc: ceph-users > Subject: Re: [ceph-users] ceph all-nvme mysql performance tuning > > Thanks a lot Luis, I agree with you regarding th

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
running? Have you tuned > them to have a bigger cache? > > These are from what I've learned using filestore - I've yet to run full > tests on bluestore - but they should still apply... > > On Mon, Nov 27, 2017 at 5:10 PM, German Anders <gand...@despegar.com> >

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
as soon as get got those tests running. Thanks a lot, Best, *German* 2017-11-27 12:16 GMT-03:00 Nick Fisk <n...@fisk.me.uk>: > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* 27 November 2017 14:44 > *To:* Maged Mokhtar <

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
ecially all > NVMe, environment. > > > > David Byte > > Sr. Technology Strategist > > *SCE Enterprise Linux* > > *SCE Enterprise Storage* > > Alliances and SUSE Embedded > > db...@suse.com > > 918.528.4422 > > > > *From: *ceph-users <c

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
tests and param changes to see if we get better :) Thanks, Best, *German* 2017-11-27 11:36 GMT-03:00 Maged Mokhtar <mmokh...@petasan.org>: > On 2017-11-27 15:02, German Anders wrote: > > Hi All, > > I've a performance question, we recently install a brand new Ceph cluster

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
this server also with an old ceph cluster. We are going to upgrade the version and see if the tests get better. Thanks *German* 2017-11-27 10:16 GMT-03:00 Wido den Hollander <w...@42on.com>: > > > Op 27 november 2017 om 14:02 schreef German Anders <gand...@despegar.com >

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
"debug_ms = 0/0" > as well since per-message log gathering takes a large hit on small IO > performance. > > On Mon, Nov 27, 2017 at 8:02 AM, German Anders <gand...@despegar.com> > wrote: > >> Hi All, >> >> I've a performance question, we recently i

[ceph-users] ceph all-nvme mysql performance tuning

2017-11-27 Thread German Anders
Hi All, I have a performance question. We recently installed a brand-new Ceph cluster with all-NVMe disks, using ceph version 12.2.0 with bluestore configured. The back-end of the cluster is using a bonded IPoIB link (active/passive), and for the front-end we are using a bonding config with active/active

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread German Anders
Prometheus has a nice data exporter built in Go that you can then feed into Grafana or any other tool: https://github.com/digitalocean/ceph_exporter *German* 2017-10-02 8:34 GMT-03:00 Osama Hasebou : > Hi Everyone, > > Is there a guide/tutorial about how to setup Ceph
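A minimal Prometheus scrape-config sketch for that exporter (assuming it listens on its documented default port 9128; host name is an example):
  scrape_configs:
    - job_name: 'ceph'
      static_configs:
        - targets: ['cephmon01:9128']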

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread German Anders
Try to work with the tunables: $ *ceph osd crush show-tunables* { "choose_local_tries": 0, "choose_local_fallback_tries": 0, "choose_total_tries": 50, "chooseleaf_descend_once": 1, "chooseleaf_vary_r": 1, "chooseleaf_stable": 0, "straw_calc_version": 1,
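If the client kernel is too old for the cluster's tunables profile, dropping to an older profile is the usual fix (a sketch; changing profiles triggers data movement):
  $ ceph osd crush show-tunables
  # Fall back to a profile that older kernel clients understand
  $ ceph osd crush tunables hammer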

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
ation > hook = /path/to/customized-ceph-crush-location" (see > https://github.com/ceph/ceph/blob/master/src/ceph-crush-location.in). > > Cheers, > Maxime > > On Wed, 13 Sep 2017 at 18:35 German Anders <gand...@despegar.com> wrote: > >> *# ceph health detail* &
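The usual ways to pin an OSD's crush position across reboots, as a ceph.conf sketch (paths and bucket names are examples):
  [osd]
  # Static location per host
  crush location = root=default rack=rack1 host=cpn01
  # Or compute it at startup with a custom script
  crush location hook = /usr/local/bin/ceph-crush-location
  # Or stop OSDs from relocating themselves entirely
  osd crush update on start = false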

Re: [ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
*# ceph health detail* HEALTH_OK *# ceph osd stat* 48 osds: 48 up, 48 in *# ceph pg stat* 3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650 GB avail *German* 2017-09-13 13:24 GMT-03:00 dE <de.tec...@gmail.com>: > On 09/13/2017 09:08 PM, German Anders wrot

[ceph-users] after reboot node appear outside the root root tree

2017-09-13 Thread German Anders
Hi cephers, I'm having an issue with a newly created cluster 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I reboot one of the nodes and it comes back, it appears outside of the root type in the tree: root@cpm01:~# ceph osd tree ID CLASS WEIGHT TYPE NAME

Re: [ceph-users] Upgrade target for 0.82

2017-06-27 Thread German Anders
Thanks a lot Wido Best, *German* 2017-06-27 16:08 GMT-03:00 Wido den Hollander <w...@42on.com>: > > > Op 27 juni 2017 om 20:56 schreef German Anders <gand...@despegar.com>: > > > > > > Hi Cephers, > > > >I want to upgrade an existing c

[ceph-users] Upgrade target for 0.82

2017-06-27 Thread German Anders
Hi Cephers, I want to upgrade an existing cluster (version 0.82), and I would like to know if there's any recommended upgrade path and also the recommended target version. Thanks in advance, *German*

Re: [ceph-users] ceph-deploy to a particular version

2017-05-02 Thread German Anders
I think you can do *$ ceph-deploy install --release --repo-url http://download.ceph.com/. ..*; you can also change the --release flag to --dev or --testing and specify the version. I've done it with the release and dev flags and it works great :) hope it helps best,
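Spelled out with concrete values (hypothetical hosts; any valid release name works in place of luminous):
  $ ceph-deploy install --release luminous node01 node02 node03
  # Development branch instead of a named release
  $ ceph-deploy install --dev master node01
  # Or point at a specific repository
  $ ceph-deploy install --repo-url http://download.ceph.com/debian-luminous node01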

Re: [ceph-users] Ceph UPDATE (not upgrade)

2017-04-26 Thread German Anders
to disable the packages in the > repo files and make it so that you have to include the packages to update > the ceph packages. > > On Wed, Apr 26, 2017 at 1:12 PM German Anders <gand...@despegar.com> > wrote: > >> Hi Massimiiano, >> >> I think you best go wi
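One way to make a plain yum update safe, sketched for RHEL/CentOS (package globs are examples):
  # One-off update that leaves ceph packages alone
  $ sudo yum update -y --exclude=ceph*
  # Or permanently, in /etc/yum.conf:
  exclude=ceph*
  # Ceph then only updates when explicitly requested
  $ sudo yum update ceph* --disableexcludes=all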

Re: [ceph-users] Ceph UPDATE (not upgrade)

2017-04-26 Thread German Anders
that and get things fine :) hope it helps, Best, *German Anders* 2017-04-26 11:21 GMT-03:00 Massimiliano Cuttini <m...@phoenixweb.it>: > On a Ceph Monitor/OSD server can i run just: > > *yum update -y* > > in order to upgrade system and packages or

Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-25 Thread German Anders

[ceph-users] News on RDMA on future releases

2016-12-07 Thread German Anders
Hi all, I want to know if there's any news on future releases regarding RDMA, whether it's going to be integrated or not, since RDMA should increase IOPS performance a lot, especially at small block sizes. Thanks in advance, Best, *German*

Re: [ceph-users] A VM with 6 volumes - hangs

2016-11-14 Thread German Anders
equests to OSD" > > But Ceph status is in OK state. > > Thanks > Swami > > On Mon, Nov 14, 2016 at 8:27 PM, German Anders <gand...@despegar.com> > wrote: > >> Could you share some info about the ceph cluster? logs? did you see >> anything di

Re: [ceph-users] A VM with 6 volumes - hangs

2016-11-14 Thread German Anders
Could you share some info about the ceph cluster? Logs? Did you see anything different from normal operation in the logs? Best, *German* 2016-11-14 11:46 GMT-03:00 M Ranga Swami Reddy : > +ceph-devel > > On Fri, Nov 11, 2016 at 5:09 PM, M Ranga Swami Reddy

Re: [ceph-users] ceph on two data centers far away

2016-10-25 Thread German Anders
RBD snapshots and ship those snapshots to a remote location (separate > cluster or separate pool). Similar to RBD mirroring, in this situation > your client writes are not subject to that latency. > > On Thu, Oct 20, 2016 at 1:51 PM, German Anders <gand...@despegar.com> > wr
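The snapshot-shipping approach sketched with export-diff/import-diff (pool, image, snapshot and host names are placeholders):
  # Take a new snapshot and ship only the delta since the previous one
  $ rbd snap create rbd/vm01@snap2
  $ rbd export-diff --from-snap snap1 rbd/vm01@snap2 - | \
      ssh remote-site rbd import-diff - rbd/vm01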

Re: [ceph-users] ceph on two data centers far away

2016-10-20 Thread German Anders
g RDB mirroring. > > 2016-10-20 9:54 GMT-07:00 German Anders <gand...@despegar.com>: > >> from curiosity I wanted to ask you what kind of network topology are you >> trying to use across the cluster? In this type of scenario you really need >> an ultra low late

Re: [ceph-users] ceph on two data centers far away

2016-10-20 Thread German Anders
Out of curiosity, I wanted to ask what kind of network topology you are trying to use across the cluster. In this type of scenario you really need an ultra-low-latency network; how far apart are the sites? Best, *German* 2016-10-18 16:22 GMT-03:00 Sean Redmond : > Maybe

Re: [ceph-users] is the web site down ?

2016-10-12 Thread German Anders
I think that you can check it over here: http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/ *German Anders* Storage Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2016-10-12

Re: [ceph-users] Ceph InfiniBand Cluster - Jewel - Performance

2016-04-07 Thread German Anders
[PGP signature block trimmed] > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Thu, Apr 7, 2016 at 1:43 PM, German Anders <gand...@despegar.com> > wrote: > > Hi Cephers, > &

[ceph-users] Ceph InfiniBand Cluster - Jewel - Performance

2016-04-07 Thread German Anders
Hi Cephers, I've set up a production-environment Ceph cluster with the Jewel release (10.1.0 (96ae8bd25f31862dbd5302f304ebf8bf1166aba6)) consisting of 3 MON Servers and 6 OSD Servers: 3x MON Servers: 2x Intel Xeon E5-2630v3@2.40Ghz 384GB RAM 2x 200G Intel DC3700 in RAID-1 for OS 1x InfiniBand

Re: [ceph-users] Scrubbing a lot

2016-03-30 Thread German Anders
OK, but I have kernel 3.19.0-39-generic, so the new version is supposed to work, right? And I'm still getting issues while trying to map the RBD: $ *sudo rbd --cluster cephIB create e60host01vX --size 100G --pool rbd -c /etc/ceph/cephIB.conf* $ *sudo rbd -p rbd bench-write e60host01vX --io-size
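The 3.19 krbd client only understands the layering feature, so the usual fixes are (a sketch; image names follow the post):
  # Create new images with only layering enabled
  $ rbd create rbd/e60host01vX --size 100G --image-feature layering
  # Or strip the unsupported features from an existing image
  $ rbd feature disable rbd/e60host01vX deep-flatten fast-diff object-map exclusive-lock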

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
; "--image-feature layering" to the "rbd create" command-line (or by updating > your config as documented in the release notes). > > -- > > Jason Dillaman > > > - Original Message - > > > From: "German Anders" <gand...@despegar.com>

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
creating the image. > > > > Originalmeddelande > Från: Samuel Just <sj...@redhat.com> > Datum: 2016-03-29 22:24 (GMT+01:00) > Till: German Anders <gand...@despegar.com> > Kopia: ceph-users <ceph-users@lists.ceph.com> > Rubrik: Re: [ceph-users

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
am > > > > On Tue, Mar 29, 2016 at 1:19 PM, German Anders <gand...@despegar.com> > wrote: > >> I've just upgrade to jewel, and the scrubbing seems to been > corrected... but > >> now I'm not able to map an rbd on a host (before I was able to), > basica

Re: [ceph-users] Scrubbing a lot

2016-03-29 Thread German Anders
osd.4.log", "log_to_stderr": "true", "mds_data": "\/var\/lib\/ceph\/mds\/ceph-4", "mon_cluster_log_file": "default=\/var\/log\/ceph\/ceph.$channel.log cluster=\/var\/log\/ceph\/ceph.log", "m

[ceph-users] Crush Map tunning recommendation and validation

2016-03-23 Thread German Anders
Hi all, I have a question: I'm in the middle of a new ceph cluster deployment and I have 6 OSD servers between two racks, so rack1 would have osdserver1, 3 and 5, and rack2 osdserver2, 4 and 6. I've edited the following crush map and I want to know if it's OK and also if the objects would be stored one on
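To get one replica per rack, the rule has to choose leaves across the rack type; a sketch of the rule plus an offline check (bucket names follow the post, pre-luminous rule syntax):
  rule replicated_racks {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type rack
      step emit
  }
  # Verify the mappings without touching the cluster
  $ crushtool -c crushmap.txt -o crushmap.bin
  $ crushtool --test -i crushmap.bin --rule 1 --num-rep 2 --show-mappings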

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
any tuning is not going to help because the > bottleneck is my disks and CPU. > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Tue, Nov 24, 2015 at 10:26 AM, German Anders wrote: > > Thanks a lot Robert for the expl

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
[PGP signature block trimmed] > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
Yes, I'm wondering if this is my top performance threshold with this kind of setup, although I'll assume that IB perf would be better.. :( *German* 2015-11-24 14:24 GMT-03:00 Mark Nelson <mnel...@redhat.com>: > On 11/24/2015 09:05 AM, German Anders wrote: > >> Thanks a lot for

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
any ports. We've seen this in > lab environments, especially with bonded ethernet. > > Mark > > On 11/24/2015 07:22 AM, German Anders wrote: > >> After doing some more in deep research and tune some parameters I've >> gain a little bit more of performance: >> &

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
l-To-All > network tests are also useful, in that sometimes network issues can crop up > only when there's lots of traffic across many ports. We've seen this in > lab environments, especially with bonded ethernet. > > Mark > > On 11/24/2015 07:22 AM, German Anders wrote: > >

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
rver. > > "iperf -c -P " on the client > > This will give you an idea of how your network is doing. All-To-All > network tests are also useful, in that sometimes network issues can crop up > only when there's lots of traffic across many ports. We've seen this in >
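The full invocation, since the snippet lost its arguments (server name and stream count are examples):
  # On the server
  $ iperf -s
  # On the client: 8 parallel streams for 30 seconds
  $ iperf -c osdserver01 -P 8 -t 30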

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-24 Thread German Anders
> > > > > > German > > > > 2015-11-23 13:32 GMT-03:00 Mark Nelson : > >> > >> Hi German, > >> > >> I don't have exactly the same setup, but on the ceph community cluster I > >> have tests with: > >> > >> 4

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
re you using unconnected mode or connected mode? With connected mode > you can up your MTU to 64K which may help on the network side. > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > > On Mon, Nov 23, 2015 a
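Switching IPoIB to connected mode and raising the MTU, sketched (interface name is an example; persist it in your network config):
  $ echo connected | sudo tee /sys/class/net/ib0/mode
  $ sudo ip link set ib0 mtu 65520
  $ cat /sys/class/net/ib0/mode
  connected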

[ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
Hi all, I want to know if there's any improvement or update regarding ceph 0.94.5 with accelio. I have an already-configured cluster (with no data on it) and I would like to know if there's a way to 'modify' the cluster in order to use accelio. Any info would be really appreciated. Cheers,

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
egory Farnum <gfar...@redhat.com>: > On Mon, Nov 23, 2015 at 10:05 AM, German Anders <gand...@despegar.com> > wrote: > > Hi all, > > > > I want to know if there's any improvement or update regarding ceph 0.94.5 > > with accelio, I've an already configured

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
Got it Robert, it was my mistake: I put post-up instead of pre-up. Now it changed OK; I'll do new tests with this config and let you know. Regards, *German* 2015-11-23 15:36 GMT-03:00 German Anders <gand...@despegar.com>: > Hi Robert, > > Thanks for the response. I was configur

Re: [ceph-users] Ceph 0.94.5 with accelio

2015-11-23 Thread German Anders
on. > > 3) Assuming you are using IPoIB, try some iperf tests to see how your > network is doing. > > Mark > > > On 11/23/2015 10:17 AM, German Anders wrote: > >> Thanks a lot for the quick update Greg. This lead me to ask if there's >> anything out the

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread German Anders
mykola.dvor...@gmail.com> > Date: Thursday, 19 November 2015 at 18:43 > To: German Anders <gand...@despegar.com> > Cc: ceph-users <ceph-users@lists.ceph.com> > Subject: Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0 > > I believe the error messag

[ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
Hi all, I've finished the install of a new ceph cluster with infernalis 9.2.0 release. But I'm getting the following error msg: $ ceph -w cluster 29xx-3xxx-xxx9-xxx7-b8xx health HEALTH_WARN 64 pgs degraded 64 pgs stale 64 pgs stuck degraded

Re: [ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
question is > unsatisfiable. Check what the rule is doing. > -Greg > > > On Friday, November 20, 2015, German Anders <gand...@despegar.com> wrote: > >> Hi all, I've finished the install of a new ceph cluster with infernalis >> 9.2.0 release. But I'm get
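A few standard commands that usually expose an unsatisfiable rule (nothing cluster-specific assumed):
  $ ceph pg dump_stuck stale
  $ ceph osd crush rule dump
  # Test the rule offline against the live map
  $ ceph osd getcrushmap -o crushmap.bin
  $ crushtool --test -i crushmap.bin --rule 0 --num-rep 3 --show-bad-mappings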

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
I've already tried that, with no luck at all. On Thursday, 19 November 2015, Mykola Dvornik <mykola.dvor...@gmail.com> wrote: > *'Could not create partition 2 from 10485761 to 10485760'.* > > Perhaps try to zap the disks first? > > On 19 November 2015 at 16:22, German Anders

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread German Anders
I have a similar problem while trying to run the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted)

[ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
Hi cephers, I had some issues while running the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted)
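When prepare trips over a leftover partition table, wiping the disk completely first usually clears it (destructive; double-check the device name):
  $ sudo ceph-disk zap /dev/sdf
  # Or the heavier hammer, then re-read the partition table
  $ sudo sgdisk --zap-all --clear /dev/sdf
  $ sudo partprobe /dev/sdf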

[ceph-users] Bcache and Ceph Question

2015-11-17 Thread German Anders
Hi all, Is there any way to use bcache in an already configured Ceph cluster? I have both OSDs and journals inside the same OSD daemon, and I want to try bcache in front of the OSD daemon and also move the journal onto the bcache device, so for example I got: /dev/sdc --> SSD disk /dev/sdc1 --> 1st
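bcache can't be slipped underneath a live OSD; the OSD has to be rebuilt on top of a bcache device. A setup sketch with bcache-tools (device names follow the post's example):
  # SSD partition as cache set, spinner as backing device
  $ sudo make-bcache -C /dev/sdc2
  $ sudo make-bcache -B /dev/sdd
  # Attach the backing device to the cache set
  # (UUID from: bcache-super-show /dev/sdc2)
  $ echo <cset-uuid> | sudo tee /sys/block/sdd/bcache/attach
  # Then prepare the OSD on /dev/bcache0, journal on /dev/sdc1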

[ceph-users] Performance output con Ceph IB with fio examples

2015-11-17 Thread German Anders
Hi cephers, Is there anyone out there using Ceph (any version) with an Infiniband FDR topology network (both public and cluster) who could share some performance results? To be more specific, running something like this on an RBD volume mapped to an IB host: # fio --rw=randread --bs=4m --numjobs=4
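The flags cut off by the snippet probably resembled these (a sketch; job count, iodepth and runtime are guesses, not the original values):
  $ sudo rbd map rbd/testvol
  $ fio --rw=randread --bs=4m --numjobs=4 --iodepth=32 --ioengine=libaio \
      --direct=1 --runtime=60 --name=ib-test --filename=/dev/rbd0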

[ceph-users] Question about OSD activate with ceph-deploy

2015-11-13 Thread German Anders
Hi all, I'm having some issues while trying to run the osd activate command with the ceph-deploy tool (1.5.28); the osd prepare command ran fine, but then... osd: sdf1 journal: /dev/sdc1 $ ceph-deploy osd activate cibn01:sdf1:/dev/sdc1 [ceph_deploy.conf][DEBUG ] found configuration file at:

[ceph-users] v0.94.4 Hammer released upgrade

2015-10-20 Thread German Anders
trying to upgrade from hammer 0.94.3 to 0.94.4 I'm getting the following error msg while trying to restart the mon daemons ($ sudo restart ceph-mon-all): 2015-10-20 08:56:37.410321 7f59a8c9d8c0 0 ceph version 0.94.4 (95292699291242794510b39ffde3f4df67898d3a), process ceph-mon, pid 6821

Re: [ceph-users] v0.94.4 Hammer released

2015-10-20 Thread German Anders
eated with data owned by user ceph and will run with > reduced privileges, but upgraded daemons will continue to run as > root. > > > > Udo > > On 20.10.2015 14:59, German Anders wrote: > > trying to upgrade from hammer 0.94.3 to 0.94.4 I'm getting the following >

Re: [ceph-users] v0.94.4 Hammer released

2015-10-20 Thread German Anders
trying to upgrade from hammer 0.94.3 to 0.94.4 I'm getting the following error msg while trying to restart the mon daemons: 2015-10-20 08:56:37.410321 7f59a8c9d8c0 0 ceph version 0.94.4 (95292699291242794510b39ffde3f4df67898d3a), process ceph-mon, pid 6821 2015-10-20 08:56:37.429036 7f59a8c9d8c0

[ceph-users] Error after upgrading to Infernalis

2015-10-16 Thread German Anders
Hi all, I'm trying to upgrade a ceph cluster (prev hammer release 0.94.3) to the latest release of *infernalis* (9.1.0-61-gf2b9f89). So far so good while upgrading the mon servers; all went fine. But when trying to upgrade the OSD servers I got an error while trying to start the osd services
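The usual culprit on this upgrade is ownership: Infernalis daemons run as user ceph instead of root. The documented fix, sketched:
  $ sudo systemctl stop ceph-osd.target
  $ sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph
  $ sudo systemctl start ceph-osd.target
  # Alternative from the release notes: keep running as root for now
  # by adding to ceph.conf:
  #   setuser match path = /var/lib/ceph/$type/$cluster-$id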

[ceph-users] error while upgrading to infernalis last release on OSD serv

2015-10-15 Thread German Anders
Hi all, I'm trying to upgrade a ceph cluster (prev hammer release) to the latest release of infernalis. So far so good while upgrading the mon servers; all went fine. But when trying to upgrade the OSD servers I got an error while trying to start the osd services again. What I did first was to

[ceph-users] Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
Hi all, I would like to know whether, with this new release of Infernalis, there is a procedure somewhere for implementing the XIO messenger with IB and Ceph. Also whether it's possible to change an existing ceph cluster to this kind of new setup (the existing cluster does not have any production data yet).

[ceph-users] Fwd: Proc for Impl XIO mess with Infernalis

2015-10-14 Thread German Anders
rman* <gand...@despegar.com> -- Forwarded message ------ From: German Anders <gand...@despegar.com> Date: 2015-10-14 12:46 GMT-03:00 Subject: Proc for Impl XIO mess with Infernalis To: ceph-users <ceph-users@lists.ceph.com> Hi all, I would like to know if

Re: [ceph-users] ceph-deploy prepare btrfs osd error

2015-09-07 Thread German Anders
> done > > > > There appears to be an issue with zap not wiping the partitions correctly. > http://tracker.ceph.com/issues/6258 > > > > Yours seems slightly different though. Curious, what size disk are you > trying to use? > > > > Cheers, > > > >

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-05 Thread German Anders
! Best regards German On Saturday, September 5, 2015, Christian Balzer <ch...@gol.com> wrote: > > Hello, > > On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote: > > > Hi cephers, > > > >I've the following scheme: > > > > 7x OSD serve

[ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
Hi cephers, I have the following scheme: 7x OSD servers with: 4x 800GB SSD Intel DC S3510 (OSD-SSD) 3x 120GB SSD Intel DC S3500 (Journals) 5x 3TB SAS disks (OSD-SAS) The OSD servers are located in two separate racks with two power circuits each. I would like to know what is the
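Pre-Luminous there are no device classes, so the standard layout is one CRUSH root per tier with a rule each; a sketch (bucket, rule and pool names are examples):
  root ssd { ... }   # buckets holding the S3510 OSDs
  root sas { ... }   # buckets holding the 3TB SAS OSDs
  rule ssd_rule {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take ssd
      step chooseleaf firstn 0 type host
      step emit
  }
  # Point a pool at the SSD tier (pre-Luminous option name)
  $ ceph osd pool set ssd-pool crush_ruleset 1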

[ceph-users] ceph osd prepare btrfs

2015-09-04 Thread German Anders
Trying to do a prepare on an OSD with btrfs, and getting this error: [cibosd04][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdc [cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
n performance. If you can get by with just the > SAS disks for now and make a more informed decision about the cache tiering > when Infernalis is released then that might be your best bet. > > > > Otherwise you might just be best using them as a basic SSD only Pool. > > > > N

[ceph-users] Ceph new mon deploy v9.0.3-1355

2015-09-02 Thread German Anders
Hi cephers, trying to deploy a new ceph cluster with the master release (v9.0.3), and when trying to create the initial mons an error appears saying "admin_socket: exception getting command descriptions: [Errno 2] No such file or directory"; find the log: ... [ceph_deploy.mon][INFO ] distro

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
planning to > deploy that (i.e how many nodes/OSDs/SSD or HDDs/ EC or Replication etc. > etc.). > > > > Thanks & Regards > > Somnath > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, Sept

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
need to build all the components probably, not sure if it is added as git > submodule or not, Vu , could you please confirm ? > > > > Since we are working to make this solution work at scale, could you please > give us some idea what is the scale you are looking at for future > deploym

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
Hash: SHA256 > > Accelio and Ceph are still in heavy development and not ready for production. > > - > Robert LeBlanc > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 > > On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote: > Hi cepher

Re: [ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
> thanks, > > -vu > > > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* Tuesday, September 01, 2015 12:00 PM > *To:* Somnath Roy > > *Cc:* ceph-users > *Subject:* Re: [ceph-users] Accelio & Ceph

[ceph-users] Accelio & Ceph

2015-09-01 Thread German Anders
Hi cephers, I would like to know the production-readiness status of Accelio & Ceph. Does anyone have a home-made procedure implemented with Ubuntu? Recommendations, comments? Thanks in advance, Best regards, *German*

[ceph-users] ceph version for productive clusters?

2015-08-31 Thread German Anders
Hi cephers, What's the recommended version for new production clusters? Thanks in advance, Best regards, *German*

Re: [ceph-users] ceph version for productive clusters?

2015-08-31 Thread German Anders
Thanks a lot Kobi *German* 2015-08-31 14:20 GMT-03:00 Kobi Laredo <kobi.lar...@dreamhost.com>: > Hammer should be very stable at this point. > > *Kobi Laredo* > *Cloud Systems Engineer* | (*408) 409-KOBI* > > On Mon, Aug 31, 2015 at 8:51 AM, German Anders <gand...@des

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
stuff like disable barriers if you go with some cheaper drives that need it.) I'm not a CRUSH expert, there are more tricks to do before you set this up. Jan On 27 Aug 2015, at 18:31, German Anders gand...@despegar.com wrote: Hi Jan, Thanks for responding the email, regarding the cluster

[ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
Hi all, I'm planning to deploy a new Ceph cluster with IB FDR 56Gb/s and I have the following HW: *3x MON Servers:* 2x Intel Xeon E5-2600@v3 8C 256GB RAM 1xIB FRD ADPT-DP (two ports for PUB network) 1xGB ADPT-DP Disk Layout: SOFT-RAID: SCSI1 (0,0,0) (sda) - 120.0 GB ATA

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
need higher-grade SSDs. You can save money on memory. What will be the role of this cluster? VM disks? Object storage? Streaming?... Jan On 27 Aug 2015, at 17:56, German Anders gand...@despegar.com wrote: Hi all, I'm planning to deploy a new Ceph cluster with IB FDR 56Gb/s and I've

Re: [ceph-users] Disk/Pool Layout

2015-08-27 Thread German Anders
inline. A lot of it depends on your workload, but I'd say you almost certainly need higher-grade SSDs. You can save money on memory. What will be the role of this cluster? VM disks? Object storage? Streaming?... Jan On 27 Aug 2015, at 17:56, German Anders wrote: Hi all

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
-1 type host step emit } # end crush map *German* 2015-07-02 8:15 GMT-03:00 Lionel Bouton lionel+c...@bouton.name: On 07/02/15 12:48, German Anders wrote: The idea is to cache rbd at a host level. Also could be possible to cache at the osd level. We have high iowait and we need

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
, Emmanuel Florac eflo...@intellique.com wrote: Le Wed, 1 Jul 2015 17:13:03 -0300 German Anders gand...@despegar.com wrote: Hi cephers, Is anyone out there that implement enhanceIO in a production environment? any recommendation? any perf output to share

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
yeah 3TB SAS disks *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-02 9:04 GMT-03:00 Jan Schermer j...@schermer.cz: And those disks are spindles? Looks like there’s simply too

[ceph-users] any recommendation of using EnhanceIO?

2015-07-01 Thread German Anders
Hi cephers, Has anyone out there implemented EnhanceIO in a production environment? Any recommendation? Any perf output to share showing the difference between using it and not? Thanks in advance, *German*

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
I would probably go with smaller OSD disks; 4TB is too much to lose in case of a broken disk, so maybe more OSD daemons with smaller disks, maybe 1TB or 2TB in size. A 4:1 relationship is good enough; also I think that a 200G disk for the journals would be OK, so you can save some money there, the osd's of
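The docs' journal sizing rule of thumb, worked through with illustrative numbers:
  # osd journal size >= 2 * (expected throughput * filestore max sync interval)
  # e.g. a journal device absorbing ~400 MB/s with a 5 s sync interval:
  #   2 * 400 MB/s * 5 s = 4000 MB, so a 5-10 GB journal partition is plenty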

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
in that size cluster. We're still cessing out this part of the PoC engagement. ~~shane On 7/1/15, 5:05 PM, ceph-users on behalf of German Anders ceph-users-boun...@lists.ceph.com on behalf of gand...@despegar.com wrote: ask the other guys on the list, but for me to lose 4TB of data

[ceph-users] infiniband implementation

2015-06-29 Thread German Anders
Hi cephers, I want to know if there's any 'best' practice or procedure to implement Ceph with Infiniband FDR 56Gb/s for front- and back-end connectivity. Any crush tuning parameters, etc. The Ceph cluster has: - 8 OSD servers - 2x Intel Xeon E5 8C with HT - 128G RAM - 2x 200G Intel

Re: [ceph-users] infiniband implementation

2015-06-29 Thread German Anders
that jumps out at me is using the S3700 for OS but the S3500 for journals. I would use the S3700 for journals and S3500 for the OS. Looks pretty good other than that! -- *From: *German Anders gand...@despegar.com *To: *ceph-users ceph-users@lists.ceph.com *Sent

Re: [ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
Hi Lincoln, how are you? It's with RBD Thanks a lot, Best regards, *German* 2015-06-24 11:53 GMT-03:00 Lincoln Bryant linco...@uchicago.edu: Hi German, Is this with CephFS, or RBD? Thanks, Lincoln On Jun 24, 2015, at 9:44 AM, German Anders gand...@despegar.com wrote: Hi all

[ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
Hi all, Is there any IO bottleneck reported on kernel 3.18.3-031803-generic? I'm seeing a lot of iowait and the cluster is really getting slow, although there's actually not much going on. I read some time ago that there were some issues with kernel 3.18, so I would like to know what's the

Re: [ceph-users] kernel 3.18 io bottlenecks?

2015-06-24 Thread German Anders
-mq was introduced which brings two other limitations:- 1. Max queue depth of 128 2. IO’s sizes are restricted/split to 128kb *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *German Anders *Sent:* 24 June 2015 15:45 *To:* ceph-users *Subject
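The krbd queue limits are visible (and partly tunable) via sysfs; a sketch for checking them (device name is an example, ceilings depend on the kernel):
  $ cat /sys/block/rbd0/queue/max_sectors_kb
  $ cat /sys/block/rbd0/queue/nr_requests
  # Raise the per-request size cap, up to max_hw_sectors_kb
  $ echo 4096 | sudo tee /sys/block/rbd0/queue/max_sectors_kb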

Re: [ceph-users] krbd splitting large IO's into smaller IO's

2015-06-10 Thread German Anders
0.00 0.60 544.60 19.20 40348.00 148.08 118.31 217.00 17.33 217.22 1.67 90.80 Thanks in advance, Best regards, *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-06

Re: [ceph-users] High IO Waits

2015-06-10 Thread German Anders
Thanks a lot Nick, I'll try with more PGs, and if I don't see any improvement I'll add more OSD servers to the cluster. Best regards, *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-06-10
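The usual PG sizing rule of thumb plus the commands to apply it (pool name and numbers are examples):
  # Target: (num OSDs * 100) / replica count, rounded up to a power of two
  #   e.g. 48 OSDs * 100 / 3 = 1600 -> 2048
  $ ceph osd pool set rbd pg_num 2048
  $ ceph osd pool set rbd pgp_num 2048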
