Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
You need to repair the PG. This is often the first sign that your hard drive is failing. ceph pg repair *14.a5a* ceph pg repair *14.aa8* 2014-04-18 12:09 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn: Dear everyone, I lost 2 OSDs and my '.rgw.buckets' pool uses 2 replicas, therefore it has some
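
A minimal sketch of that repair flow, using the PG IDs from the thread (note that, as the follow-up below points out, ceph pg repair helps with inconsistent PGs rather than incomplete ones):

    # list PGs that are reporting problems
    ceph health detail | grep -E 'inconsistent|incomplete'
    # inspect a PG before acting on it
    ceph pg 14.a5a query
    # trigger a scrub-and-repair (for inconsistent PGs)
    ceph pg repair 14.a5a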

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Oh, sorry, I confused it with inconsistent. :) 2014-04-18 12:13 GMT+04:00 Ирек Фасихов malm...@gmail.com: You need to repair the PG. This is often the first sign that your hard drive is failing. ceph pg repair *14.a5a* ceph pg repair *14.aa8* 2014-04-18 12:09 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Ceph detects that a placement group is missing a necessary period of history from its log. If you see this state, report a bug, and try to start any failed OSDs that may contain the needed information. 2014-04-18 12:15 GMT+04:00 Ирек Фасихов malm...@gmail.com: Oh, sorry, confused
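
To see which OSDs an incomplete PG is still waiting for, the pg query output is usually the place to look; a hedged sketch (PG ID from the thread, OSD number illustrative):

    # the recovery_state section lists entries such as "down_osds_we_would_probe"
    ceph pg 14.a5a query
    # then bring the missing OSD back up on its host (sysvinit-style service)
    service ceph start osd.22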

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Did you restart the OSDs on all the disks that hold your incomplete PGs? (22, 23, 82) 2014-04-18 12:35 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn: Thanks, Ирек Фасихов, for your reply. I restarted the OSDs that contain the incomplete PGs, but it still fails :( On 04/18/2014 03:16 PM, Ирек Фасихов wrote: Ceph detects

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
... On 04/18/2014 03:42 PM, Ирек Фасихов wrote: Did you restart the OSDs on all the disks that hold your incomplete PGs? (22, 23, 82) 2014-04-18 12:35 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn: Thanks, Ирек Фасихов, for your reply. I restarted the OSDs that contain the incomplete PGs, but it still fails :( On 04/18/2014 03

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Is there any data in: ls -lsa /var/lib/ceph/osd/ceph-82/current/14.7c8_*/ ls -lsa /var/lib/ceph/osd/ceph-26/current/14.7c8_*/ 2014-04-18 14:36 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn: Hi Ирек Фасихов, I sent it to you :D, thank you! { state: incomplete, epoch: 42880, up

Re: [ceph-users] pgs stuck unclean in a pool without name

2014-04-18 Thread Ирек Фасихов
Please show the output of: ceph osd tree. 2014-04-18 14:51 GMT+04:00 Cedric Lemarchand ced...@yipikai.org: Hi, I am facing strange behaviour where a pool is stuck; I have no idea how this pool appeared in the cluster, given that I have not played with pool creation, *yet*. #

Re: [ceph-users] pgs stuck unclean in a pool without name

2014-04-18 Thread Ирек Фасихов
These pools are created automatically when S3 (ceph-radosgw) starts. By default, your configuration file sets the number of PGs to 333, but that is a lot for your configuration. 2014-04-18 15:28 GMT+04:00 Cedric Lemarchand ced...@yipikai.org: Hi, On 18/04/2014 13:14, Ирек Фасихов
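
A hedged sketch of lowering that default in ceph.conf before the radosgw pools are created (the option names are standard; 128 is an illustrative value, not taken from the thread):

    [global]
    osd pool default pg num = 128
    osd pool default pgp num = 128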

Re: [ceph-users] Errors while mapping the created image (Numerical result out of range)

2014-04-16 Thread Ирек Фасихов
Please show the output of dmesg. 2014-04-16 12:18 GMT+04:00 Srinivasa Rao Ragolu srag...@mvista.com: Hi All, I was able to successfully create a Ceph cluster on our proprietary distribution with manual ceph commands. *ceph.conf* [global] fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 mon

Re: [ceph-users] rbd: add failed: (34) Numerical result out of range ( Please help me)

2014-04-16 Thread Ирек Фасихов
Please show the output of rbd ls -l. 2014-04-16 13:59 GMT+04:00 Srinivasa Rao Ragolu srag...@mvista.com: Hi Wido, the output of the info command is given below: root@mon:/etc/ceph# rbd info sample rbd: error opening image sample: (95) Operation not supported 2014-04-16 09:57:24.575279 7f661c6e5780 -1

[ceph-users] Russian-speaking community CephRussian!

2014-04-16 Thread Ирек Фасихов
Hi, all. I created the Russian-speaking community CephRussian on Google+: https://plus.google.com/communities/104570726102090628516! Welcome! URL: https://plus.google.com/communities/104570726102090628516 -- Regards, Фасихов Ирек Нургаязович Mob.: +79229045757

Re: [ceph-users] Russian-speaking community CephRussian!

2014-04-16 Thread Ирек Фасихов
Loic, thanks for the link! 2014-04-16 18:46 GMT+04:00 Loic Dachary l...@dachary.org: Hi Ирек, If you organize meetups, feel free to add yourself to https://wiki.ceph.com/Community/Meetups :-) Cheers On 16/04/2014 13:22, Ирек Фасихов wrote: Hi, all. I created the Russian-speaking

[ceph-users] CephS3 and s3fs.

2014-04-14 Thread Ирек Фасихов
Hi, all. Does anyone have experience with s3fs + Ceph S3? It shows an error when uploading a file: kataklysm@linux-41gj:~ s3fs infas /home/kataklysm/s3/ -o url=http://s3.x-.ru; kataklysm@linux-41gj:~ rsync -av --progress temp/ s3 sending incremental file list rsync: failed to set times

Re: [ceph-users] Dell R515/510 with H710 PERC RAID | JBOD

2014-04-03 Thread Ирек Фасихов
You need to use Dell OpenManage: https://linux.dell.com/repo/hardware/. 2014-04-04 7:26 GMT+04:00 Punit Dambiwal hypu...@gmail.com: Hi, I want to use Dell R515/R510 for the OSD nodes: 1. 2x SSD for the OS (RAID 1) 2. 10x Seagate 3.5" 3TB HDD for OSDs (no RAID, JBOD)

Re: [ceph-users] Backport rbd.ko to 2.6.32 Linux Kernel

2014-04-01 Thread Ирек Фасихов
anything before backporting rbd.ko to the 2.6.32 Linux kernel. Thanks, Vilobh From: Ирек Фасихов malm...@gmail.com Date: Monday, March 31, 2014 at 10:56 PM To: Vilobh Meshram vilob...@yahoo-inc.com Cc: ceph-users@lists.ceph.com ceph-users@lists.ceph.com Subject: Re: [ceph-users] Backport rbd.ko

Re: [ceph-users] Backport rbd.ko to 2.6.32 Linux Kernel

2014-03-31 Thread Ирек Фасихов
There is no backport for 2.6.32, and none is planned. 2014-04-01 9:19 GMT+04:00 Vilobh Meshram vilob...@yahoo-inc.com: What is the procedure to backport rbd.ko to the 2.6.32 Linux kernel? Thanks, Vilobh ___ ceph-users mailing list

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ирек Фасихов
Hi Ilya. Do you perhaps have the required patches for the kernel? 2014-03-25 14:51 GMT+04:00 Ирек Фасихов malm...@gmail.com: Yep, it works that way. 2014-03-25 14:45 GMT+04:00 Ilya Dryomov ilya.dryo...@inktank.com: On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов malm...@gmail.com wrote: Hmmm, create

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
Hi, Ilya. I added the files (crushd and osddump) to a folder in Google Drive: https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymcusp=sharing 2014-03-25 0:19 GMT+04:00 Ilya Dryomov ilya.dryo...@inktank.com: On Mon, Mar 24, 2014 at 9:46 PM, Ирек Фасихов malm...@gmail.com wrote

[ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
I created a cache pool following the documentation: http://ceph.com/docs/master/dev/cache-pool/ *ceph osd pool create cache 100* *ceph osd tier add rbd cache* *ceph osd tier cache-mode cache writeback* *ceph osd tier set-overlay rbd cache* *ceph osd pool set cache hit_set_type bloom* *ceph osd pool set
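
The preview is cut off; a cache tier set up this way is usually also given hit-set and sizing parameters, roughly like the following (values are illustrative, not from the original message):

    ceph osd pool set cache hit_set_count 1
    ceph osd pool set cache hit_set_period 3600
    ceph osd pool set cache target_max_bytes 1000000000000
    ceph osd pool set cache cache_target_dirty_ratio 0.4
    ceph osd pool set cache cache_target_full_ratio 0.8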

Re: [ceph-users] firefly timing

2014-03-18 Thread Ирек Фасихов
I'm ready to test the tiering. 2014-03-18 11:07 GMT+04:00 Stefan Priebe - Profihost AG s.pri...@profihost.ag: Hi Sage, I really would like to test the tiering. Is there any detailed documentation about it and how it works? Greets, Stefan On 18.03.2014 05:45, Sage Weil wrote: Hi

Re: [ceph-users] Replication lag in block storage

2014-03-15 Thread Ирек Фасихов
Which model of hard drives do you have? 2014-03-14 21:59 GMT+04:00 Greg Poirier greg.poir...@opower.com: We are stressing these boxes pretty spectacularly at the moment. On every box I have one OSD that is pegged on I/O almost constantly. ceph-1: Device: rrqm/s wrqm/s r/s w/s

Re: [ceph-users] Fluctuating I/O speed degrading over time

2014-03-07 Thread Ирек Фасихов
What model are your SSD disks? On 7 March 2014 at 13:50, Indra Pramana in...@sg.or.id wrote: Hi, I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs with SSD drives, and I noted that the I/O speed, especially write access to the cluster, is degrading over time. When we

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Ирек Фасихов
are not affected... Are there other ideas on this problem? Thank you. 2014-02-07 Konrad Gutkowski konrad.gutkow...@ffs.pl: Hi, On 07.02.2014 at 08:14, Ирек Фасихов malm...@gmail.com wrote: [...] Why might sequential read be so slow? Any ideas on this issue? IIRC you need to set
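
The quoted advice is cut off; one common knob for slow sequential reads inside a KVM guest (an assumption here, not confirmed by the thread) is the guest block device readahead:

    # inside the VM, assuming the RBD-backed disk appears as /dev/vdb
    echo 4096 > /sys/block/vdb/queue/read_ahead_kb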

[ceph-users] RBD+KVM problems with sequential read

2014-02-06 Thread Ирек Фасихов
Hi all. Hosts: 5x Dell R815, 128 GB RAM, 25 OSDs + 5 SSDs (journal + system). Network: 2x10Gb + LACP. Kernel: 2.6.32. QEMU emulator version 1.4.2, Copyright (c) 2003-2008 Fabrice Bellard. Pools: root@kvm05:~# ceph osd dump | grep 'rbd' pool 5 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash

Re: [ceph-users] Calculating required number of PGs per pool

2014-01-24 Thread Ирек Фасихов
Hi. Please read: http://ceph.com/docs/master/rados/operations/placement-groups/ 2014/1/24 Graeme Lambert glamb...@adepteo.net Hi, I've got 6 OSDs and I want 3 replicas per object, so following the function that's 200 PGs per OSD, which is 1,200 overall. I've got two RBD pools and the
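
For reference, the formula on that page gives a cluster-wide total rather than a per-OSD figure; a quick worked example with the numbers from the question (6 OSDs, 3 replicas):

    # (OSDs * 100) / replicas, then round up to the next power of two
    echo $(( 6 * 100 / 3 ))    # 200, so round up to 256 PGs in total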

Re: [ceph-users] Low write speed

2014-01-17 Thread Ирек Фасихов
Hi, Виталий. Do you have a sufficient number of PGs? 2014/1/17 Никитенко Виталий v1...@yandex.ru: Good day! Please help me solve the problem. The setup is as follows: an ESXi server with 1Gb NICs; it has a local store store2Tb and two iSCSI datastores connected to the second server. The second

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Ирек Фасихов
I use the H700 on Dell R815, 4 nodes. No performance problems. Configuration: 1 Intel 530 SSD for OS and journal; 5 OSD HDDs, 600 GB each, Dell-certified WD/Hitachi/Seagate. Replication size = 2. IOPS ~4k with no VMs. On 15 January 2014 at 15:47, Alexandre DERUMIER aderum...@odiso.com wrote: Hello List,

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Ирек Фасихов
Kernel patch for Intel S3700, Intel 530... diff --git a/drivers/scsi/sd.c b/drivers//scsi/sd.c --- a/drivers/scsi/sd.c 2013-09-14 12:53:21.0 +0400 +++ b/drivers//scsi/sd.c 2013-12-19 21:43:29.0 +0400 @@ -137,6 +137,7 @@ char *buffer_data; struct

Re: [ceph-users] ceph start error

2014-01-11 Thread Ирек Фасихов
global_init: *unable to open config file from search list* /etc/ceph/ceph.conf 2014/1/11 You, Rong rong@intel.com: Hi, I encountered a problem when starting up the ceph cluster. When I run the command service ceph -a start, the process always hangs. The error result is:
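
That error means the tools could not find a ceph.conf in the default search path; a quick check under the default paths (assumed here), or pointing the command at an explicit config file:

    ls -l /etc/ceph/ceph.conf
    ceph -c /etc/ceph/ceph.conf -s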

Re: [ceph-users] ceph start error

2014-01-11 Thread Ирек Фасихов
the ceph.conf in the directory /etc/ceph/. How can I resolve it? *From:* Ирек Фасихов [mailto:malm...@gmail.com] *Sent:* Saturday, January 11, 2014 10:24 PM *To:* You, Rong *Cc:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-users] ceph start error global_init: *unable to open config file

Re: [ceph-users] libvirt qemu/kvm/rbd inside VM read slow

2014-01-10 Thread Ирек Фасихов
You need to use VirtIO: target dev='*vdb*' bus='*virtio*'/ 2014/1/10 steffen.thorha...@iti.cs.uni-magdeburg.de: Hi, I'm using a 1GBit network, with 16 OSDs on 8 hosts, XFS, and journals on SSD. I have a read performance problem in a libvirt KVM/QEMU/RBD VM on a Ceph client host. All
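
For context, a libvirt disk definition for an RBD-backed volume on the virtio bus looks roughly like this (pool/image name, monitor address and cache mode are illustrative, not from the thread):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='192.168.0.1' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>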

Re: [ceph-users] libvirt qemu/kvm/rbd inside VM read slow

2014-01-10 Thread Ирек Фасихов
wrote: On 01/10/2014 01:21 PM, Ирек Фасихов wrote: You need to use VirtIO: target dev='*vdb*' bus='*virtio*'/ With this parameter there is no real performance increase: dd if=/dev/zero of=zerofile-2 bs=1G count=8 8+0 records in 8+0 records out 8589934592 bytes (8.6 GB) copied

Re: [ceph-users] HEALTH_WARN too few pgs per osd (3 min 20)

2013-12-27 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/operations/placement-groups/ 2013/12/27 German Anders gand...@despegar.com: Hi all, I have the following warning message (WARN) in my cluster: ceph@ceph-node04:~$ sudo ceph status cluster 50ae3778-dfe3-4492-9628-54a8918ede92 * health
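
The usual fix for that warning is to raise pg_num and pgp_num on the affected pools; a hedged sketch (pool name and target count are illustrative):

    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128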

Re: [ceph-users] rbd: add failed: (1) Operation not permitted

2013-12-27 Thread Ирек Фасихов
*sudo rbd map ceph-pool/RBDTest -n client.admin -k /home/ceph/ceph-cluster-prd/ceph.client.admin.keyring* 2013/12/27 German Anders gand...@despegar.com: Hi Cephers, I have a basic question: I've already set up a Ceph cluster with 55 OSD daemons running and 3 MONs, with a total of 7TB

Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Ирек Фасихов
The recommendation is 1 GB of RAM per OSD disk. On 21 December 2013 at 17:54, hemant burman hemant.bur...@gmail.com wrote: Can someone please help out here? On Sat, Dec 21, 2013 at 9:47 AM, hemant burman hemant.bur...@gmail.com wrote: Hello, We have boxes with 24 drives, 2TB each, and want

Re: [ceph-users] problem with delete or rename a pool

2013-11-28 Thread Ирек Фасихов
Hi, see ceph osd pool delete --help or ceph osd pool delete -h. 2013/11/29 You, RongX rongx@intel.com: Hi, I made a mistake and created a pool named -help. Running the command ceph osd lspools returns: 0 data,1 metadata,2 rbd,3 testpool1,4 testpool2,5

Re: [ceph-users] PG state diagram

2013-11-25 Thread Ирек Фасихов
Yes, I would like to see this graph. Thanks 2013/11/25 Regola, Nathan (Contractor) nathan_reg...@cable.comcast.com Is there a vector graphics file (or a higher resolution file of some type) of the state diagram on the page below, as I can't read the text. Thanks, Nate

Re: [ceph-users] ceph osd thrash?

2013-11-11 Thread Ирек Фасихов
Thanks, Greg. On 12 November 2013 at 4:00, Gregory Farnum g...@inktank.com wrote: On Mon, Nov 11, 2013 at 2:16 AM, Ирек Фасихов malm...@gmail.com wrote: Hello community, I do not understand the argument ceph osd thrash. Why is this option needed? Description

Re: [ceph-users] Help with CRUSH maps

2013-10-31 Thread Ирек Фасихов
https://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds (see the rule ssd-primary). On 31 October 2013 at 17:29, Alexis GÜNST HORN alexis.gunsth...@outscale.com wrote: Hello to all, here is my ceph osd tree output: # id weight type name
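
Applying a rule like ssd-primary from that page normally means extracting, decompiling, editing and re-injecting the CRUSH map; a hedged sketch of that workflow (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt and add the rule from the linked documentation
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new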

Re: [ceph-users] Ceph PG's incomplete and OSD drives lost

2013-10-30 Thread Ирек Фасихов
2013/10/31 Иван Кудрявцев kudryavtsev...@bw-sw.com: Hello, list. I ran into very big trouble during a Ceph upgrade from Bobtail to Cuttlefish. My OSDs started to crash and go stale, so the load average went above 100 on the node, and after I stopped an OSD I was unable to launch it again because of errors. So I started to reformat

[ceph-users] interested questions

2013-10-29 Thread Ирек Фасихов
Hi, all. I am interested in the following questions: 1. Does the number of HDDs affect cluster performance? 2. Does anyone have experience running KVM virtualization and Ceph on the same server? Thanks! -- Regards, Фасихов Ирек Нургаязович Mob.: +79229045757

Re: [ceph-users] Ceph OSDs not using private network

2013-10-22 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/configuration/network-config-ref/ On 22 October 2013 at 18:22, Abhay Sachan abhay...@gmail.com wrote: Hi all, I have a Ceph cluster set up with 3 nodes, which have a 1Gbps public network and a 10Gbps private cluster network that is not accessible from the public
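
The linked reference comes down to declaring both networks in ceph.conf so OSD replication traffic uses the cluster network; a hedged sketch with illustrative subnets:

    [global]
    public network  = 192.168.1.0/24
    cluster network = 10.0.0.0/24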

Re: [ceph-users] ceph uses too much disk space!!

2013-10-06 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/operations/placement-groups/ 2013/10/5 Linux Chips linux.ch...@gmail.com: Hi everyone, we have a small testing cluster, one node with 4 OSDs of 3TB each. I created one RBD image of 4TB, and now the cluster is nearly full: # ceph df GLOBAL: SIZE
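
RBD images are thin-provisioned, so ceph df counts only the objects actually written (times the replica count); one common way to see how much of an image is really in use (image name is illustrative):

    rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB used" }'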