You need to repair the pg. This is the first sign that your hard drive is
failing.
ceph pg repair 14.a5a
ceph pg repair 14.aa8
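For context, the pg ids to repair are normally taken from the cluster health
output first; a minimal sketch of that workflow, reusing a pg id from this
thread:

    # list the PGs currently flagged inconsistent
    ceph health detail | grep inconsistent
    # then repair each reported pg id
    ceph pg repair 14.a5a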
2014-04-18 12:09 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn:
Dear everyone,
I lost 2 OSDs, and my '.rgw.buckets' pool uses 2 replicas; therefore it
has some
Oh, sorry, I confused it with inconsistent. :)
2014-04-18 12:13 GMT+04:00 Ирек Фасихов malm...@gmail.com:
You need to repair the pg. This is the first sign that your hard drive is
failing.
ceph pg repair 14.a5a
ceph pg repair 14.aa8
2014-04-18 12:09 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn
Ceph detects that a placement group is missing a necessary period of
history from its log. If you see this state, report a bug, and try to start
any failed OSDs that may contain the needed information.
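To see why a PG is stuck incomplete, the usual first step is to query it
directly; a minimal sketch, using the pg id 14.7c8 that appears later in
this thread:

    # dump state, peering history, and the OSDs blocking recovery for one pg
    ceph pg 14.7c8 query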
2014-04-18 12:15 GMT+04:00 Ирек Фасихов malm...@gmail.com:
Oh, sorry, I confused
Did you restart all the OSDs whose disks hold your incomplete PGs? (22, 23, 82)
2014-04-18 12:35 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn:
Thank you, Ирек Фасихов, for your reply.
I restarted the OSDs that contain the incomplete PGs, but it still fails :(
On 04/18/2014 03:16 PM, Ирек Фасихов wrote:
Ceph detects
On 04/18/2014 03:42 PM, Ирек Фасихов wrote:
Did you restart all the OSDs whose disks hold your incomplete PGs? (22, 23, 82)
2014-04-18 12:35 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn:
Thank you, Ирек Фасихов, for your reply.
I restarted the OSDs that contain the incomplete PGs, but it still fails :(
On 04/18/2014 03
Is there any data in these directories:
ls -lsa /var/lib/ceph/osd/ceph-82/current/14.7c8_*/
ls -lsa /var/lib/ceph/osd/ceph-26/current/14.7c8_*/
2014-04-18 14:36 GMT+04:00 Ta Ba Tuan tua...@vccloud.vn:
Hi Ирек Фасихов
I sent it to you :D,
Thank you!
{ "state": "incomplete",
  "epoch": 42880,
  "up
Please show the output of this command: ceph osd tree.
2014-04-18 14:51 GMT+04:00 Cedric Lemarchand ced...@yipikai.org:
Hi,
I am facing strange behaviour where a pool is stuck. I have no idea
how this pool appeared in the cluster, as I have not played with pool
creation, *yet*.
#
These pools are created automatically when S3 (ceph-radosgw) starts. By
default, your configuration file specifies the number of
pgs = 333. But that is a lot for your configuration.
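The pgs = 333 figure comes from the pool defaults in the configuration file;
a minimal sketch of lowering them in ceph.conf (128 is an illustrative value,
not from this thread):

    [global]
    # defaults inherited by automatically created pools, e.g. the rgw pools
    osd pool default pg num = 128
    osd pool default pgp num = 128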
2014-04-18 15:28 GMT+04:00 Cedric Lemarchand ced...@yipikai.org:
Hi,
On 18/04/2014 13:14, Ирек Фасихов
Please show the output of dmesg.
2014-04-16 12:18 GMT+04:00 Srinivasa Rao Ragolu srag...@mvista.com:
Hi All,
I was able to successfully create a Ceph cluster on our proprietary
distribution with manual ceph commands
ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon
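The snippet cuts off at the mon entry; for reference, a minimal sketch of how
the rest of such a hand-written [global] section commonly looks (the hostname
and IP are illustrative assumptions, not from this thread):

    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon initial members = mon1        # assumed monitor hostname
    mon host = 192.168.1.10           # assumed monitor IP
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx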
Please show the output of this command: rbd ls -l.
2014-04-16 13:59 GMT+04:00 Srinivasa Rao Ragolu srag...@mvista.com:
Hi Wido,
The output of the info command is given below:
root@mon:/etc/ceph# rbd info sample
rbd: error opening image sample: (95) Operation not supported
2014-04-16 09:57:24.575279 7f661c6e5780 -1
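For what it's worth, EOPNOTSUPP (95) from rbd often points at an image whose
format or features the client cannot handle; a hedged check under that
assumption (the image name sample2 is illustrative):

    # list images together with their format
    rbd ls -l
    # recreate as a format 1 image if the client lacks format 2 support
    rbd create sample2 --size 1024 --image-format 1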
Hi, All.
I created the Russian-speaking community CephRussian on Google+!
Welcome!
URL: https://plus.google.com/communities/104570726102090628516
--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
Loic, thanks for the link!
2014-04-16 18:46 GMT+04:00 Loic Dachary l...@dachary.org:
Hi Ирек,
If you organize meetups, feel free to add yourself to
https://wiki.ceph.com/Community/Meetups :-)
Cheers
On 16/04/2014 13:22, Ирек Фасихов wrote:
Hi, All.
I created the Russian-speaking
Hi, All.
Does anyone have experience with s3fs+CephS3?
It shows an error when uploading a file:
kataklysm@linux-41gj:~ s3fs infas /home/kataklysm/s3/ -o url=http://s3.x-.ru
kataklysm@linux-41gj:~ rsync -av --progress temp/ s3
sending incremental file list
rsync: failed to set times
You need to use Dell OpenManage:
https://linux.dell.com/repo/hardware/.
2014-04-04 7:26 GMT+04:00 Punit Dambiwal hypu...@gmail.com:
Hi,
I want to use Dell R515/R510 for the OSD nodes:
1. 2x SSD for the OS (RAID 1)
2. 10x Seagate 3.5" 3TB HDD for OSDs (no RAID... JBOD)
anything before backporting rbd.ko to the 2.6.32 Linux kernel.
Thanks,
Vilobh
From: Ирек Фасихов malm...@gmail.com
Date: Monday, March 31, 2014 at 10:56 PM
To: Vilobh Meshram vilob...@yahoo-inc.com
Cc: ceph-users@lists.ceph.com ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Backport rbd.ko
There is no backport for 2.6.32, and none is planned for the future.
2014-04-01 9:19 GMT+04:00 Vilobh Meshram vilob...@yahoo-inc.com:
What is the procedure to backport rbd.ko to the 2.6.32 Linux kernel?
Thanks,
Vilobh
Hi Ilya. Do you maybe have the required patches for the kernel?
2014-03-25 14:51 GMT+04:00 Ирек Фасихов malm...@gmail.com:
Yep, it works that way.
2014-03-25 14:45 GMT+04:00 Ilya Dryomov ilya.dryo...@inktank.com:
On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов malm...@gmail.com wrote:
Hmmm, create
Hi, Ilya.
I added the files (crushd and osddump) to a folder on Google Drive.
https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
2014-03-25 0:19 GMT+04:00 Ilya Dryomov ilya.dryo...@inktank.com:
On Mon, Mar 24, 2014 at 9:46 PM, Ирек Фасихов malm...@gmail.com wrote:
Created a cache pool following the documentation:
http://ceph.com/docs/master/dev/cache-pool/
ceph osd pool create cache 100
ceph osd tier add rbd cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay rbd cache
ceph osd pool set cache hit_set_type bloom
ceph osd pool set
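The last command is cut off; going by the cache-pool document linked above,
the remaining hit-set and sizing parameters typically look like this (the
values are illustrative assumptions):

    ceph osd pool set cache hit_set_count 1
    ceph osd pool set cache hit_set_period 3600
    ceph osd pool set cache target_max_bytes 1000000000000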
I'm ready to test the tiering.
2014-03-18 11:07 GMT+04:00 Stefan Priebe - Profihost AG
s.pri...@profihost.ag:
Hi Sage,
I really would like to test the tiering. Is there any detailed
documentation about it and how it works?
Greets,
Stefan
On 18.03.2014 05:45, Sage Weil wrote:
Hi
Which model of hard drives do you have?
2014-03-14 21:59 GMT+04:00 Greg Poirier greg.poir...@opower.com:
We are stressing these boxes pretty spectacularly at the moment.
On every box I have one OSD that is pegged for IO almost constantly.
ceph-1:
Device: rrqm/s wrqm/s r/s w/s
What model is your SSD disk?
On 7 March 2014 at 13:50, Indra Pramana in...@sg.or.id
wrote:
Hi,
I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs
with SSD drives, and I have noted that the I/O speed, especially write access
to the cluster, is degrading over time. When we
are not affected...
Are there other ideas on this problem?
Thank you.
2014-02-07 Konrad Gutkowski konrad.gutkow...@ffs.pl:
Hi,
On 07.02.2014 at 08:14, Ирек Фасихов malm...@gmail.com wrote:
[...]
Why might the sequential read speed be so low? Any ideas on this issue?
IIRC you need to set
Hi All.
Hosts: 5x Dell R815, 128 GB RAM, 25 OSDs + 5 SSDs (journal + system).
Network: 2x10Gb+LACP
Kernel: 2.6.32
QEMU emulator version 1.4.2, Copyright (c) 2003-2008 Fabrice Bellard
POOLs:
root@kvm05:~# ceph osd dump | grep 'rbd'
pool 5 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash
Hi.
Please read: http://ceph.com/docs/master/rados/operations/placement-groups/
2014/1/24 Graeme Lambert glamb...@adepteo.net
Hi,
I've got 6 OSDs and I want 3 replicas per object, so following the
formula that's 200 PGs per OSD, which is 1,200 overall.
I've got two RBD pools and the
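Note that the placement-groups document linked above computes a per-cluster
total, not a per-OSD figure; a worked example under that formula:

    # total PGs ≈ (OSDs × 100) / replicas, rounded up to a power of two
    # (6 × 100) / 3 = 200  →  pg_num = 256
    ceph osd pool set rbd pg_num 256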
Hi, Виталий.
Do you have a sufficient number of PGs?
2014/1/17 Никитенко Виталий v1...@yandex.ru
Good day! Please help me solve a problem. The scheme is as follows:
an ESXi server with 1Gb NICs; it has a local 2TB store and two iSCSI
storages connected to the second server.
The second
I use the H700 on Dell R815, 4 nodes. No performance problems.
Configuration:
1 SSD Intel 530 - OS and journal.
5 OSD HDDs 600G: DELL-certified - WD/HITACHI/SEAGATE.
Replication size = 2. IOPS ~ 4k with no VM.
On 15 Jan 2014 at 15:47, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hello List,
Kernel Patch for Intel S3700, Intel 530...
diff --git a/drivers/scsi/sd.c b/drivers//scsi/sd.c
--- a/drivers/scsi/sd.c	2013-09-14 12:53:21.000000000 +0400
+++ b/drivers//scsi/sd.c	2013-12-19 21:43:29.000000000 +0400
@@ -137,6 +137,7 @@
char *buffer_data;
struct
global_init: unable to open config file from search list /etc/ceph/ceph.conf
2014/1/11 You, Rong rong@intel.com
Hi,
I encountered a problem when starting up the Ceph cluster.
When I run the command service ceph -a start,
the process always hangs. The error is:
the ceph.conf in the directory /etc/ceph/. How can I
resolve it?
From: Ирек Фасихов [mailto:malm...@gmail.com]
Sent: Saturday, January 11, 2014 10:24 PM
To: You, Rong
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph start error
global_init: unable to open config file
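The usual causes are a missing /etc/ceph/ceph.conf or one that the ceph tools
cannot read; a minimal sketch of checking and passing the path explicitly
(the path is the standard default; the rest is an assumption):

    # confirm the file exists and is readable
    ls -l /etc/ceph/ceph.conf
    # or point the tools at the config explicitly
    ceph -c /etc/ceph/ceph.conf -s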
You need to use VirtIO.
<target dev='vdb' bus='virtio'/>
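For context, that target element lives inside the disk definition of the
libvirt domain XML; a minimal sketch of an RBD disk on the VirtIO bus (the
pool/image and monitor names are illustrative assumptions):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/vm-disk'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>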
2014/1/10 steffen.thorha...@iti.cs.uni-magdeburg.de
Hi,
I'm using a 1GBit network: 16 OSDs on 8 hosts, with XFS and journals on SSD.
I have a read performance problem in a libvirt kvm/qemu/rbd VM
on a ceph client host. All
wrote:
On 01/10/2014 01:21 PM, Ирек Фасихов wrote:
You need to use VirtIO.
<target dev='vdb' bus='virtio'/>
With this parameter there is not a real performance increase:
dd if=/dev/zero of=zerofile-2 bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB) copied
http://ceph.com/docs/master/rados/operations/placement-groups/
2013/12/27 German Anders gand...@despegar.com
Hi to All,
I have the following warning message (WARN) in my cluster:
ceph@ceph-node04:~$ sudo ceph status
cluster 50ae3778-dfe3-4492-9628-54a8918ede92
health
sudo rbd map ceph-pool/RBDTest -n client.admin -k
/home/ceph/ceph-cluster-prd/ceph.client.admin.keyring
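Once the map succeeds the image appears as a kernel block device; a short
sketch of verifying and cleaning up (the device number is an assumption):

    # list images currently mapped through the kernel client
    rbd showmapped
    # the image shows up as e.g. /dev/rbd0; unmap when done
    sudo rbd unmap /dev/rbd0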
2013/12/27 German Anders gand...@despegar.com
Hi Cephers,
I had a basic question: I've already set up a Ceph cluster with 55
OSD daemons running and 3 MONs, with a total of 7TB
The recommendation is 1 GB of RAM per OSD disk.
On 21 Dec 2013 at 17:54, hemant burman hemant.bur...@gmail.com
wrote:
Can someone please help out here?
On Sat, Dec 21, 2013 at 9:47 AM, hemant burman hemant.bur...@gmail.com wrote:
Hello,
We have boxes with 24 Drives, 2TB each and want
Hi
ceph osd pool delete --help
OR
ceph osd pool delete -h
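For a pool name starting with a dash, the problem is keeping it from being
parsed as an option; a hedged sketch (whether this version of the CLI honours
the -- separator is an assumption to verify):

    # pool deletion needs the name twice plus the confirmation flag
    ceph osd pool delete -- -help -help --yes-i-really-really-mean-it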
2013/11/29 You, RongX rongx@intel.com
Hi,
I made a mistake and created a pool named -help.
Executing the command ceph osd lspools returns:
0 data, 1 metadata, 2 rbd, 3 testpool1, 4 testpool2, 5
Yes, I would like to see this graph.
Thanks
2013/11/25 Regola, Nathan (Contractor) nathan_reg...@cable.comcast.com
Is there a vector graphics file (or a higher resolution file of some type)
of the state diagram on the page below? I can't read the text.
Thanks,
Nate
Thanks, Greg.
On 12 Nov 2013 at 4:00, Gregory Farnum g...@inktank.com
wrote:
On Mon, Nov 11, 2013 at 2:16 AM, Ирек Фасихов malm...@gmail.com wrote:
Hello community.
I do not understand the command: ceph osd thrash.
Why is this option needed?
Description
https://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
See rule ssd-primary
On 31 Oct 2013 at 17:29, Alexis GÜNST HORN
alexis.gunsth...@outscale.com wrote:
Hello to all,
Here is my ceph osd tree output :
# id	weight	type name
2013/10/31 Иван Кудрявцев kudryavtsev...@bw-sw.com
Hello, List.
I ran into very big trouble during a Ceph upgrade from Bobtail to Cuttlefish.
My OSDs started to crash and go stale, so the load average went to 100+ on
the node; after I stopped an OSD I was unable to launch it again because of
errors. So, I started to
reformat
Hi, All.
I am interested in the following questions:
1. Does the number of HDDs affect cluster performance?
2. Is there any experience of running KVM virtualization and Ceph on
the same server?
Thanks!
--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
http://ceph.com/docs/master/rados/configuration/network-config-ref/
On 22 Oct 2013 at 18:22, Abhay Sachan abhay...@gmail.com
wrote:
Hi All,
I have a Ceph cluster set up with 3 nodes, which has a 1Gbps public network
and a 10Gbps private cluster network that is not accessible from the public
http://ceph.com/docs/master/rados/operations/placement-groups/
2013/10/5 Linux Chips linux.ch...@gmail.com
Hi everyone,
we have a small testing cluster: one node with 4 OSDs of 3TB each. I
created one RBD image of 4TB. Now the cluster is nearly full:
# ceph df
GLOBAL:
SIZE