Hi,
After some hardware errors, one of the PGs on our backup server is 'incomplete'.
I exported the PG without problems, as described here:
https://ceph.com/community/incomplete-pgs-oh-my/
After removing the PG from all OSDs and importing it to one OSD, the PG is
still 'incomplete'.
I want to recover only some piece of
I am also having the same issue; can somebody help me out? But for me it is
HTTP/1.1 404 Not Found.
Hi,
On 06/18/2015 12:54 PM, Alexandre DERUMIER wrote:
Hi,
For the read benchmark
with fio, what is the iodepth?
My fio 4k randread results:
iodepth=1 : bw=6795.1KB/s, iops=1698
iodepth=2 : bw=14608KB/s, iops=3652
iodepth=4 : bw=32686KB/s, iops=8171
iodepth=8 : bw=76175KB/s, iops=19043
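(For reference, numbers like these would come from an fio invocation along
these lines; a sketch, where the target device and runtime are assumptions:
  fio --name=randread --rw=randread --bs=4k --ioengine=libaio --direct=1 \
      --iodepth=8 --runtime=60 --time_based --filename=/dev/rbd0
with --iodepth varied between runs.)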
On 06/18/2015 12:23 PM, Mark Nelson wrote:
So... in order to increase performance, do I need to change the SSD
drives?
I'm just guessing, but because your read performance is slow as well,
you may have multiple issues going on. The Intel 530 being slow at O_DSYNC
writes is one of them, but it's
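A quick way to check a drive's O_DSYNC write behaviour is the usual
journal-style dd test; a sketch, where the target path is an example --
point it at a file on the SSD under test:
  dd if=/dev/zero of=/mnt/ssd/testfile bs=4k count=10000 oflag=direct,dsync
The throughput it reports is roughly the ceiling for a Ceph journal on
that drive.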
Can you please let me know if you solved this issue?
Hi Shane,
We (Bloomberg) have many large clusters and we currently use Ubuntu. We
have just recently upgraded to Trusty (14.04). Our new super object store
that we're building out is using Trusty, but we may switch to RHEL because
of other departments joining in; a final decision has not been made.
The journal should be a raw partition and should not have any filesystem on it.
Inside your /var/lib/ceph/osd/ceph-# you should make a symlink to the
journal partition that you are going to use for that OSD.
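For example, a sketch assuming the journal partition is /dev/sdb1 and the
OSD is osd.0 (adjust both; a /dev/disk/by-partuuid/ path is safer against
device renames):
  ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal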
On Thu, Jun 18, 2015 at 2:36 AM, Shane Gibson shane_gib...@symantec.com wrote:
All - I am
Hello Everyone,
I have set up a new cluster with the Ceph Hammer version (0.94.2). The install
went through fine without any issues, but from the admin node I am not able to
execute any of the Ceph commands.
Error:
root@ceph-main:/cephcluster# ceph auth export
2015-06-18 12:43:28.922367 7f54d286b700
Do you have the admin keyring in the /etc/ceph directory?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:35 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Hammer 0.94.2: Error when
Hello Naga,
The keyring file is present under a folder I created for Ceph. Are you saying
the same needs to be copied to the /etc/ceph folder?
Regards
Teclus
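Yes; something along these lines should work (a sketch, where the source
path stands in for whatever folder you created):
  sudo cp /your/ceph/folder/ceph.client.admin.keyring /etc/ceph/
  sudo chmod +r /etc/ceph/ceph.client.admin.keyring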
From: B, Naga Venkata [mailto:nag...@hp.com]
Sent: Thursday, June 18, 2015 10:37 PM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM
Also, this needs the correct permissions set; otherwise it will give this
error.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of B,
Naga Venkata
Sent: Thursday, June 18, 2015 10:07 AM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco);
Has there been any testing/feedback on using the 8-core Intel Atom C2750 with
EC pools? Or any use case really? There are some enticing 1U 12x3.5" chassis
out there with the Atom processor. The idea of low-power, dense, EC-pool
storage has a lot of appeal. We're looking to build out a
Dear Ceph Community,
We are fetching the mon and osd bootstrap keyring values from our own
encrypted data bags. We are successful in setting the mon_secret to a preset
value, but fail to do so for the /var/lib/ceph/bootstrap-osd keyring.
Similar to how we set mon_secret, we set osd_secret. We
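For comparison, writing that keyring by hand would look roughly like this
(a sketch; the key value is a placeholder for the preset secret):
  ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
      --name=client.bootstrap-osd --add-key=<your-preset-osd-secret>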
Hello everybody,
I thought I would share the benchmarks from these four SSDs I tested
(see attachment).
I still have some questions:
#1 *Data Set Management TRIM supported (limit 1 block)
vs
*Data Set Management TRIM supported (limit 8 blocks)
and how this affects Ceph
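The TRIM limit comes from the drive's IDENTIFY data; you can read it with
hdparm (the device path is an example):
  sudo hdparm -I /dev/sda | grep -i TRIM
A larger limit lets one TRIM command carry more ranges, so fewer commands
are needed to discard the same amount of space.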
For the permissions use sudo chmod +r /etc/ceph/ceph.client.admin.keyring
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:21 AM
To: B, Naga Venkata; ceph-users@lists.ceph.com
Hello,
On Thu, 18 Jun 2015 17:48:12 +0200 Jelle de Jong wrote:
Hello everybody,
I thought I would share the benchmarks from these four SSDs I tested
(see attachment)
None of these are DC-level SSDs of course, though the HyperX at least
supposedly can handle 2.5 DWPD.
Alas that info
Hey Cephers,
So it looks like we have the list of approved attendees for the Ceph
Hackathon in Hillsboro, OR that Intel is being kind enough to host.
http://pad.ceph.com/p/hackathon_2015-08
If you are not on that list and would like to be, please contact me as
soon as possible to see if we can
Hey cephers,
The schedule and videoconference details have been added to the CDS Jewel page.
http://tracker.ceph.com/projects/ceph/wiki/CDS_Jewel
If you see any problems with my timezone math or have a scheduling
conflict that won't allow you to attend your blueprint session, please
let me know.
On 15 June 2015 at 13:09, Gregory Farnum g...@gregs42.com wrote:
On Mon, Jun 15, 2015 at 4:03 AM, Roland Giesler rol...@giesler.za.net
wrote:
I have a small cluster of 4 machines and quite a few drives. After
about 2-3 weeks CephFS fails. It's not properly mounted anymore in
Hi,
I've just noticed an odd behaviour with the btrfs OSDs. We monitor the
amount of disk writes on each device; our granularity is 10s (every 10s
the monitoring system collects the total number of sectors written and
write IOs performed since boot and computes both the B/s and IO/s).
With only
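That collection is essentially a delta over /proc/diskstats; a simplified
sketch, with the device name as an example (field 10 is sectors written,
field 8 is write IOs completed, both counted since boot):
  awk '$3 == "sdb" { print $10 * 512, "bytes written,", $8, "write IOs" }' /proc/diskstats
Sampling that every 10s and dividing the deltas by the interval gives the
B/s and IO/s.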
I just realized I forgot to add the proper context:
this is with Firefly 0.80.9, and the btrfs OSDs are running on kernel
4.0.5 (this was happening with previous kernel versions according to our
monitoring history); xfs OSDs run on 4.0.5 or 3.18.9. There are 23 OSDs
total and 2 of them are using
On 06/17/2015 08:30 PM, Somnath Roy wrote:
However, I'd rather not set the level to 0/0, as that would disable all
logging from the MONs
I don't think so. All the error scenarios and stack traces (in case of a
crash) are supposed to be logged with log level 0. But, generally, we need the
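For what it's worth, the usual compromise is to lower the log level while
keeping the in-memory gather level, e.g. in ceph.conf (values are an
example, not a recommendation):
  [mon]
      debug mon = 0/5
      debug paxos = 0/5
The second number is the in-memory level whose entries get dumped if the
daemon crashes.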
Oh that's very good to know. Are there details posted anywhere?
Mark
On 06/18/2015 02:46 AM, Dan van der Ster wrote:
Thanks, that's a nice article.
We're pretty happy with the SSDs he lists as Good, but note that
they're not totally immune to these types of issues -- indeed we've
found that
On 06/18/2015 04:49 AM, Jacek Jarosiewicz wrote:
On 06/17/2015 04:19 PM, Mark Nelson wrote:
SSDs are INTEL SSDSC2BW240A4
Ah, if I'm not mistaken that's the Intel 530, right? You'll want to see
this thread by Stefan Priebe:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg05667.html
Thanks, that's a nice article.
We're pretty happy with the SSDs he lists as Good, but note that
they're not totally immune to these types of issues -- indeed we've
found that bcache can crash a DC S3700, and Intel confirmed it was a
firmware bug.
Cheers, Dan
On Wed, Jun 17, 2015 at 8:36 PM,
All - I am building my first Ceph cluster, and doing it the hard way:
manually, without the aid of ceph-deploy. I have successfully built the
mon cluster and am now adding OSDs.
My main question:
How do I prepare the journal prior to the prepare/activate stages of
OSD creation?
More
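A rough sketch of doing it by hand, assuming a fresh 10G journal partition
on /dev/sdb for osd.0 and that the OSD data directory already exists (the
typecode GUID marks the partition as a Ceph journal):
  sudo sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
  sudo ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal
  sudo ceph-osd -i 0 --mkjournal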
Those are strange numbers; where are you getting them from? Test the drives
directly with fio in every combination, that should tell you what's
happening.
Jan
On 18 Jun 2015, at 07:52, Mateusz Skała mateusz.sk...@budikom.net wrote:
Thanks for the answer,
I made some tests, first leave
Hi Yuan,
Thanks for the answer.
Our main use case is to replace AWS S3 with object storage in a private cloud,
very preferably with an S3-compatible API. But we also know that we want to
perform some machine learning and data processing with Spark in the
not-so-distant future on the data residing in the object
On 06/17/2015 04:19 PM, Mark Nelson wrote:
SSDs are INTEL SSDSC2BW240A4
Ah, if I'm not mistaken that's the Intel 530, right? You'll want to see
this thread by Stefan Priebe:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg05667.html
In fact it was the difference between the Intel 520 and
Hi,
For the read benchmark
with fio, what is the iodepth?
My fio 4k randread results:
iodepth=1 : bw=6795.1KB/s, iops=1698
iodepth=2 : bw=14608KB/s, iops=3652
iodepth=4 : bw=32686KB/s, iops=8171
iodepth=8 : bw=76175KB/s, iops=19043
iodepth=16 : bw=173651KB/s, iops=43412
iodepth=32 : bw=336719KB/s,