Thanks Georgios
I will wait.
- Karan Singh -
On 14 Jul 2014, at 15:37, Georgios Dimitrakakis wrote:
Hi Karan!
Due to the late reception of the login info I've also missed
a very big part of the webinar.
They did send me an e-mail though saying that they will let me know
as soon
Hi all!
I am setting up a new cluster with 10 OSDs
and the state is degraded!
# ceph health
HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean
#
There are only the default pools
# ceph osd lspools
0 data,1 metadata,2 rbd,
with each one having 512 pg_num and 512 pgp_num
# ceph osd dump
be set to 1
-
so that the cluster would still work with at least one PG being up.
After I've changed the min_size to 1 the cluster sorted itself out.
Try doing this for your pools.
Andrei
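For reference, the suggested change maps to one command per pool; a sketch using the default pool names shown earlier in the thread (adjust to your own pools):

```
ceph osd pool set data min_size 1
ceph osd pool set metadata min_size 1
ceph osd pool set rbd min_size 1
```

Keep in mind that min_size 1 trades safety for availability, so it is worth raising it back once the cluster is healthy again.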
-
FROM: Georgios Dimitrakakis
TO: ceph-users@lists.ceph.com
SENT: Saturday, 29
Hi!
I had a very similar issue a few days ago.
For me it wasn't too much of a problem since the cluster was new
without data and I could force recreate the PGs. I really hope that in
your case it won't be necessary to do the same thing.
As a first step try to reduce the min_size from 2 to 1
Hi all!
I have a CEPH installation with radosgw and the radosgw.log in the
/var/log/ceph directory is empty.
In the ceph.conf I have
log file = /var/log/ceph/radosgw.log
debug ms = 1
debug rgw = 20
under the: [client.radosgw.gateway]
Any ideas?
Best,
George
Hi!
On CentOS 6.6 I have installed CEPH and ceph-radosgw
When I try to (re)start the ceph-radosgw service I am getting the
following:
# service ceph-radosgw restart
Stopping radosgw instance(s)...[ OK ]
Starting radosgw instance(s)...
/usr/bin/dirname: extra
I was thinking the same thing for the following implementation:
I would like to have an RBD volume mounted and accessible at the same
time by different VMs (using OCFS2).
Therefore I was also thinking that I had to put VMs on the internal
CEPH network by adding a second NIC and plugging that
Hi all!
I am using AWS SDK JS v.2.0.29 to perform a multipart upload into
Radosgw with ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3) and I am getting a 403 error.
I believe that the ID which is sent with all requests and has been
urlencoded by the aws-sdk-js doesn't match
For example if I try to perform the same multipart upload at an older
version ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
I can see the upload ID in the apache log as:
PUT
/test/.dat?partNumber=25&uploadId=I3yihBFZmHx9CCqtcDjr8d-RhgfX8NW
HTTP/1.1 200 - -
It would be nice to see where and how uploadId
is being calculated...
Thanks,
George
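Not the radosgw code itself, but the class of bug is easy to reproduce with Python's urllib; the upload ID below is hypothetical, shaped like the one in the Apache log above:

```python
from urllib.parse import quote

# Hypothetical multipart upload ID containing a '/' (real radosgw IDs differ)
upload_id = "2/I3yihBFZmHx9CCqtcDjr8d"

lenient = quote(upload_id)          # default: '/' is left as-is
strict = quote(upload_id, safe="")  # an SDK may percent-encode everything

print(lenient)  # 2/I3yihBFZmHx9CCqtcDjr8d
print(strict)   # 2%2FI3yihBFZmHx9CCqtcDjr8d

# '-' and '~' are RFC 3986 unreserved, so they survive either way, which is
# why a separator that never needs escaping is the safer choice.
print(quote("a-b~c", safe=""))  # a-b~c
```

If the server signs one form and the client sends the other, the signatures disagree and the request is rejected with a 403.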
For example if I try to perform the same multipart upload at an older
version ceph version 0.72.2
(a913ded2ff138aefb8cb84d347d72164099cfd60)
I can see the upload ID in the apache log as:
PUT
I've just created issue #10271
Best,
George
On Fri, 5 Dec 2014 09:30:45 -0800, Yehuda Sadeh wrote:
It looks like a bug. Can you open an issue on tracker.ceph.com,
describing what you see?
Thanks,
Yehuda
On Fri, Dec 5, 2014 at 7:17 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote
to the repositories?
Regards,
George
On Mon, 08 Dec 2014 19:47:59 +0200, Georgios Dimitrakakis wrote:
I've just created issue #10271
Best,
George
On Fri, 5 Dec 2014 09:30:45 -0800, Yehuda Sadeh wrote:
It looks like a bug. Can you open an issue on tracker.ceph.com,
describing what you see?
Thanks
at 8:38 AM, Yehuda Sadeh yeh...@redhat.com
wrote:
I don't think it has been fixed recently. I'm looking at it now, and
not sure why it hasn't triggered before in other areas.
Yehuda
On Thu, Dec 11, 2014 at 5:55 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
This issue seems very similar
, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update the ceph-radosgw package I get
to build.
Yehuda
On Thu, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update
the
dash character that you were using cannot be used safely in that
context. Maybe tilde ('~') would work.
Yehuda
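The reason the dash is unsafe is the RPM naming scheme: package file names are NAME-VERSION-RELEASE, split on dashes, so a dash inside the version field is ambiguous, while '~' stays inside a field (and sorts before everything, which is why it marks pre-releases). A tiny illustration with hypothetical NVR strings:

```python
# Hypothetical NVR (name-version-release) strings for illustration
good = "ceph-radosgw-0.80.7-0.el6"
bad = "ceph-radosgw-0.80.7-rc1-0.el6"  # dash inside the version field

# RPM-style parsing takes the last two dash-separated fields as
# version and release; everything before them is the name.
print(good.rsplit("-", 2))  # ['ceph-radosgw', '0.80.7', '0.el6']

# With a dash in the version, the same split misattributes the fields:
print(bad.rsplit("-", 2))   # ['ceph-radosgw-0.80.7', 'rc1', '0.el6']
```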
On Fri, Dec 12, 2014 at 2:41 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Dear Yehuda,
I have installed the patched version as you can see:
$ radosgw --version
ceph
This is very silly of me...
The file wasn't writable by apache.
I am writing it down for future reference.
G.
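For the record, the fix boils down to something like the following (a sketch; the path matches the ceph.conf snippet quoted below, and apache is the usual httpd user on CentOS):

```
touch /var/log/ceph/radosgw.log
chown apache:apache /var/log/ceph/radosgw.log
```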
Hi all!
I have a CEPH installation with radosgw and the radosgw.log in the
/var/log/ceph directory is empty.
In the ceph.conf I have
log file = /var/log/ceph/radosgw.log
debug ms
that I see are to fix the client library, and/or
to
modify the character to one that does not require escaping. Sadly
the
dash character that you were using cannot be used safely in that
context. Maybe tilde ('~') would work.
Yehuda
On Fri, Dec 12, 2014 at 2:41 AM, Georgios Dimitrakakis
Hi all!
I have a single CEPH node which has two network interfaces.
One is configured to be accessed directly by the internet (153.*) and
the other one is configured on an internal LAN (192.*)
For the moment radosgw is listening on the external (internet)
interface.
Can I configure
.
Thanks,
Yehuda
On Fri, Dec 12, 2014 at 5:59 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
How silly of me!!!
I've just noticed that the file isn't writable by Apache!
I'll be back with the logs...
G.
I'd be more than happy to provide you all the info but for some
unknown
of the external IP.
In my case, I only have Apache bound to the internal interface. My
load balancer has an external and internal IP, and I'm able to talk to
it on both interfaces.
On Mon, Dec 15, 2014 at 2:00 PM, Georgios Dimitrakakis wrote:
Hi all!
I have a single CEPH node which has two network
Hi! I am facing the exact same problem!!!
I am also on a CentOS 6.5 64bit system
Does anyone have any suggestions? Where to look? What to check??
zhongku did you manage to solve this problem?
On the other hand if I use python as shown here:
http://ceph.com/docs/master/radosgw/s3/python/ I can
On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling requiretty in visudo on all nodes.
There is no requiretty in the sudoers file, or indeed any file
under /etc.
The manpage says that requiretty is off by default, but I
Hi Craig,
I am also interested at the Zabbix templates and scripts if you can
publish them.
Regards,
G.
On Mon, 30 Jun 2014 18:15:12 -0700, Craig Lewis wrote:
You should check out Calamari (https://github.com/ceph/calamari [3]),
Inktank's monitoring and administration tool.
I started
The same here... Neither I nor my colleagues have received it.
G.
On Thu, 10 Jul 2014 16:55:22 +0200 (CEST), Alexandre DERUMIER wrote:
Hi,
sorry to spam the mailing list,
but there is an Inktank/Mellanox webinar in 10 minutes,
and I haven't received access even though I registered
yesterday (same for my
That makes two of us...
G.
On Thu, 10 Jul 2014 17:12:08 +0200 (CEST), Alexandre DERUMIER wrote:
Ok, sorry, we have finally received the login, a bit late.
Sorry again for spamming the mailing list
- Mail original -
De: Alexandre DERUMIER aderum...@odiso.com
À: ceph-users
12:40:54 +0300, Karan Singh wrote:
Hey, I have missed the webinar. Is it available for later review, or
are there slides?
- Karan -
On 10 Jul 2014, at 18:27, Georgios Dimitrakakis wrote:
That makes two of us...
G.
On Thu, 10 Jul 2014 17:12:08 +0200 (CEST), Alexandre DERUMIER wrote:
Ok, sorry, we
Dear all,
I am following this guide http://ceph.com/docs/master/radosgw/config/
to setup Object Storage on CentOS 6.5.
My problem is that when I try to start the service as indicated here:
http://ceph.com/docs/master/radosgw/config/#restart-services-and-start-the-gateway
I get nothing
#
AQCdkHZR2NBYMBAATe/rqIwCI96LTuyS3gmMXp==
or ceph auth list for all keys.
Key generation is done by get-or-create-key like this (but in this
case
for bootstrap-osd):
ceph auth get-or-create-key client.bootstrap-osd mon 'allow profile
bootstrap-osd'
Udo
On 15.02.2014 15:35, Georgios Dimitrakakis wrote:
Dear all
Could someone help me with the following error when I try to add
keyring entries:
# ceph -k /etc/ceph/ceph.client.admin.keyring auth add
client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
Error EINVAL: entity client.radosgw.gateway exists but key does not
match
#
Best,
G.
I managed to solve my problem by deleting the key from the list and
re-adding it!
Best,
G.
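In command form, the delete-and-re-add amounts to something like this (a sketch using the entity and keyring path from the original error):

```
ceph auth del client.radosgw.gateway
ceph auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
```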
On Mon, 17 Feb 2014 10:46:36 +0200, Georgios Dimitrakakis wrote:
Could someone help me with the following error when I try to add
keyring entries:
# ceph -k /etc/ceph/ceph.client.admin.keyring auth
Could someone check this: http://pastebin.com/DsCh5YPm
and let me know what am I doing wrong?
Best,
G.
On Sat, 15 Feb 2014 20:27:16 +0200, Georgios Dimitrakakis wrote:
1) ceph -s is working as expected
# ceph -s
cluster c465bdb2-e0a5-49c8-8305-efb4234ac88a
health HEALTH_OK
the IRC
channel!
Best,
G.
On Mon, 17 Feb 2014 11:44:37 +0200, Georgios Dimitrakakis wrote:
Could someone check this: http://pastebin.com/DsCh5YPm
and let me know what am I doing wrong?
Best,
G.
On Sat, 15 Feb 2014 20:27:16 +0200, Georgios Dimitrakakis wrote:
1) ceph -s is working
Dear all,
do I need to configure anything special in order to enable CORS support in
CEPH?
Are there any links on how to test it?
Thanks for your help!
G.
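One way to test it is to PUT a CORS configuration on a bucket (the S3 API sets CORS per bucket via the ?cors subresource) and then issue a preflight OPTIONS request. Whether this radosgw version honors it is exactly the open question; the snippet below only builds the XML body one would send, with illustrative values:

```python
import xml.etree.ElementTree as ET

# Build the per-bucket CORS document for "PUT /<bucket>?cors".
# The origin/method/age values here are illustrative.
root = ET.Element("CORSConfiguration")
rule = ET.SubElement(root, "CORSRule")
ET.SubElement(rule, "AllowedOrigin").text = "*"
ET.SubElement(rule, "AllowedMethod").text = "GET"
ET.SubElement(rule, "MaxAgeSeconds").text = "3000"

body = ET.tostring(root, encoding="unicode")
print(body)
```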
___
ceph-users mailing list
ceph-users@lists.ceph.com
Hi!
I have installed ceph and created two osds and was very happy with that
but apparently not everything was correct.
Today after a system reboot the cluster comes up and for a few moments
it seems that it's ok (using the ceph health command) but after a few
seconds the ceph health
ceph.client.admin.keyring and
ceph.bootstrap-osd.keyring
should match on all the cluster nodes.
Best of luck.
Srinivas.
On Wed, Mar 5, 2014 at 3:04 PM, Georgios Dimitrakakis wrote:
Hi!
I have installed ceph and created two osds and was very happy with
that but apparently not everything
happens next...
Best,
G.
On Wed, 05 Mar 2014 11:50:57 +0100, Wido den Hollander wrote:
On 03/05/2014 11:21 AM, Georgios Dimitrakakis wrote:
My setup consists of two nodes.
The first node (master) is running:
-mds
-mon
-osd.0
and the second node (CLIENT) is running:
-osd.1
Therefore I
, 2014 at 3:44 PM, Georgios Dimitrakakis wrote:
My setup consists of two nodes.
The first node (master) is running:
-mds
-mon
-osd.0
and the second node (CLIENT) is running:
-osd.1
Therefore I've restarted ceph services on both nodes
Leaving the ceph -w running for as long as it can after
, Mar 5, 2014 at 3:44 PM, Georgios Dimitrakakis wrote:
My setup consists of two nodes.
The first node (master) is running:
-mds
-mon
-osd.0
and the second node (CLIENT) is running:
-osd.1
Therefore I've restarted ceph services on both nodes
Leaving the ceph -w running for as long as it can after
Hi Christian,
On Fri, 30 Jan 2015 01:22:53 +0200 Georgios Dimitrakakis wrote:
Urged by a previous post by Mike Winfield where he suffered a
leveldb
loss
I would like to know which files are critical for CEPH operation
and
must
be backed-up regularly and how are you people doing
/master/start/quick-start-preflight/#open-required-ports
John
On Sat, Feb 7, 2015 at 4:33 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi all!
I am integrating my OpenStack Cluster with CEPH in order to be able
to
provide volumes for the instances!
I have managed to perform all
://ceph.com/docs/master/start/quick-start-preflight/#open-required-ports
John
On Sat, Feb 7, 2015 at 4:33 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi all!
I am integrating my OpenStack Cluster with CEPH in order to be able
to
provide volumes for the instances!
I have managed to perform
Hi all!
I would like to expand our CEPH Cluster and add a second OSD node.
In this node I will have ten 4TB disks dedicated to CEPH.
What is the proper way of putting them in the already available CEPH
node?
I guess that the first thing to do is to prepare them with ceph-deploy
and mark
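The firefly-era ceph-deploy flow for that is roughly the following (a sketch; the hostname and device names are hypothetical, repeated per disk):

```
ceph-deploy disk zap osdnode2:sdb
ceph-deploy osd prepare osdnode2:sdb
ceph-deploy osd activate osdnode2:sdb1
```

Some people also add new disks with a low CRUSH weight and raise it gradually to spread the backfill load over time.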
-cluster
[3]
Jiri
On 15/01/2015 06:36, Georgios Dimitrakakis wrote:
Hi all!
I would like to expand our CEPH Cluster and add a second OSD node.
In this node I will have ten 4TB disks dedicated to CEPH.
What is the proper way of putting them in the already available
CEPH node?
I guess
at 3:58 PM, Georgios Dimitrakakis wrote:
Hi Craig!
For the moment I have only one node with 10 OSDs.
I want to add a second one with 10 more OSDs.
Each OSD in every node is a 4TB SATA drive. No SSD disks!
The data are approximately 40GB and I will do my best to have zero
or at least very very
in would be the best time to switch back to host level
replication. The more data you have, the more painful that change
will become.
On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis wrote:
Hi Jiri,
thanks for the feedback.
My main concern is if it's better to add each OSD one-by-one and
wait
.
I believe that this was the action that solved my problems. Not quite
confident though :-(
Thanks a lot to everyone that spent some time to deal with my problem!
All the best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
Sage,
correct me if I am wrong but this is when you
have some more VMs/servers/clients on 192.*
network... ?
On 14 March 2015 at 19:38, Georgios Dimitrakakis wrote:
Andrija,
I have two cards!
One on 15.12.* and one on 192.*
Obviously the 15.12.* is the external network (real public IP
address, e.g. used to access the node via SSH)
That's why I
Indeed it is!
Thanks!
George
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary wrote:
Hi Ceph,
In an attempt to clarify which Ceph release is stable, LTS or
development, a new page was added to the documentation:
http://ceph.com/docs/master/releases/ [1] It is a matrix
Hi Italo,
Check the S3 Bucket OPS at :
http://ceph.com/docs/master/radosgw/s3/bucketops/
or use any of the examples provided in Python
(http://ceph.com/docs/master/radosgw/s3/python/) or PHP
(http://ceph.com/docs/master/radosgw/s3/php/) or JAVA
; run 'new' to
create a new cluster
Regards,
George
Hi,
I think ceph-deploy mon add (instead of create) is what you should be
using.
Cheers
On 13/03/2015 22:25, Georgios Dimitrakakis wrote:
On an already available cluster I've tried to add a new monitor!
I have used ceph-deploy mon
March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs/v0.80.5/start/quick-ceph-deploy/#adding-monitors
[4]
Should I first create and then add?? What is the proper order???
Should
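As Loic notes elsewhere in the thread, for joining an existing cluster the verb is add, which deploys the daemon and adds it to the quorum in one step; a sketch with a hypothetical hostname, plus a quorum check afterwards:

```
ceph-deploy mon add mon2
ceph quorum_status --format json-pretty
```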
the first one from scratch?
What would that mean about the data??
Best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph-mon.log now:
2015-03-14 08:16:39.286823 7f9f6920b700 1
mon.fu@0(electing).elector(1) init, last seen epoch 1
2015
are running ceph-deploy from
NOT the original folder...
On 13 March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs/v0.80.5/start/quick-ceph-deploy/#adding-monitors
[4]
Should I first create
to
create a new cluster...
...means (if I'm not mistaken) that you are running ceph-deploy
from
NOT the original folder...
On 13 March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs
would that mean about the data??
Best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph-mon.log now:
2015-03-14 08:16:39.286823 7f9f6920b700 1
mon.fu@0(electing).elector(1) init, last seen epoch 1
2015-03-14 08:16:42.736674
...and provision new MONs or
OSDs,
etc.
Message:
[ceph_deploy][ERROR ] RuntimeError: mon keyring not found; run new to
create a new cluster...
...means (if I'm not mistaken) that you are running ceph-deploy from
NOT the original folder...
On 13 March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall
a new cluster...
...means (if I'm not mistaken) that you are running ceph-deploy from
NOT the original folder...
On 13 March 2015 at 23:03, Georgios Dimitrakakis wrote:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs/v0.80.5/start
Yes Sage!
Priority is to fix things!
Right now I don't have a healthy monitor!
Can I remove all of them and add the first one from scratch?
What would that mean about the data??
Best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph
of data
written to some particular OSD, will generate 3 x 1GB of more writes,
to the replicas... - which ideally will take place over separate NICs
to speed up things...
On 14 March 2015 at 17:43, Georgios Dimitrakakis wrote:
Hi all!!
What is the meaning of public_network in ceph.conf
Hi all!!
What is the meaning of public_network in ceph.conf?
Is it the network that OSDs are talking and transferring data?
I have two nodes with two IP addresses each. One for internal network
192.168.1.0/24
and one external 15.12.6.*
I see the following in my logs:
osd.0 is down since
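In ceph.conf terms the usual split is the following (a sketch based on the two subnets above; the /24 on the external side is an assumption, and cluster_network can simply be omitted when everything shares one NIC):

```
[global]
# client <-> MON/OSD traffic
public_network = 15.12.6.0/24
# OSD <-> OSD replication and heartbeat traffic
cluster_network = 192.168.1.0/24
```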
network) and
thus
speed up.
If i.e. replica count on pool is 3, that means, each 1GB of data
written to some particular OSD, will generate 3 x 1GB of more writes,
to the replicas... - which ideally will take place over separate
NICs
to speed up things...
On 14 March 2015 at 17:43, Georgios
= yy
[mon.zz]
mon_addr = x.x.x.x:6789
host = zz
On 14 March 2015 at 19:14, Georgios Dimitrakakis wrote:
I thought that it was easy but apparently it's not!
I have the following in my conf file
mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
public_network =
cards in servers, then you may use the first 1G for
client traffic, and the second 1G for OSD-to-OSD replication...
best
On 14 March 2015 at 19:33, Georgios Dimitrakakis wrote:
Andrija,
Thanks for your help!
In my case I just have one 192.* network, so should I put that for
both?
Besides monitors do I
?
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 12, 2015 7:39 PM, Georgios Dimitrakakis wrote:
Hi Robert!
Thanks for the feedback! I am aware of the fact that the number of
the monitors should be odd
but this is a very basic setup just to test CEPH functionality
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to restart
a CEPH monitor a strange monitor appears!
Here is the output:
#/etc/init.d/ceph restart mon
=== mon.master ===
=== mon.master ===
Stopping Ceph mon.master on master...kill 10766...done
=== mon.master ===
I forgot to say that the monitors form a quorum and the cluster's
health is OK
so there aren't any serious troubles other than the annoying message.
Best,
George
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to restart
a CEPH monitor a strange monitor appears!
Here
PM, Georgios Dimitrakakis wrote:
I forgot to say that the monitors form a quorum and the cluster's
health is OK
so there aren't any serious troubles other than the annoying
message.
Best,
George
Hi all!
I have updated from 0.80.8 to 0.80.9 and every time I try to
restart
CEPH a monitor
Daniel,
on CentOS the logrotate script was not being invoked correctly because
it referred everywhere to the service as radosgw:
e.g.
service radosgw reload >/dev/null or
initctl reload radosgw cluster=$cluster id=$id 2>/dev/null || :
but there isn't any radosgw service!
I had to change it into ceph-radosgw
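A sketch of what the corrected stanza might look like after the rename (the rotation options are illustrative; only the service name in postrotate is the actual fix described above):

```
/var/log/ceph/radosgw*.log {
    daily
    rotate 7
    compress
    missingok
    sharedscripts
    postrotate
        service ceph-radosgw reload >/dev/null 2>&1 || :
    endscript
}
```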
Urged by a previous post by Mike Winfield where he suffered a leveldb
loss
I would like to know which files are critical for CEPH operation and
must
be backed-up regularly and how are you people doing it?
Any points much appreciated!
Regards,
G.
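Not an authoritative list, but the small, cheap-to-copy pieces usually worth saving are the config, the keyrings, and exported maps; a sketch (the /backup path is hypothetical, and the monitor leveldb store itself needs the daemon stopped to be copied safely):

```
cp -a /etc/ceph/ceph.conf /backup/
cp -a /etc/ceph/*.keyring /backup/
ceph mon getmap -o /backup/monmap.bin
ceph osd getcrushmap -o /backup/crushmap.bin
ceph auth export -o /backup/auth.keyring
```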
Hi all!
I had a CEPH Cluster with 10x OSDs all of them in one node.
Since the cluster was built from the beginning with just one OSD node
the crushmap had as a default
the replication to be on OSDs.
Here is the relevant part from my crushmap:
# rules
rule replicated_ruleset {
Indeed it is not necessary to have any OSD entries in the Ceph.conf
file
but what happens in the event of a disk failure resulting in changing
the mount device?
From what I can see, OSDs are mounted from entries in /etc/mtab
(I am on CentOS 6.6)
like this:
/dev/sdj1
://www.rsyslog.com;] start
May 11 11:54:56 srv-lab-ceph-node-01 rsyslogd: rsyslogd's groupid
changed to 103
May 11 11:54:57 srv-lab-ceph-node-01 rsyslogd: rsyslogd's userid
changed to 100
Sorry for the noise, guys. Georgios, in any case, thanks for helping.
2015-05-10 12:44 GMT+03:00 Georgios Dimitrakakis
. Georgios, in any way, thanks for helping.
2015-05-10 12:44 GMT+03:00 Georgios Dimitrakakis
gior...@acmac.uoc.gr:
Timofey,
maybe your best chance is to connect directly to the server and see
what is
going on.
Then you can try debug why the problem occurred. If you don't want
to wait
until
disks between servers (if you take the journals with it). It's magic!
But I think I just gave away the secret.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On May 7, 2015 5:16 AM, Georgios Dimitrakakis wrote:
Indeed it is not necessary to have any OSD entries
susceptible to errors.
Best regards,
George
Can you try
ceph osd pool rename new-name
On Tue, May 5, 2015 at 12:43 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi all!
Somehow I have a pool without a name...
$ ceph osd lspools
3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8
Hi all!
Somehow I have a pool without a name...
$ ceph osd lspools
3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10
.intent-log,11 .usage,12 .users,13 .users.email,14 .users.swift,15
.users.uid,16 .rgw.root,17 .rgw.buckets.index,18 .rgw.buckets,19
.rgw.buckets.extra,20
cluster and can test what happens
with clients if a crushmap like that is injected.
2015-05-10 8:23 GMT+03:00 Georgios Dimitrakakis
gior...@acmac.uoc.gr:
Hi Timofey,
assuming that you have more than one OSD host and that the
replication
factor is equal (or less) to the number of the hosts why
Hi Timofey,
assuming that you have more than one OSD host and that the replication
factor is equal (or less) to the number of the hosts why don't you just
change the crushmap to host replication?
You just need to change the default CRUSHmap rule from
step chooseleaf firstn 0 type osd
to
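The message is cut off here, but the target is presumably the same rule with the bucket type changed; a sketch of the decompiled rule with host-level placement (rule number and size bounds are the usual defaults, not taken from this cluster):

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```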
please excuse any typos.
On May 11, 2015 5:32 AM, Georgios Dimitrakakis wrote:
Hi Robert,
just to make sure I got it correctly:
Do you mean that the /etc/mtab entries are completely ignored and
no matter what the order
of the /dev/sdX device is Ceph will just mount correctly the
osd/ceph-X
Hi!
Do you by any chance have your OSDs placed at a local directory path
rather than on an otherwise unused physical disk?
If I remember correctly from a similar setup that I had performed in
the past the ceph df command accounts for the entire disk and not just
for the OSD data directory. I am
On Tue, 19 May 2015 13:45:50 +0300, Georgios Dimitrakakis wrote:
Hi!
The QEMU Venom vulnerability (http://venom.crowdstrike.com/) got my
attention and I would
like to know what are you people doing in order to have the latest
patched QEMU version
working with Ceph RBD?
In my case I am using the qemu
:33 PM, Georgios Dimitrakakis wrote:
I am trying to build the packages manually and I was wondering:
is the flag --enable-rbd enough to have full Ceph functionality?
Does anybody know what other flags I should include in order to
have the same
functionality as the original CentOS package plus
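One way (a sketch, not the only one) to recover the distro build's exact flag set is to pull the source RPM and read the %build section of its spec, then append --enable-rbd to the same configure line; the file name below is guessed from the version mentioned later in the thread:

```
rpm -ivh qemu-kvm-0.12.1.2-2.448.el6.src.rpm
grep -n "configure" ~/rpmbuild/SPECS/qemu-kvm.spec
```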
/7Server/en/RHEV/SRPMS/
[21]
On May 19, 2015 2:47 PM, Georgios Dimitrakakis wrote:
Erik,
are you talking about the ones here :
http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/
[20] ???
From what I see the version looks rather old: 0.12.1.2-2.448
How can one verify that it has been
(EMBARGOED CVE-2015-3456 qemu-kvm: qemu: floppy disk controller
flaw [rhel-6.6.z])
HTH.
Cheers,
Brad
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, May 19, 2015 at 3:47 PM, Georgios Dimitrakakis wrote:
Erik
Pavel,
unfortunately there isn't a way to rename a pool using its ID, as I
have learned the hard way myself since I faced the
exact same issue a few months ago.
It would be a good idea for developers to also include a way to
manipulate (rename, delete, etc.) pools using the ID which is
cheers
jc
--
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch
http://www.switch.ch/stories
On 26.05.2015, at 19:12, Georgios Dimitrakakis
gior
jens-christian.fisc...@switch.ch
http://www.switch.ch
http://www.switch.ch/stories
On 26.05.2015, at 19:12, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Jens-Christian,
how did you test that? Did you just try to write to them
simultaneously? Any other tests that one can perform
Jens-Christian,
how did you test that? Did you just try to write to them
simultaneously? Any other tests that one can perform to verify that?
In our installation we have a VM with 30 RBD volumes mounted which are
all exported via NFS to other VMs.
No one has complained so far but
Jan,
this is very handy to know! Thanks for sharing with us!
People, do you believe that it would be nice to have a place where we
can gather either good practices or problem resolutions or tips from the
community? We could have a voting system and those with the most votes
(or above a
All,
I was wondering if anyone has integrated his CEPH installation with
Zenoss monitoring software and is willing to share his knowledge.
Best regards,
George
spect you should
be able to see how the rbd image was prefixed/named at the time of
the delete.
HTH,
Brad
If you go through yours OSDs and look for the directories for PG
index 20, you might find some fragments from the deleted volume, but
it's a long shot...
On Aug 8, 2016, at 4:39 PM, Georg
On 13 Aug 2016, at 03:19, Bill Sharer wrote:
If all the system disk does is handle the o/s (ie osd journals are
on dedicated or osd drives as well), no problem. Just rebuild the
system and copy the ceph.conf back in when you re-install ceph.
Keep a spare copy of your
Hi,
On 08.08.2016 10:50, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes
You may be able to find the rbd objects.
On Mon, Aug 8, 2016 at 7:28 PM, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 10:50, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first
let me
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and 3 MON
nodes (2 of them are the OSD nodes as well) all with ceph version 0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes (2 of them are the OSD nodes as well) all with ceph
Hi,
On 08.08.2016 10:50, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes
As a closure I would like to thank all people who contributed with
their knowledge in my problem although the final decision was not to try
any sort of recovery since the effort required would have been
tremendous with ambiguous results (to say the least).
Jason, Ilya, Brad, David, George,
Hello Ceph community!
I would like some help with a new CEPH installation.
I have installed Jewel on CentOS7 and after the reboot my OSDs are not
mounted automatically and as a consequence ceph is not operating
normally...
What can I do?
Could you please help me solve the problem?
Regards,