Hi,
On 17 Sep 2014, at 06:11, shiva rkreddy
shiva.rkre...@gmail.com wrote:
Thanks Dan. Is there any preferred filesystem for the leveldb files?
I understand that the filesystem should be of the same type on both the /var
and SSD partitions.
Should it be ext4,
Are the results with the journal and data configured on the same SSD?
yes
Also, how are you configuring your journal device: is it a block device?
yes.
$ ceph-deploy osd create node:sdb
# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
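For reference, a journal on a raw block device can be set up by pointing ceph-deploy at a dedicated SSD partition. This is only a sketch: the host and device names (node1, /dev/sdb, /dev/sdc) and the journal size are placeholders to adapt to your hardware.

```shell
# Sketch: OSD data on /dev/sdb, journal on an SSD partition (/dev/sdc1).
# Host/device names and the 10 GiB journal size are placeholders.
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart journal 1MiB 10GiB
# ceph-deploy accepts host:data-disk:journal-device
ceph-deploy osd create node1:sdb:/dev/sdc1
```

This keeps the journal as a raw partition (no filesystem), which matches the "yes, a block device" answer above.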
Hi Cephalopods,
Browsing the list archives, I know this has come up before, but I thought
I'd check in for an update.
I'm in an environment where it would be useful to run a file system per
department in a single cluster (or at a pinch enforcing some client/fs
tree security). Has there been
Hi all,
Has anyone been successful in replicating data across two zones of a federated
gateway configuration? I am getting a TypeError: unhashable type: 'list'
error, and I am not seeing the data part getting replicated.
Verbose log:
application/json; charset=UTF-8
Wed, 17 Sep 2014 09:59:22 GMT
/admin/log
Hi David,
We haven't written any code for the multiple filesystems feature so
far, but the new fs new/fs rm/fs ls management commands were
designed with this in mind -- currently only supporting one
filesystem, but allowing the multiple filesystems feature to slot in
later without too much disruption.
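For context, the current single-filesystem workflow with those commands looks roughly like this; pool names and PG counts are examples, not prescriptions:

```shell
# Sketch of the fs management commands; pool names and PG counts are examples.
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data   # create the (single) filesystem
ceph fs ls                                       # list filesystems
ceph fs rm cephfs --yes-i-really-mean-it         # remove it (MDS must be stopped first)
```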
Thanks, I did check on that too as I'd seen this before and this was
the usual drill, but alas, no, that wasn't the problem. This cluster
is having other issues too, though, so I probably need to look into
those first.
Cheers,
Florian
On Mon, Sep 15, 2014 at 7:29 PM, Gregory Farnum
Thanks John - It did look like it was heading in that direction!
I did wonder if 'fs map' / 'fs unmap' would be useful too; filesystem
backups, migrations between clusters, and async DR could be facilitated by
moving the underlying pool objects around between clusters.
Dave
On Wed, Sep 17, 2014 at
Hi,
I have a ceph cluster running 0.80.1 on Ubuntu 14.04. I have 3 monitors
and 4 OSD nodes currently.
Everything has been running great up until today where I've got an issue
with the monitors.
I moved mon03 to a different switchport so it would have temporarily lost
connectivity.
Since then,
On Wed, Sep 17, 2014 at 1:58 PM, James Eckersall
james.eckers...@gmail.com wrote:
Hi,
I have a ceph cluster running 0.80.1 on Ubuntu 14.04. I have 3 monitors and
4 OSD nodes currently.
Everything has been running great up until today where I've got an issue
with the monitors.
I moved
Hi Craig,
just dug this up in the list archives.
On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis cle...@centraldesktop.com wrote:
In the interest of removing variables, I removed all snapshots on all pools,
then restarted all ceph daemons at the same time. This brought up osd.8 as
well.
So
Hi,
Thanks for the advice.
I feel pretty dumb as it does indeed look like a simple networking issue.
You know how you check things 5 times and miss the most obvious one...
J
On 17 September 2014 16:04, Florian Haas flor...@hastexo.com wrote:
On Wed, Sep 17, 2014 at 1:58 PM, James Eckersall
Hi Florian,
On 17 Sep 2014, at 17:09, Florian Haas flor...@hastexo.com wrote:
Hi Craig,
just dug this up in the list archives.
On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis cle...@centraldesktop.com
wrote:
In the interest of removing variables, I removed all snapshots on all pools,
On Wed, Sep 17, 2014 at 5:24 PM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
Hi Florian,
On 17 Sep 2014, at 17:09, Florian Haas flor...@hastexo.com wrote:
Hi Craig,
just dug this up in the list archives.
On Fri, Mar 28, 2014 at 2:04 AM, Craig Lewis cle...@centraldesktop.com
wrote:
Hi,
(Sorry for top-posting, I'm on mobile right now.)
That's exactly what I observe -- one sleep per PG. The problem is that the
sleep can't simply be moved since AFAICT the whole PG is locked for the
duration of the trimmer. So the options I proposed are to limit the number of
snaps trimmed per call to
On Wed, Sep 17, 2014 at 5:42 PM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
From: Florian Haas flor...@hastexo.com
Sent: Sep 17, 2014 5:33 PM
To: Dan Van Der Ster
Cc: Craig Lewis cle...@centraldesktop.com; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RGW hung, 2 OSDs using 100%
On Wed, Sep 17, 2014 at 5:21 PM, James Eckersall
james.eckers...@gmail.com wrote:
Hi,
Thanks for the advice.
I feel pretty dumb as it does indeed look like a simple networking issue.
You know how you check things 5 times and miss the most obvious one...
J
No worries at all. :)
Cheers,
Hi,
Now I feel dumb for jumping to the conclusion that it was a simple
networking issue - it isn't.
I've just checked connectivity properly, and I can ping and telnet to port
6789 from every mon server to every other mon server.
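A quick way to script that check from each monitor host (the hostnames are placeholders; 6789 is the default monitor port):

```shell
# Verify each mon host answers on the monitor port.
# mon01/mon02/mon03 are placeholder hostnames.
for host in mon01 mon02 mon03; do
  if nc -z -w 2 "$host" 6789; then
    echo "$host: mon port reachable"
  else
    echo "$host: mon port UNREACHABLE"
  fi
done
```

Running this from every monitor catches asymmetric problems (e.g. a firewall rule or the switchport move above) that a single-direction ping would miss.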
I've just restarted the mon03 service and the log is showing the following:
That looks like the beginning of an mds creation to me. What's your
problem in more detail, and what's the output of ceph -s?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Sep 15, 2014 at 5:34 PM, Shun-Fa Yang shu...@gmail.com wrote:
Hi all,
I've installed ceph v
Hey everyone! We just posted the agenda for next week’s Ceph Day in San Jose:
http://ceph.com/cephdays/san-jose/
This Ceph Day will be held in a beautiful facility provided by our friends at
Brocade. We have a lot of great speakers from Brocade, Red Hat, Dell, Fujitsu,
HGST, and Supermicro,
Hi,
Any suggestions?
Regards,
Subhadip
---
On Wed, Sep 17, 2014 at 9:05 AM, Subhadip Bagui i.ba...@gmail.com wrote:
Hi
I'm getting the below error while installing ceph in admin
I am trying to use Ceph as a data store with OpenNebula 4.6 and
have followed the instructions in OpenNebula's documentation
at
http://docs.opennebula.org/4.8/administration/storage/ceph_ds.html
and compared them against the Using libvirt with Ceph guide at
http://ceph.com/docs/master/rbd/libvirt/
We
Subhadip,
I updated the master branch of the preflight docs here:
http://ceph.com/docs/master/start/ We did encounter some issues that
were resolved with those preflight steps.
I think it might be either requiretty or SELinux. I will keep you
posted. Let me know if it helps.
On Wed, Sep 17,
Does radosgw-admin have authentication keys available, with
appropriate permissions?
http://ceph.com/docs/master/radosgw/config/#create-a-user-and-keyring
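Following that doc page, the keyring setup would look roughly like this; the instance name client.radosgw.gateway, the keyring path, and the uid are assumptions to adapt to your deployment:

```shell
# Sketch: create a gateway key and run radosgw-admin as that identity.
# Instance name, keyring path, and uid are assumptions.
ceph auth get-or-create client.radosgw.gateway \
    osd 'allow rwx' mon 'allow rw' \
    -o /etc/ceph/ceph.client.radosgw.keyring
radosgw-admin user info --uid=someuser -n client.radosgw.gateway
```

If the key is missing or lacks those caps, radosgw-admin commands tend to fail with ENOENT-style errors like the one reported further down this thread.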
On Fri, Sep 12, 2014 at 3:13 AM, Santhosh Fernandes
santhosh.fernan...@gmail.com wrote:
Hi,
Can anyone help me figure out why my radosgw-admin pool
Hi,
From the ones we managed to configure in our lab here, I noticed that using
the raw image format instead of qcow2 worked for us.
Regards,
Luke
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steven
Timm
Sent: Thursday, 18 September, 2014
Hi John,
When I specify the name, I get this error:
#radosgw-admin pools list -n client.radosgw.in-west-1
could not list placement set: (2) No such file or directory
Regards,
Santhosh
On Thu, Sep 18, 2014 at 3:44 AM, John Wilkins john.wilk...@inktank.com
wrote:
Does radosgw-admin have
Dear list,
My ceph cluster worked for about two weeks, but the MDS crashed every 2-3 days.
Now it is stuck on replay; it looks like replay crashes and the mds process restarts again.
What can I do about this?
# ceph -s
cluster 07df7765-c2e7-44de-9bb3-0b13f6517b18
health HEALTH_ERR 56 pgs inconsistent; 56