Hi,
On 11/09/2015 02:07 PM, Burkhard Linke wrote:
Hi,
*snipsnap*
Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use
ceph-fuse with patches for improved page cache handling, but the
problem also occurs with the official Hammer packages from
download.ceph.com
I've tested
On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke
wrote:
> Hi,
>
> On 11/09/2015 02:07 PM, Burkhard Linke wrote:
>>
>> Hi,
>
> *snipsnap*
>
>>
>>
>> Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use
>> ceph-fuse with patches for
Hi,
On 11/09/2015 11:49 AM, Ilya Dryomov wrote:
*snipsnap*
You can install an ubuntu kernel from a newer ubuntu release, or pretty
much any mainline kernel from kernel-ppa.
Ubuntu Trusty has backported kernels from newer releases, e.g.
linux-generic-lts-vivid. By using these packages you will
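For reference, pulling in such a backported kernel on Trusty is a single package install (a sketch; reboot afterwards to actually run the new kernel):

sudo apt-get update
sudo apt-get install linux-generic-lts-vivid   # Vivid HWE kernel (3.19 series)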
Since the RBD Python bindings use the C++ librbd interface and the librbd
rollback interface requires you to provide an instance of class
ProgressContext, you could add a new class to the API which derives from
ProgressContext and bridges the interface to a simple C-style function
callback.
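For comparison, the rbd CLI already drives rollback through its own ProgressContext, which is roughly the behaviour such a bridged callback would expose to Python. A quick way to see it (pool, image and snapshot names below are placeholders):

rbd snap rollback rbd/myimage@mysnap                 # prints percentage progress during the rollback
rbd snap rollback --no-progress rbd/myimage@mysnap   # same operation with progress output suppressed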
Hi,
I'm currently investigating a lockup problem involving CephFS and SQLite
databases. Applications lock up if the same database is accessed from
multiple hosts.
I was able to narrow the problem down to two host:
host A:
sqlite3
.schema
host B:
sqlite3
.schema
If both .schema commands
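A minimal way to reproduce and inspect this, assuming a shared CephFS mount (the database path /cephfs/shared/test.db is only a placeholder):

# host A: open the database and keep the session running
sqlite3 /cephfs/shared/test.db
.schema
# host B: run the same command; if it hangs, watching the locking calls helps pin it down
strace -f -e trace=fcntl,flock sqlite3 /cephfs/shared/test.db '.schema'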
Hi Timofey,
With Nick's, Jan's, RedHat's and others' help we have a stable and, in my
best judgement, well performing system using SCST as the iSCSI delivery
framework. SCST allows the use of Linux page cache when utilizing the
vdisk_fileio backend. LIO should be able to do this too using FILEIO
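For the LIO side, the rough equivalent is a fileio backstore created with write_back enabled (a sketch only; the backstore name, image path and size are placeholders, and the RBD image has to be presented as a file or mapped device first):

targetcli /backstores/fileio create name=rbd_disk01 file_or_dev=/mnt/rbd/disk01.img size=100G write_back=true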
Great, thanks Alex, you give me hope. I'll try SCST later in the
configuration you suggest.
2015-11-09 16:25 GMT+03:00 Alex Gorbachev :
> Hi Timofey,
>
> With Nick's, Jan's, RedHat's and others' help we have a stable and, in my
> best judgement, well performing system
This is currently not a possibility, but there is active research into
providing improved/persistent client-side caching for RBD use-cases. In the
meantime, as an alternative, you can expose a portion of your SSD to each VM
that needs higher IOPS and apply dm-cache / bcache on the RBD and SSD
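A rough sketch of the bcache variant inside a guest, assuming the RBD-backed disk appears as /dev/vdb and the passed-through SSD slice as /dev/vdc (both device names are assumptions):

apt-get install bcache-tools
make-bcache -B /dev/vdb                            # backing device: the RBD volume
make-bcache -C /dev/vdc                            # cache device: the local SSD slice
bcache-super-show /dev/vdc | grep cset.uuid        # note the cache set UUID
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode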
Hi,
I am currently operating a multi-node Ceph cluster with the "Hammer"
release under CentOS 7 with writeback cache tiering on SSDs as described
here:
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
It is not a known problem. Mind filing a ticket @
http://tracker.ceph.com/ so we can track the fix for this?
On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de wrote:
>
>
> Dear Ceph-users,
>
> I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install
>
Hi Greg,
I’ve tested the patch below on top of the 0.94.5 hammer sources, and it
works beautifully. No more active+clean+replay stuck PGs.
Thanks!
Andras
On 10/27/15, 4:46 PM, "Andras Pataki" wrote:
>Yes, this definitely sounds plausible (the
Hi Jason,
What is the worst case if I made a cache pool from local SSDs owned by all
compute nodes? Does using a block cache inside the VM have compatibility issues with
live migration?
Best regards,
On Nov 9, 2015 9:04 PM, "Jason Dillaman" wrote:
> This is currently not a
On Mon, Nov 9, 2015 at 9:42 AM, Deneau, Tom wrote:
> I don't have much experience with crush rules but wanted one that does the
> following:
>
> On a 3-node cluster, I wanted a rule where I could have an erasure-coded pool
> of k=3,m=2
> and where the first 3 chunks (the
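The straightforward part of that (an erasure-coded pool with k=3, m=2) can be expressed with a Hammer-style profile; the profile and pool names below are placeholders, and the chunk-placement constraint described above would still need a hand-written CRUSH rule on top:

ceph osd erasure-code-profile set ec32 k=3 m=2 ruleset-failure-domain=osd
ceph osd pool create ecpool 128 128 erasure ec32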
Hi all,
I cannot find ceph-deploy in the Debian repositories. I have these in my
sources:
deb http://ceph.com/debian-hammer/ jessie main
# ceph-deploy not yet in jessie repo
deb http://ceph.com/debian-hammer wheezy main
I also see ceph-deploy in the repo.
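A quick way to check which of the configured entries actually provides the package:

apt-get update
apt-cache policy ceph-deploy   # shows the candidate version and the repository it comes from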
If your goal is to localize reads and writes to the same node as a given VM (at
least that's how I read your intent), creating a cache tier across your
hypervisor hosts will not achieve that result since you can expect your data to
be distributed across the pool as directed by the CRUSH map.
On 09-11-15 16:25, Gregory Farnum wrote:
> The daemons print this in their debug logs on every boot. (There might
> be a minimum debug level required, but I think it's at 0!)
> -Greg
True, but in this case all logs were lost. I had no boot/OS disks available.
I got a fresh install of an OS with
Upgrading the cluster to Ceph version 0.94.5 seems to have resolved the
problem. TEMP data is now only a small fraction of the overall usage.
On 09.11.2015 14:18, Jan Siersch wrote:
> Hi,
>
> I am currently operating a multi-node Ceph cluster with the "Hammer"
> release under CentOS 7 with
Hi,
On 11/09/2015 04:03 PM, Gregory Farnum wrote:
On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke
wrote:
Hi,
On 11/09/2015 02:07 PM, Burkhard Linke wrote:
Hi,
*snipsnap*
Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use
Hi,
I would like to invite you to our next MeetUp in Berlin on November 23:
http://www.meetup.com/de/Ceph-Berlin/events/222906642/
Marcel Wallschläger will talk about Ceph in a research environment.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
On 11/07/2015 09:44 AM, Oliver Dzombic wrote:
> setting inode64 in osd_mount_options_xfs might help a little.
sorry, inode64 is the default mount option with xfs.
Björn
The daemons print this in their debug logs on every boot. (There might
be a minimum debug level required, but I think it's at 0!)
-Greg
On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander wrote:
> Hi,
>
> Recently I got my hands on a Ceph cluster which was pretty damaged due
> to a
Hello Ceph Geeks
Need your comments on my understanding of straw2.
- Is straw2 better than straw?
- Is straw2 recommended for production usage?
I have a production Ceph Firefly cluster that I am going to upgrade to
Ceph Hammer pretty soon. Should I use straw2 for all my ceph
Hi,
Recently I got my hands on a Ceph cluster which was pretty damaged due
to a human error.
I had no ceph.conf nor did I have any original Operating System data.
With just the MON/OSD data I had to rebuild the cluster by manually
re-writing the ceph.conf and installing Ceph.
The problem was,
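For anyone in the same spot, most of the identity needed for a minimal ceph.conf can be pulled back out of the surviving daemon data (a sketch, assuming default data paths; 'mon01' is a placeholder mon id):

cat /var/lib/ceph/osd/ceph-0/ceph_fsid            # cluster fsid stored alongside every OSD
ceph-mon -i mon01 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap                    # mon names and addresses for the rebuilt ceph.conf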
Dear Ceph-users,
I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install
--release infernalis host1 host2 ..." fails with:
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh
--replacepkgs
Hello,
This is the second time I have experienced this, so I thought I would post to get some
perspective. When this first happened, I suspected the kernel, and upgraded
from 3.18.22 to 3.18.23.
Scenario:
- lab scenario
- single osd host — osdhost01. Supermicro X8DTE-F - 2x X5570 + 48G RAM + 20x
On 11/09/2015 05:27 PM, Vickey Singh wrote:
> Hello Ceph Geeks
>
> Need your comments with my understanding on straw2.
>
>- Is Straw2 better than straw ?
It is not per se better than straw(1).
straw2 distributes data better when not all OSDs are equally sized/weighted.
>- Is it straw2
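If you do decide to switch, the usual route is to decompile the CRUSH map, change the bucket algorithm and re-inject it (a sketch only; every daemon and kernel client must understand straw2 first, expect some data movement, and the filenames are placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
sed -i 's/alg straw$/alg straw2/' crushmap.txt    # switch each bucket from straw to straw2
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new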
It seems that the .log pool increases in size as Ceph runs over time. I've
been using 20 placement groups (PGs) for the .log pool. Now it complains that
"HEALTH_WARN pool .log has too few pgs". I don't have a good understanding of
when Ceph will remove the old log entries by itself. I saw some
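If the warning is purely about the PG count, raising pg_num/pgp_num on the pool clears it (the value 64 below is only an example):

ceph osd pool set .log pg_num 64
ceph osd pool set .log pgp_num 64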
Alex, are you using ESXi?
If yes, do you use the iSCSI software adapter?
If yes, do you use active/passive, fixed, or Round Robin MPIO?
Do you tune anything on the initiator side?
If possible, can you give more details? Please
2015-11-09 17:41 GMT+03:00 Timofey Titovets :
> Great thanks, Alex,
On Mon, Nov 9, 2015 at 10:44 AM, Bogdan SOLGA wrote:
> Hello Adam!
>
> Thank you very much for your advice, I will try setting the tunables to
> 'firefly'.
Won't work. The OS Recommendations page clearly states that firefly
tunables are supported starting with kernel 3.15. 3.13,
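Before changing anything it is worth checking what the cluster currently requires (inspection only; actually setting the firefly profile locks out kernel clients older than 3.15, as noted above):

ceph osd crush show-tunables
# only once every kernel client is >= 3.15:
ceph osd crush tunables firefly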
On Tue, Nov 10, 2015 at 7:34 AM, Ken Dreyer wrote:
> It is not a known problem. Mind filing a ticket @
> http://tracker.ceph.com/ so we can track the fix for this?
>
> On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de wrote:
>>
>>
>> Dear Ceph-users,
>>
Hello Adam!
Thank you very much for your advice, I will try setting the tunables to
'firefly'.
As there seem to be a few features which would require the 4.1 kernel... is
there any 'advised' Linux distribution on which Ceph is known to work best?
According to this
Oops, sorry Dan, I meant to send my message to the list.
Sorry.
> On Mon, Nov 9, 2015 at 11:55 AM, Francois Lafont
>>
>> 1. Ok, so, the ranks of my monitors are 0, 1, 2 but their IDs are 1, 2, 3
>> (IDs chosen automatically because the hosts are called ceph01, ceph02 and
>> ceph03 and these IDs
Hello,
We've got two problems trying to update our cluster to infernalis-
ceph-deploy install --release infernalis neb-kvm00
[neb-kvm00][INFO ] Running command: sudo rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[neb-kvm00][INFO ] Running command: sudo rpm -Uvh
Hello,
I filed a new ticket:
http://tracker.ceph.com/issues/13739
Regards,
Kevin
[ceph-users] Problem with infernalis el7 package (10-Nov-2015 1:57)
From: Bob R
To: ceph-users@lists.ceph.com
Hello,
We've got two problems trying to update our cluster to infernalis-
ceph-deploy install