On Feb 23, 2012, Sage Weil wrote:
> On Tue, 21 Feb 2012, Alexandre Oliva wrote:
>> This was supposed to fix bug 1946, and likely bug 1849 too, but it looks
>> like something's still missing for a complete fix. fuse-unmounting
>> between touching a dir and creating a snapshot seems to help get co
On Fri, Feb 24, 2012 at 00:58, madhusudhana wrote:
> 1. In my cluster, all OSDs are mkfs'ed with btrfs.
> 2. Below is what I can see in the ceph -s output. Does that mean only one
> MDS is operational and the other one is standby?
> mds e5: 1/1/1 up {0=ceph-node-1=up:active}, 1 up:standby
Yes.
I created ticket http://tracker.newdream.net/issues/2100 for this.
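(A quick way to check just this, as a sketch; the output line is the mds
line from the status above: one rank, the daemon on ceph-node-1 active,
and one spare daemon in standby.)

    $ ceph mds stat
    e5: 1/1/1 up {0=ceph-node-1=up:active}, 1 up:standby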
On Fri, Feb 24, 2012 at 10:31, Tommi Virtanen wrote:
> On Fri, Feb 24, 2012 at 07:38, Jim Schutt wrote:
>> I've finally figured out what is going on with this behaviour.
>> Memory usage was on the right track.
>>
>> It turns out
On Fri, Feb 24, 2012 at 07:38, Jim Schutt wrote:
> I've finally figured out what is going on with this behaviour.
> Memory usage was on the right track.
>
> It turns out to be an unfortunate interaction between the
> number of OSDs/server, number of clients, TCP socket buffer
> autotuning, the pol
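(For anyone hitting something similar: one mitigation is to clamp the
messenger's TCP receive buffer rather than letting kernel autotuning grow
it per-socket. A sketch of the ceph.conf knob, assuming I have the option
name right; the value is illustrative.)

    [global]
        ; cap SO_RCVBUF on messenger sockets; 0 (the default)
        ; leaves kernel autotuning in charge
        ms tcp rcvbuf = 262144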
On Fri, Feb 24, 2012 at 10:34 AM, Martin Mailand wrote:
> Hi John,
> I tried them a few weeks ago; they were developed for crowbar version 1.1
> and don't seem to work with 1.2. If I try to create a proposal, the next
> page is white and an error is logged.
> The barclamp installs one ceph-mon node, and several ceph-store nodes.
On Feb 24, 2012, at 3:33 AM, "Дениска-редиска" wrote:
> Running a cluster of 3 nodes:
>
> lv-test-2 ~ # ceph -s
> 2012-02-24 13:10:35.481248 pg v726: 594 pgs: 594 active+clean; 120 MB
> data, 683 MB used, 35448 MB / 37967 MB avail
> 2012-02-24 13:10:35.484463 mds e177: 3/3/3 up
> {0=shark1=up:active,1=lv-test-1=up:active,2=lv-test-2=up:active}
Hi John,
I tried them a few weeks ago; they were developed for crowbar version 1.1
and don't seem to work with 1.2. If I try to create a proposal, the
next page is white and an error is logged.
The barclamp installs one ceph-mon node, and several ceph-store nodes.
The glue to connect your virtua
On 02/02/2012 10:52 AM, Gregory Farnum wrote:
> On Thu, Feb 2, 2012 at 7:29 AM, Jim Schutt wrote:
>> I'm currently running 24 OSDs/server, one 1TB 7200 RPM SAS drive
>> per OSD. During a test I watch both OSD servers with both
>> vmstat and iostat.
>> During a "good" period, vmstat says the server is susta
On 2012. February 23. 10:43:02 Tommi Virtanen wrote:
> 2012/2/13 Székelyi Szabolcs:
> >> Okay, that sounds like a bug then. The two interesting things would
> >> be a ceph-fuse log (--debug-client 10 --debug-ms 1 --log-file
> >> /path/to/log) and an mds log (debug mds = 20, debug ms = 1 in [mds]
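(Putting those flags together, as a sketch; /mnt/ceph is a placeholder
mount point and /path/to/log is the placeholder from the message above.)

    # client side
    ceph-fuse --debug-client 10 --debug-ms 1 --log-file /path/to/log /mnt/ceph

    # mds side, in ceph.conf
    [mds]
        debug mds = 20
        debug ms = 1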
Running a cluster of 3 nodes:
lv-test-2 ~ # ceph -s
2012-02-24 13:10:35.481248 pg v726: 594 pgs: 594 active+clean; 120 MB data,
683 MB used, 35448 MB / 37967 MB avail
2012-02-24 13:10:35.484463 mds e177: 3/3/3 up
{0=shark1=up:active,1=lv-test-1=up:active,2=lv-test-2=up:active}
201
Tommi Virtanen writes:
>
> On Wed, Feb 22, 2012 at 23:12, madhusudhana wrote:
> > 1. Can you please let me know how I can make only 1 MDS active?
>
> You can see that in "ceph -s" output, the "mds" line should have just
> one entry like "0=a=up:active" with the wor
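(For others searching the archive: the number of active MDSs is driven by
the mds cluster's max_mds setting. A sketch, with the exact subcommand an
assumption from memory of the era's CLI; already-active ranks may also need
to be stopped before the count actually drops.)

    ceph mds set_max_mds 1    # target one active MDS (subcommand name assumed)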