Hi Allen,
> -Original Message-
> From: Allen Samuels [mailto:allen.samu...@sandisk.com]
> Sent: Thursday, July 23, 2015 2:41 AM
> To: Sage Weil; Wang, Zhiqiang
> Cc: sj...@redhat.com; ceph-devel@vger.kernel.org
> Subject: RE: The design of the eviction improvement
>
> I'm very concerned about designing around the assumption that objects are ~1MB in size.
Hi Tom,
Have you tried cd src; make rados?
Regards,
Igor.
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Deneau, Tom
Sent: Wednesday, July 22, 2015 10:13 PM
To: ceph-devel
Subject: building just src/tools/rados
Is there a make command that would build just the src/tools or even just src/tools/rados?
> -Original Message-
> From: Sage Weil [mailto:sw...@redhat.com]
> Sent: Thursday, July 23, 2015 2:51 AM
> To: Allen Samuels
> Cc: Wang, Zhiqiang; sj...@redhat.com; ceph-devel@vger.kernel.org
> Subject: RE: The design of the eviction improvement
>
> On Wed, 22 Jul 2015, Allen Samuels wrote:
no special
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=./ceph-int/src/.libs/libfio_ceph_objectstore.so
invalidate=0 # mandatory
rw=write
#bs=4k
[filestore]
iodepth=1
# create a journaled filestore
objectstore=filestore
director
Yes the cost of the insertions with the current scheme is probably prohibitive.
Wouldn't it approach the same amount of time as just having atime turned on in
the file system?
My concern about the memory is mostly that we ensure whatever algorithm is
selected degrades gracefully when you get h
On Wed, 22 Jul 2015, Allen Samuels wrote:
> Don't we need to double-index the data structure?
>
> We need it indexed by atime for the purposes of eviction, but we need it
> indexed by object name for the purposes of updating the list upon a
> usage.
If you use the same approach the agent uses n
Hi Haomai,
Sorry for the late response, I was out of the office. I'm afraid I haven't run
into that segfault. The io_ops should be set at the very beginning when it
calls get_ioengine(). All I can suggest is that you verify that your job file
is pointing to the correct fio_ceph_objectstore.so.
Is there a make command that would build just the src/tools or even just
src/tools/rados?
-- Tom Deneau
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.ht
Don't we need to double-index the data structure?
We need it indexed by atime for the purposes of eviction, but we need it
indexed by object name for the purposes of updating the list upon a usage.
Allen Samuels
Software Architect, Systems and Software Solutions
2880 Junction Avenue, San Jo
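The double index described above can be sketched as a pair of containers kept in sync: an atime-ordered map that drives eviction, and a name-to-iterator map so an access can relocate an entry without a scan. This is only a minimal illustration of the idea under discussion, not Ceph code; the class and method names (AtimeIndex, touch, evict_oldest) are invented for the example.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <unordered_map>

// Hypothetical sketch of a double-indexed eviction list:
//  - by_atime orders objects by access time (oldest first) for eviction
//  - by_name maps an object name to its by_atime entry for O(log n) update
class AtimeIndex {
  // atime -> object name; multimap because atimes can collide
  std::multimap<uint64_t, std::string> by_atime;
  // object name -> iterator into by_atime, so a usage can move the entry
  std::unordered_map<std::string,
                     std::multimap<uint64_t, std::string>::iterator> by_name;

public:
  // Record a usage: drop any stale atime entry, insert the new one,
  // and re-point the name index at it.
  void touch(const std::string& name, uint64_t atime) {
    auto it = by_name.find(name);
    if (it != by_name.end())
      by_atime.erase(it->second);
    auto pos = by_atime.emplace(atime, name);
    by_name[name] = pos;
  }

  // Evict the coldest object, returning its name (empty if none).
  std::string evict_oldest() {
    if (by_atime.empty())
      return "";
    auto pos = by_atime.begin();
    std::string name = pos->second;
    by_name.erase(name);
    by_atime.erase(pos);
    return name;
  }

  size_t size() const { return by_name.size(); }
};
```

Both indexes point at the same entries, so an update on usage is two map operations rather than a walk of the whole list; the memory cost is the second index, which is the trade-off being debated in the thread.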
On Wed, 22 Jul 2015, Allen Samuels wrote:
> I'm very concerned about designing around the assumption that objects
> are ~1MB in size. That's probably a good assumption for block and HDFS
> dominated systems, but likely a very poor assumption about many object
> and file dominated systems.
>
> I
Hi,
- "Allen Samuels" wrote:
> I'm very concerned about designing around the assumption that objects
> are ~1MB in size. That's probably a good assumption for block and HDFS
> dominated systems, but likely a very poor assumption about many object
> and file dominated systems.
++
>
> If I
I'm very concerned about designing around the assumption that objects are ~1MB
in size. That's probably a good assumption for block and HDFS dominated
systems, but likely a very poor assumption about many object and file dominated
systems.
If I understand the proposals that have been discussed,
On 07/19/2015 05:28 AM, Loic Dachary wrote:
> I think it achieves the same thing and is less error prone in the case of
> backports. The risk is that upgrading from v0.94.2-34 to the version with
> this change will fail because the conditions are satisfied (it thinks all
> versions after v0.94
8AM PST as usual! Topics today include a new ceph_test_rados benchmark
being added to CBT. Please feel free to add your own!
Here are the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.c
On Wed, 22 Jul 2015, Wido den Hollander wrote:
> Hi,
>
> I was just testing with a cluster on VMs and I noticed that
> undersized+degraded+peering PGs do not trigger a HEALTH_ERR state. Why
> is that?
>
> In my opinion any PG which is not active+? should trigger a HEALTH_ERR
> state since I/O is blocking at that point.
On Wed, 22 Jul 2015, Wang, Zhiqiang wrote:
> > The part that worries me now is the speed with which we can load and
> > manage such a list. Assuming it is several hundred MB, it'll take a
> > while to load that into memory and set up all the pointers (assuming a
> > conventional linked list structure)
I'll definitely take a look at make-debs.sh, looks promising. Thanks for the
hint.
I can see it's using ccache; let's see how fast it is :) What build times are
you experiencing?
On Wed, 22 Jul 2015 08:04:44 +
"Zhou, Yuan" wrote:
> I'm also using make-debs.sh to generate the binaries for
On 2015-07-22 09:03, Stefan Priebe - Profihost AG wrote:
> That would be really important. I've seen that this one was already in
> upstream/firefly-backports. What's the purpose of that branch?
That is where the Stable Releases and Backports team stages backports
and does integration testing on t
Hi,
I was just testing with a cluster on VMs and I noticed that
undersized+degraded+peering PGs do not trigger a HEALTH_ERR state. Why
is that?
In my opinion any PG which is not active+? should trigger a HEALTH_ERR
state since I/O is blocking at that point.
Is that a sane thing to do or am I mis
I'm also using make-debs.sh to generate the binaries for some local deployment.
Note that if you need the *tests.deb you'll need to change the script a bit.
@@ -58,8 +58,8 @@ tar -C $releasedir -zxf $releasedir/ceph_$vers.orig.tar.gz
#
cp -a debian $releasedir/ceph-$vers/debian
cd $releasedir
Am 21.07.2015 um 22:50 schrieb Josh Durgin:
> Yes, I'm afraid it sounds like it is. You can double check whether the
> watch exists on an image by getting the id of the image from 'rbd info
> $pool/$image | grep block_name_prefix':
>
> block_name_prefix: rbd_data.105674b0dc51
>
> The id is t