On 06/10/2015 07:15 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
OK, easy question...
Building Debian packages from git is wonderfully easy; RPMs seem
not so easy.
I got it to kind of work, but I feel like I'm doing it the Hard Way (tm).
mkdir -p ~/ceph
On 06/10/2015 01:06 AM, Ken Dreyer wrote:
On 06/09/2015 11:19 AM, Owen Synge wrote:
we can remove many hard-coded values and replace them with variables, and that
number will probably only grow. For example:
%if 0%{?rhel} || 0%{?fedora}
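(To make the idea concrete, a hypothetical spec fragment along those lines; the
macro name, paths and file names below are placeholders, not taken from the real
ceph.spec: a hard-coded udev path replaced by a distro-conditional macro.)

    %if 0%{?rhel} || 0%{?fedora}
    %global _udevrulesdir /usr/lib/udev/rules.d
    %else
    %global _udevrulesdir /lib/udev/rules.d
    %endif

    install -m 0644 udev/95-ceph-osd.rules %{buildroot}%{_udevrulesdir}/95-ceph-osd.rules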
Hi,
I was comparing the responses of S3 and Ceph RGW when we try to create
a bucket that already exists for the same account.
S3 (non-default region) throws an error with:
HTTP response code : 409 Conflict
error code : BucketAlreadyOwnedByYou
but Ceph, on the other hand, gives a 200 OK while
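(A minimal sketch of the comparison, assuming boto3 and a made-up RGW endpoint;
the bucket name and credentials are placeholders.)

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3',
                      endpoint_url='http://rgw.example.com:7480',  # RGW; omit for AWS S3
                      aws_access_key_id='ACCESS',
                      aws_secret_access_key='SECRET')

    s3.create_bucket(Bucket='demo-bucket')        # first create succeeds
    try:
        s3.create_bucket(Bucket='demo-bucket')    # create the same bucket again
    except ClientError as e:
        # AWS S3 (non-default region): 'BucketAlreadyOwnedByYou', HTTP 409
        print(e.response['Error']['Code'])
    else:
        # Ceph RGW today: no exception, the call returns 200 OK
        print('200 OK')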
8AM PST as usual! Discussion topics for this week include:
- Throttling cache tier promotion.
- SimpleMessenger fastpath.
Please feel free to add your own!
Here are the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join
modinfo libceph prints the module description "Ceph filesystem for Linux",
which is the same as that of the real fs module, ceph. It's confusing.
Signed-off-by: Hong Zhiguo zhiguoh...@tencent.com
---
net/ceph/ceph_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/ceph_common.c
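(For context, a guess at what the one-line change looks like; the replacement
string below is an assumption, not quoted from the actual patch.)

    --- a/net/ceph/ceph_common.c
    +++ b/net/ceph/ceph_common.c
    @@
    -MODULE_DESCRIPTION("Ceph filesystem for Linux");
    +MODULE_DESCRIPTION("Ceph core library");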
I need to try out the performance on qemu soon and may come back to you if I
need some qemu setting trick :-)
Sure no problem.
(BTW, I can reach around 200k iops in 1 qemu VM with 5 virtio disks, with 1
iothread per disk)
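(A hypothetical qemu command-line fragment for the one-iothread-per-virtio-disk
setup described above; pool/image names and device ids are placeholders.)

    -object iothread,id=iothread0 \
    -drive file=rbd:rbd/vm-disk0,format=raw,if=none,id=drive0,cache=none \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0 \
    -object iothread,id=iothread1 \
    -drive file=rbd:rbd/vm-disk1,format=raw,if=none,id=drive1,cache=none \
    -device virtio-blk-pci,drive=drive1,iothread=iothread1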
----- Original Message -----
From: Somnath Roy somnath@sandisk.com
To:
Hi,
We are trying to build Ceph on RHEL 7.1, but we are facing some issues building
the Giant branch.
We enabled the Red Hat Server RPM and Red Hat Ceph Storage RPM channels along with
the optional, extras and supplementary channels, but we are not able to find the
gperftools, leveldb and yasm RPMs in those channels.
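(Not from the thread, just a guess at the usual cause: on RHEL 7 those three
packages normally come from EPEL rather than the Red Hat channels, so something
like the following may help.)

    yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum install gperftools-devel leveldb-devel yasm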
Hey all,
Given that there was some confusion and difficulty surrounding
Blueprint submission, I'm going to leave the submission window open
until the end of the week. Please get your blueprints added to the new
tracker wiki by then so I can build a schedule and release it next
week. Many requests
- Original Message -
From: Varada Kari varada.k...@sandisk.com
To: ceph-devel ceph-devel@vger.kernel.org
Cc: ceph-users ceph-us...@ceph.com
Sent: Wednesday, June 10, 2015 3:33:08 AM
Subject: [ceph-users] CEPH on RHEL 7.1
Hi,
We are trying to build Ceph on RHEL 7.1, but we are facing
Here's how I do it.
1. Git clone
2. ./do_autogen.sh
3. ./configure --without-radosgw --without-fuse --without-tcmalloc
--without-libatomic-ops --without-libxfs
4. # The configure step above creates a ceph.spec with the proper version
number, which you can then copy:
cp ceph.spec
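(A sketch of how the remaining steps might look, assuming the stock ~/rpmbuild
layout; the dist target and paths are assumptions, not part of the original
message.)

    make dist                            # build the ceph-<version> tarball
    mkdir -p ~/rpmbuild/{SOURCES,SPECS}
    cp ceph-*.tar.* ~/rpmbuild/SOURCES/
    cp ceph.spec ~/rpmbuild/SPECS/
    rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec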
Sorry, I missed the __sock_create hunk during my merge from our 3.13
kernel tree.
I've now taken it and tested rbd map/write/read/unmap in a Docker container again.
I'll post the updated patch.
On Wed, Jun 10, 2015 at 9:30 PM, Ilya Dryomov idryo...@gmail.com wrote:
On Wed, Jun 10, 2015 at 4:01
On Wed, Jun 10, 2015 at 4:13 PM, Hong Zhiguo honk...@gmail.com wrote:
modinfo libceph prints the module description "Ceph filesystem for Linux",
which is the same as that of the real fs module, ceph. It's confusing.
Signed-off-by: Hong Zhiguo zhiguoh...@tencent.com
---
net/ceph/ceph_common.c | 2 +-
1 file
Hey all,
Jaime from the OpenNebula team has offered up a speaking slot for Ceph
at the upcoming event on 29 June (short notice) in Boston. If anyone
is interested in giving a Ceph talk please let me know ASAP and I can
help get you set up. Thanks.
--
Best Regards,
Patrick McGarry
Director
In the current implementation, init_net is always used.
But in most cases, if a user does an rbd map or ceph mount in
a container, it's expected to use the container's network namespace.
This patch saves the container's netns in ceph_options on an rbd map
or ceph mount, and uses that netns rather than
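(A rough sketch of the idea only, not the actual patch; the field name "netns"
and the exact call sites are assumptions.)

    /* in ceph_parse_options(): remember the caller's netns */
    opt->netns = get_net(current->nsproxy->net_ns);

    /* in the messenger, instead of implicitly using init_net: */
    ret = __sock_create(opt->netns, con->peer_addr.in_addr.ss_family,
                        SOCK_STREAM, IPPROTO_TCP, &sock, 1);

    /* in ceph_destroy_options(): drop the reference */
    put_net(opt->netns);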
On Tue, Jun 9, 2015 at 7:52 PM, Shishir Gowda shishir.go...@sandisk.com wrote:
Hi All,
We have uploaded the blueprint for the enhancements we are proposing for Ceph
tiering functionality for the Jewel release at
http://tracker.ceph.com/projects/ceph/wiki/Tiering-enhacement
Soliciting
Hi Alexandre,
Thanks for sharing the data.
I need to try out the performance on qemu soon and may come back to you if I
need some qemu setting trick :-)
Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alexandre DERUMIER
Sent:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Brad,
I've been able to do this just fine. I'm looking for a way to build
right out of a git branch where the tarball isn't on the downloads
page.
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jun 10, 2015 at 8:30 AM, Ken Dreyer kdre...@redhat.com wrote:
Here's how I do it.
1. Git clone
2. ./do_autogen.sh
3. ./configure --without-radosgw --without-fuse --without-tcmalloc
--without-libatomic-ops --without-libxfs
4. # The configure step above creates a ceph.spec with
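(One way to produce a tarball straight from a git branch, if that's the missing
piece; the branch name and version string are placeholders.)

    git archive --format=tar --prefix=ceph-9.0.1/ my-branch | bzip2 > ceph-9.0.1.tar.bz2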
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
They are using the rules file in debian/. Debian doesn't have the
requirement to be built in a specific location (i.e.
rpmbuild/{BUILD,BUILDROOT,etc.}), so you can build a Debian package right
out of any directory. In fact, most of the time I untar (or
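(For what it's worth, rpmbuild's fixed layout can at least be pointed at an
arbitrary directory by overriding _topdir; paths here are placeholders.)

    rpmbuild -ba --define "_topdir $(pwd)/rpmbuild" rpmbuild/SPECS/ceph.spec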
IIRC we return 409 in case you're trying to recreate the bucket in a different
region. I don't see why we should return it if the user tries to create it in
the same region it already exists in. Amazon does not return 409 if a bucket is
recreated in their main region (where it already exists), so I'm
Hi All,
A couple of folks have asked for a recording of the performance meeting
this week as there was an excellent discussion today regarding
simplemessenger optimization with Sage.
Here's a link to the recording: https://bluejeans.com/s/8knV/
You can access this recording and all previous
On 06/10/2015 09:44 AM, Robert LeBlanc wrote:
This is really helpful. I was able to figure out what was needed in a
tarball manually, but this is what I was really looking for to create
the tarball.
Is there any way to skip this step, like in the deb build, for really
fast builds? I guess if I
Somnath Roy Somnath.Roy at sandisk.com writes:
Tom,
Good that you brought this up !
I was also seeing the small writes during reads but forgot to mention it in
my last report on EC.
Basically, the Ceph code base seems to be issuing small writes during reads,
and it is basically going to both
Hi Yuan/Jian
I was going through your following blueprint.
http://tracker.ceph.com/projects/ceph/wiki/Hadoop_over_Ceph_RGW_status_update
This is very interesting. I have some queries though.
1. Did you guys benchmark the RGW + S3 interface integrated with Hadoop? This
should work as-is today. Are
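(As an illustration of what "RGW + S3 interface integrated with Hadoop" could
look like, a hypothetical core-site.xml fragment pointing the s3a connector at an
RGW endpoint; the endpoint and credentials are placeholders.)

    <property>
      <name>fs.s3a.endpoint</name>
      <value>http://rgw.example.com:7480</value>
    </property>
    <property>
      <name>fs.s3a.access.key</name>
      <value>ACCESS</value>
    </property>
    <property>
      <name>fs.s3a.secret.key</name>
      <value>SECRET</value>
    </property>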
I just had a quick look. The idea behind it seems to be to provide
flexible, very fine-grained control of object-level behavior,
for example, how long an object will stay in a pool.
However, it is not very convincing whether it is worth the
effort to do this fine-grained control; the benefit may
Thanks Jian !
What about my first question :-) ? Are you seeing any shortcomings with that ?
A dumb question maybe (not much knowledge on the Hadoop front), but I was asking
why we should write a new filesystem interface to plug into Hadoop; why not plug in
RGWProxy somewhere in between, maybe like
Thanks Yuan ! This is helpful.
Regards
Somnath
-Original Message-
From: Zhou, Yuan [mailto:yuan.z...@intel.com]
Sent: Wednesday, June 10, 2015 8:44 PM
To: Somnath Roy; Zhang, Jian; ceph-devel
Subject: RE: Regarding hadoop over RGW blueprint
Hi Somnath,
The background was a bit
Somnath,
For your second question, our blueprint targets the scenario where people
are trying to run multiple clusters (geographically distributed), in which only
a dedicated proxy server has access to the storage cluster; that's one of
the biggest advantages of this blueprint.
For
Hi Somnath,
The background was a bit complicated. This was part of the MOC project, which
aims to set up an open-exchange cloud between several private clouds at
several universities.