1 PM PST / 9 PM GMT
https://bluejeans.com/908675367
We'll be discussing SEO for the Ceph documentation site today at the
DocuBetter meeting. Currently when Googling or DuckDuckGoing for
Ceph-related things you may see results from master, mimic, or what's
a dumpling? The goal is to figure out what sort of approach we can take
to make these results
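One common fix for stale versioned results is a rel=canonical link pointing every release's page at a single preferred copy. As a rough illustration (the URL layout and the "latest" alias here are assumptions, not the actual docs.ceph.com setup), the rewrite is just a path substitution:

```python
# Sketch of the rel=canonical approach for versioned docs: map any known
# release segment in a docs URL to one preferred release, so search engines
# index a single copy. Release names and the "latest" alias are assumed
# for illustration only.
import re

KNOWN_RELEASES = {"master", "mimic", "luminous", "jewel", "dumpling"}

def canonical_url(url: str, preferred: str = "latest") -> str:
    """Rewrite a versioned docs URL to its canonical (preferred) release."""
    return re.sub(
        r"(docs\.ceph\.com/docs/)([^/]+)(/)",
        lambda m: m.group(1)
        + (preferred if m.group(2) in KNOWN_RELEASES else m.group(2))
        + m.group(3),
        url,
    )

print(canonical_url("http://docs.ceph.com/docs/mimic/rados/operations/"))
```

The emitted canonical URL would then go into each page's `<link rel="canonical" href="...">` tag.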
On Mon, Jun 5, 2017 at 11:04 AM Gregory Farnum <gfar...@redhat.com> wrote:
> On Mon, Jun 5, 2017 at 10:43 AM Noah Watkins <noahwatk...@gmail.com>
> wrote:
>
>>
>> Fixing it would require we persist the entire returned bufferlist, which
> isn't feasible in g
wrote:
>
>
> On Mon, Jun 5, 2017 at 10:43 AM Noah Watkins <noahwatk...@gmail.com> wrote:
>>
>> I haven't taken the time to really grok why the limitation exists
>> (e.g. I'd be interested to know if it's fundamental). There is a
>> comment here:
>>
edu> wrote:
>> Unfortunately, this isn't a bug. Rados clears any returned data from
>> an object class method if the operation also writes to the object.
>
> Do you have any idea why RADOS behaves like this?
>
>
>
> On Sat, Jun 3, 2017 at 9:30 AM, Noah Watkins <noahw
Comments inline
> -- Forwarded message --
> From: Zheyuan Chen
> Date: Sat, Jun 3, 2017 at 1:45 PM
> Subject: [ceph-users] Bug report: unexpected behavior when executing
> Lua object class
> To: ceph-users@lists.ceph.com
>
> Bug 1: I can not get returned output
hough in general either applications at this
level (ie. invoking object classes) are already trusted, or an
application could assert a known version of a set of objects that is
enforced automatically.
> Nick
>
>> -Original Message-
>> From: Noah Watkins [mailto:noahwatk..
that means currently loading modules from the local FS is
> restricted?
>
> Thanks
> Nick
>
>
>> -Original Message-
>> From: Noah Watkins [mailto:noahwatk...@gmail.com]
>> Sent: 16 February 2017 22:17
>> To: Nick Fisk <n...@fisk.me.uk>
>> Cc: ceph-
Hi Nick,
First thing to note is that in Kraken, object classes that are not whitelisted
need to be enabled explicitly. This is in the Kraken release notes (
http://docs.ceph.com/docs/master/release-notes/):
tldr: add 'osd class load list = *' and 'osd class default list = *' to
ceph.conf.
- The ‘osd
in order.
>
> Just tested it out and it works as expected. Let me know if you have any
> issues.
>
> On Tue, Jun 14, 2016 at 5:57 PM, Noah Watkins <noahwatk...@gmail.com> wrote:
>> Yeah, I'm still seeing the problem, too. Thanks.
>>
>> On Tue, Jun
t;
> > :)
> >
> > Would you mind trying this again and see if you are good?
> >
> > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins <noahwatk...@gmail.com>
> wrote:
> >> Installing Jewel with ceph-deploy has been working for weeks. Today I
> >> s
Installing Jewel with ceph-deploy has been working for weeks. Today I
started to get some dependency issues:
[b61808c8624c][DEBUG ] The following packages have unmet dependencies:
[b61808c8624c][DEBUG ] ceph : Depends: ceph-mon (= 10.2.1-1trusty) but it
is not going to be installed
On Sat, Apr 30, 2016 at 2:55 PM, Adam Tygart wrote:
> Supposedly cephfs-hadoop worked and/or works on hadoop 2. I am in the
> process of getting it working with cdh5.7.0 (based on hadoop 2.6.0).
> I'm under the impression that it is/was working with 2.4.0 at some
> point in time.
>
Hi Jose,
I believe what you are referring to is using Hadoop over Ceph via the
VFS implementation of the Ceph client vs the user-space libcephfs
client library. The current Hadoop plugin for Ceph uses the client
library. You could run Hadoop over Ceph using a local Ceph mount
point, but it would
the apt-get errors. It
does seem like the install proceeds successfully, and that the ceph
setup will proceed once the extra arg to mon create-initial is
removed.
Here's hoping that is indeed nothing to worry about. :)
- Travis
On Tue, Jul 21, 2015 at 2:27 PM, Noah Watkins noahwatk
The docker/distribution project runs a continuous integration VM using
CircleCI, and part of the VM setup installs Ceph packages using
ceph-deploy. This has been working well for quite a while, but we are
seeing a failure running `ceph-deploy install --release hammer`. The
snippet is here where it
Nevermind. I see that `ceph-deploy mon create-initial` has stopped
accepting the trailing hostname which was causing the failure. I don't
know if those problems above I showed are actually anything to worry
about :)
On Tue, Jul 21, 2015 at 3:17 PM, Noah Watkins noahwatk...@gmail.com wrote
I'll take a shot at answering this:
Operations are atomic in the sense that there are no partial failures.
Additionally, access to an object should appear to be serialized. So, two
in-flight operations A and B will be applied in either A,B or B,A order. If
ordering is important (e.g. the
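That guarantee can be modeled with a toy simulation: a per-object lock stands in for the OSD applying one compound op at a time, so the final state always matches one of the two serial orders. This is a model of the semantics only, not how librados is implemented:

```python
# Toy model of the per-object ordering guarantee described above: two
# in-flight compound operations A and B apply atomically, so the result
# is always one of the two serial orders (A,B or B,A). The object, ops,
# and values here are made up for illustration.
import threading

obj = {"value": 0}
lock = threading.Lock()  # stands in for the OSD serializing ops per object

def op(transform):
    with lock:  # the whole compound op applies atomically
        obj["value"] = transform(obj["value"])

a = threading.Thread(target=op, args=(lambda v: v + 10,))
b = threading.Thread(target=op, args=(lambda v: v * 3,))
a.start(); b.start(); a.join(); b.join()

# Order A,B gives (0 + 10) * 3 = 30; order B,A gives (0 * 3) + 10 = 10.
# Interleaved results like 13 or 20 cannot occur.
assert obj["value"] in (30, 10)
```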
packages
from Fedora downstream repos and the ceph.com upstream repos. That's
not supposed to happen.
- Travis
On Wed, Jan 7, 2015 at 2:15 PM, Noah Watkins noah.watk...@inktank.com
wrote:
I'm trying to install Firefly on an up-to-date FC20 box. I'm getting
the following errors:
[nwatkins
I'm trying to install Firefly on an up-to-date FC20 box. I'm getting
the following errors:
[nwatkins@kyoto cluster]$ ../ceph-deploy/ceph-deploy install --release
firefly kyoto
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/nwatkins/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked
I've posted a preliminary patch set to support a libcephfs io engine in fio:
http://github.com/noahdesu/fio cephfs
You can use this right now to generate load through libcephfs, but the
plugin needs a bit more work before it goes upstream (patches
welcome), but feel free to play around with
Make sure you are initializing the sub-modules. The autogen.sh script
should probably notify users when these are missing and/or initialize
them automatically:
git submodule init
git submodule update
or alternatively, git clone --recursive ...
On Fri, Jul 25, 2014 at 11:48 AM, Deven Phillips
Oh, it looks like autogen.sh is smart about that now. If you're using the
latest master, my suggestion may not be the solution.
On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins noah.watk...@inktank.com wrote:
Make sure you are initializing the sub-modules. The autogen.sh script
should probably
... The
src/lib3/ directory was empty and when I tried to use submodules to update
it I got errors about non-empty directories... Trying to fix that now..
Thanks!
Deven
On Fri, Jul 25, 2014 at 2:51 PM, Noah Watkins noah.watk...@inktank.com
wrote:
Make sure you are initializing the sub-modules
This strikes me as a difference in semantics between HDFS and CephFS,
and like Greg said it's probably based on HBase assumptions. It'd be
really helpful to find out what the exception is. If you are building
the Hadoop bindings from scratch, you can instrument `listStatus` in
?
The error itself looks like a missing dependency, but that exception
being thrown might also be triggered by other problems while loading
the bindings.
On Wed, Mar 19, 2014 at 8:43 AM, Gurvinder Singh
gurvindersinghdah...@gmail.com wrote:
On 03/19/2014 03:51 PM, Noah Watkins wrote:
On Wed, Mar 19, 2014
  <name>fs.AbstractFileSystem.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFS</value>
</property>
Apparently rather than the `fs.ceph.impl` property in 2.x
On Wed, Mar 19, 2014 at 9:06 AM, Gurvinder Singh
gurvindersinghdah...@gmail.com wrote:
On 03/19/2014 04:50 PM, Noah Watkins wrote:
Since
Hi Gurvinder,
There is a pull request for Hadoop 2 support here
https://github.com/noahdesu/cephfs-hadoop/pull/1
I have not yet tested it personally, but it looks OK to me.
Data locality is supported in Ceph.
On 2/18/14, 3:15 AM, Gurvinder Singh wrote:
Hi,
I am planning to test
Hi Kesten,
It's a little difficult to tell what the source of the problem is, but
looking at the gist you referenced, I don't see anything that would
indicate that Ceph is causing the issue. For instance,
hadoop-mapred-tasktracker-xxx-yyy-hdfs01.log looks like Hadoop daemons
are having problems
Most (all?) of the network message structures are located in:
https://github.com/ceph/ceph/tree/master/src/messages
On Jan 9, 2014, at 7:44 AM, Bruce Lee lwl...@gmail.com wrote:
Hi all,
I am new here and glad to see you guys.
Thanks for your hard work for providing a more stable,
The default configuration for a Ceph build should produce a static
rados library. If you actually want to build _only_ librados, that
might require a bit of automake tweaking.
nwatkins@kyoto:~$ ls -l projects/ceph_install/lib/
total 691396
-rw-r--r-- 1 nwatkins nwatkins 219465940 Jan 6 09:56
You'll need to register the new pool with the MDS:
ceph mds add_data_pool <pool id>
On Thu, Jan 2, 2014 at 9:48 PM, 鹏 wkp4...@126.com wrote:
Hi all;
today, I want to use the function ceph_open_layout() in libcephfs.h.
I created a new pool successfully:
# rados mkpool data1
and then I edit
A little info about wip-port.
The wip-port branch lags behind master a bit, usually a week or two
depending on what I've got going on. There are testers for OSX and
FreeBSD, and bringing in windows patches would probably be a nice
staging place for them, as I suspect the areas of change will
I don't think there is any inherent limitation to using RADOS or RBD
as a backend for a non-CephFS file system, as CephFS is inherently
built on top of RADOS (though I suppose it doesn't directly use
librados). However, the challenge would be in configuring and tuning
the two independent
There are users that have/are running HBase on top of Ceph. The setup should be
no different than the standard HBase setup instructions, with the exception
that when configuring the Hadoop file system, you specify CephFS instead
(typically in core-site.xml).
Currently the documentation for
Generally these steps need to be taken:
1) Compile the custom methods into a shared library
2) Place the library in the class load path of the OSD
3) Invoke the methods via librados exec method
The easiest way to do this is to use the ceph build system by adding your
module to
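The exec step (3) can be pictured as a name-based dispatch: the client passes a "class.method" pair, and the OSD resolves it against methods that loaded modules have registered. Real object classes are C/C++ shared libraries registered via cls_register(); the Python below is only a conceptual model, and the class name, method, and payloads are made up:

```python
# Conceptual model of step 3: the OSD resolves the "class.method" name a
# client passes to exec() against a registry populated by loaded modules.
# This mimics the dispatch only; real object classes are C/C++ shared
# libraries using cls_register(), and "hello"/"say_hello" are invented.
registry = {}

def register_method(cls_name, method_name):
    """Decorator standing in for cls_register()/cls_register_cxx_method()."""
    def wrap(fn):
        registry[(cls_name, method_name)] = fn
        return fn
    return wrap

@register_method("hello", "say_hello")
def say_hello(obj_data, inbl):
    # A method receives the object data and an input bufferlist,
    # and returns an output bufferlist.
    return b"hello " + inbl

def exec_method(obj_data, cls_name, method_name, inbl):
    """What librados exec() conceptually does on the OSD side."""
    return registry[(cls_name, method_name)](obj_data, inbl)

print(exec_method(b"", "hello", "say_hello", b"world"))  # b'hello world'
```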
Can you try again but with openjdk or oracle java? We haven't tested
with gcj, but I'll take a look and see if we can support that, too.
Thanks!
On Mon, Nov 11, 2013 at 4:29 AM, 皓月 suzhenh...@qq.com wrote:
I configured with --enable-cephfs-java, then ran make. There is an error.
export
On Fri, Oct 18, 2013 at 7:31 PM, 鹏 wkp4...@126.com wrote:
hi Noah
That is a stupid mistake which I made! The reason is that start-all.sh did
not start the datanode, so I used start-mapred.sh to start it!
By the way, does replacing HDFS with Ceph also replace the namenode?
thank you ,Noah!
peng
Hi Alek,
The Lua branch is definitely ready for testing now. I'm putting together a
blog post about it and I'll shoot a note to the mailing list when that's
complete.
On Oct 19, 2013 11:45 AM, Alek Paunov a...@declera.com wrote:
On 18.10.2013 22:23, Noah Watkins wrote:
As far as constructing
Kai,
It looks like libcephfs-java (the CephFS Java bindings) are not in your
classpath. Where did you install them?
-Noah
On Thu, Oct 17, 2013 at 11:30 PM, log1024 log1...@yeah.net wrote:
Hi Peng
The conf in my cluster is almost the same with yours, but when i run
#bin/hadoop fs -ls /
It
Peng,
I'm glad you were able to get it working. You'll have to provide some more
information to start debugging why it is slow. How is your Ceph cluster
configured? Also, have a look at the jobtracker statistics and see if any
tasks are failing.
On Thu, Oct 17, 2013 at 8:17 PM, 鹏
The --with-hadoop option has been removed. The Ceph Hadoop bindings are now
located in git://github.com/ceph/hadoop-common cephfs/branch-1.0, and the
required CephFS Java bindings can be built from the Ceph Git repository
using the --enable-cephfs-java configure option.
On Wed, Oct 16, 2013 at
On Tue, Oct 15, 2013 at 2:13 AM, 鹏 wkp4...@126.com wrote:
*** # javac -classpath ../libcephfs.jar com/ceph/fs/Test.java
com/ceph/fs/Test:9:unreported exception java.io.FileNotFoundException;
must be caught or declared to be thrown
mount.conf_read_file(/ect/ceph/ceph.conf);
On Sun, Oct 13, 2013 at 8:28 PM, 鹏 wkp4...@126.com wrote:
hi all:
Exception in thread main java.lang.NoClassDefFoundError:
com/ceph/fs/cephFileAlreadyExisteException
at java.lang.class.forName0(Native Method)
This looks like a bug, which I'll fixup today. But it shouldn't be
The error below seems to indicate that Hadoop isn't aware of the `ceph://`
file system. You'll need to manually add this to your core-site.xml:
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
report:FileSystem
Do you have the following in your core-site.xml?
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
On Sun, Oct 13, 2013 at 11:55 PM, 鹏 wkp4...@126.com wrote:
hi all
I follow the mail configure the ceph with hadoop
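A check like the one asked for above can be automated by parsing core-site.xml and looking up the property by name. The sample XML below is a minimal, hypothetical core-site.xml, not taken from the user's setup:

```python
# Verify that a core-site.xml declares the Ceph file system class.
# The embedded XML is a minimal hypothetical example for illustration.
import xml.etree.ElementTree as ET

core_site = """<configuration>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
</configuration>"""

def get_property(xml_text, wanted):
    """Return the <value> of the <property> whose <name> matches, or None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == wanted:
            return prop.findtext("value")
    return None

print(get_property(core_site, "fs.ceph.impl"))
```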
Hi Kai,
It doesn't look like there is anything Ceph specific in the Java
backtrace you posted. Does your installation work with HDFS? Are there
any logs where an error is occurring with the Ceph plugin?
Thanks,
Noah
On Mon, Oct 14, 2013 at 4:34 PM, log1024 log1...@yeah.net wrote:
Hi,
I have a
On Thu, Oct 10, 2013 at 12:27 AM, 鹏 wkp4...@126.com wrote:
First of all, I install ceph at 192.168.58.132: tar -zxvf
ceph-0.6.2.tar.gz; ./configure; make; make install; does this mean the
native Ceph file system client must be installed on each participating node
in the Hadoop cluster
On Thu, Oct 10, 2013 at 7:29 AM, Noah Watkins noah.watk...@inktank.com wrote:
hadoop cluster. You do not need to run any Ceph daemons, but it is
common to run them together for data locality. If you are building
Woah, my wording here is terrible. What I meant to say is that you
don't
<property>
  <name>ceph.root.dir</name>
  <value>/mnt/mycephfs</value>
</property>
This is probably causing the issue. Is this meant to be a local mount
point? The 'ceph.root.dir' property specifies the root directory
/inside/ CephFS, and the Hadoop implementation doesn't require a local
/hadoop-ceph/ceph/ceph.mon.keyring</value>
</property>
On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins noah.watk...@inktank.com
wrote:
<property>
  <name>ceph.root.dir</name>
  <value>/mnt/mycephfs</value>
</property>
This is probably causing the issue. Is this meant to be a local mount
point
/hadoop-ceph/lib
I confirmed using bin/hadoop classpath that both jar are in the classpath.
On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com
wrote:
How are you invoking Hadoop? Also, I forgot to ask, are you using the
wrappers located in github.com/ceph/hadoop-common
: ceph_mount: exit ret -2
On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com
wrote:
What happens when you run `bin/hadoop fs -ls` ? This is entirely
local, and a bit simpler and easier to grok.
On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
rolando.mart...@gmail.com wrote
: exit ret -2
2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com
wrote:
In the log file that you showing, do you see where
:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com
wrote:
In the log file
] ^
What are the dependencies that I need to have installed?
On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins noah.watk...@inktank.com
wrote:
Ok thanks. That narrows things down a lot. It seems like the keyring
property is not being recognized, and I don't see so I'm wondering
remove the jar that is posted online? It is misleading...
Thanks!
Rolando
On Mon, Sep 23, 2013 at 5:07 PM, Noah Watkins noah.watk...@inktank.com
wrote:
You need to stick the CephFS jar files in the hadoop lib folder.
On Mon, Sep 23, 2013 at 2:02 PM, Rolando Martins
rolando.mart
it does look like an older version 56.6, I got it from the Ubuntu Repo.
Is there another method or pull request I can run to get the latest? I am
having a hard time finding it.
Thanks
On Sun, Aug 4, 2013 at 10:33 PM, Noah Watkins noah.watk...@inktank.com
wrote:
Hey Scott,
Things look OK
Hey Scott,
Things look OK, but I'm a little foggy on what exactly was shipping in
the libcephfs-java jar file back at 0.61. There was definitely a time
where Hadoop and libcephfs.jar in the Debian repos were out of sync,
and that might be what you are seeing.
Could you list the contents of the
On Fri, Jul 19, 2013 at 8:09 AM, ker can kerca...@gmail.com wrote:
With ceph is there any way to influence the data block placement for a
single file ?
AFAIK, no... But, this is an interesting twist. New files written out
to HDFS, IIRC, will by default store 1 local and 2 remote copies. This
On Wed, Jul 17, 2013 at 11:07 AM, ker can kerca...@gmail.com wrote:
Hi,
Has anyone got hbase working on ceph ? I've got ceph (cuttlefish) and
hbase-0.94.9.
My setup is erroring out looking for getDefaultReplication
getDefaultBlockSize ... but I can see those defined in
On Wed, Jul 10, 2013 at 6:23 PM, ker can kerca...@gmail.com wrote:
Now separating out the journal from data disk ...
HDFS write numbers (3 disks/data node)
Average execution time: 466
Best execution time : 426
Worst execution time : 508
ceph write numbers (3 data disks/data node + 3
no max ?
I tried increasing the readahead max periods to 8 .. didn't look like a good
change.
thanks !
On Wed, Jul 10, 2013 at 10:56 AM, Noah Watkins noah.watk...@inktank.com
wrote:
Hey KC,
I wanted to follow up on this, but ran out of time yesterday. To set
the options in ceph.conf
- the map tasks are running on the same nodes as the
splits they're processing. good stuff !
On Mon, Jul 8, 2013 at 9:18 PM, Noah Watkins noah.watk...@inktank.com
wrote:
You might want to create a new branch and cherry-pick the topology
relevant commits (I think there is 1 or 2) from the -topo
On Tue, Jul 9, 2013 at 12:35 PM, ker can kerca...@gmail.com wrote:
hi Noah,
while we're still on the hadoop topic ... I was also trying out the
TestDFSIO tests ceph v/s hadoop. The Read tests on ceph takes about 1.5x
the hdfs time. The write tests are worse about ... 2.5x the time on hdfs,
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
pg_num 960 pgp_num 960 last_change 1 owner 0
From hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
On Tue, Jul 9, 2013 at 2:44 PM, Noah Watkins noah.watk...@inktank.com
wrote
, Jul 9, 2013 at 3:27 PM, Noah Watkins noah.watk...@inktank.com wrote:
Is the JNI interface still an issue or have we moved past that ?
We haven't done much performance tuning with Hadoop, but I suspect
that the JNI interface is not a bottleneck.
My very first thought about what might be causing
...@gmail.com wrote:
Makes sense. I can try playing around with these settings. When you're
saying client, would this be libcephfs.so?
On Tue, Jul 9, 2013 at 5:35 PM, Noah Watkins noah.watk...@inktank.com
wrote:
Greg pointed out the read-ahead client options. I would suggest
fiddling
1
Thanks
KC
On Mon, Jul 8, 2013 at 3:36 PM, Noah Watkins noah.watk...@inktank.com
wrote:
Yes, all of the code needed to get the locality information should be
present in the version of the jar file you referenced. We have tested a
to make sure the right data is available, but have
.
Are you running Cuttlefish? I believe it has all the dependencies.
On Mon, Jul 8, 2013 at 7:00 PM, Noah Watkins noah.watk...@inktank.com wrote:
KC,
Thanks a lot for checking that out. I just went to investigate, and
the work we have done on the locality/topology-aware features are
sitting
wrote:
Yep, I'm running cuttlefish ... I'll try building out of that branch and let
you know how that goes.
-KC
On Mon, Jul 8, 2013 at 9:06 PM, Noah Watkins noah.watk...@inktank.com
wrote:
FYI, here is the patch as it currently stands:
https://github.com/ceph/hadoop-common/compare/cephfs
Thanks a lot for this Ilja! I'm going to update the documentation again soon,
so this is very helpful.
On Jun 5, 2013, at 12:21 PM, Ilja Maslov ilja.mas...@openet.us wrote:
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Was there actually a problem if you didn't set this?
4. Symlink JNI
On Jun 4, 2013, at 2:58 PM, Ilja Maslov ilja.mas...@openet.us wrote:
Is the only way to get it to work is to build Hadoop off the
https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src or is it
possible to compile/obtain some sort of a plugin and feed it to a stable
hadoop
Mike,
Thanks for looking into this further.
On May 10, 2013, at 5:23 AM, Mike Bryant mike.bry...@ocado.com wrote:
I've just found this bug report though: http://tracker.ceph.com/issues/3601
Looks like that may be the same issue..
This definitely seems like a candidate.
Adding some
On Apr 25, 2013, at 4:08 AM, Varun Chandramouli varun@gmail.com wrote:
2013-04-25 13:54:36.182188 bff8cb40 -1 common/Thread.cc: In function 'void
Thread::create(size_t)' thread bff8cb40 time 2013-04-25
13:54:36.053392#012common/Thread.cc: 110: FAILED assert(ret == 0)#012#012
ceph
Varun,
What version of Ceph are you running? Can you confirm that the MDS daemon
(ceph-mds) is still running or has crashed when the MDS becomes
laggy/unresponsive? If it has crashed, check the MDS log for a crash report.
There were a couple Hadoop workloads that caused the MDS to misbehave
You may need to be root to look at the logs in /var/log/ceph. Turning up
logging is helpful, too. Is the bug reproducible? It'd be great if you could
get a core dump file for the crashed MDS process.
-Noah
On Apr 24, 2013, at 9:53 PM, Varun Chandramouli varun@gmail.com wrote:
Ceph
On Apr 4, 2013, at 3:06 AM, Waed Bataineh promiselad...@gmail.com wrote:
Hello,
I'm using Ceph as object storage, where it puts the whole file, whatever
its size, in one object (correct me if I'm wrong).
I used it for multiple files that have different extensions (.txt, .mp3,
On Mar 21, 2013, at 8:03 AM, François P-L lord...@hotmail.com wrote:
I'm not seeing the new location on github (but the ceph documentation page
have been updated, thx ;)).
What is the status of all Hadoop dependency on the master branch ?
The current Hadoop dependency is on the master
On Mar 19, 2013, at 10:05 AM, Varun Chandramouli varun@gmail.com wrote:
libcephfs_jni.so is present in /usr/local/lib/, which I added to
LD_LIBRARY_PATH and tried it again. The same error is displayed in the log
file for the task trackers. Anything else I should be doing?
It looks like
Are you setting LD_LIBRARY_PATH in your bashrc? If so, make sure it is set at
the _very_ top (before the handling for interactive mode, a common problem with
stock Ubuntu setups).
Alternatively, set LD_LIBRARY_PATH in conf/hadoop-env.sh.
-Noah
On Mar 19, 2013, at 10:32 AM, Varun Chandramouli