Hi,
Back on this.
I finally figured out the logic in the mapping.
So after taking the time to note all the disk serial numbers on 3 different
machines and 2 different OSes, I now know that my specific LSI SAS 2008 cards
(no reference printed on them, but I think they are LSI SAS 9207-8i) map the disks
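(For reference, a generic way to collect the serial-to-device mapping on each host; these are standard Linux tools, not commands from the original post, and sdX is a placeholder:

  ls -l /dev/disk/by-id/ | grep -v part    # persistent names that embed the serial number
  lsblk -o NAME,SERIAL,SIZE,MODEL          # needs a reasonably recent util-linux
  smartctl -i /dev/sdX | grep -i serial    # per-disk query via smartmontools

Comparing these across hosts and reboots is one way to see how an HBA enumerates its slots.)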
Hello,
On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
Hello all!
One of our customers asked for SSD-only storage.
For now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI 3008 chip,
8 internal 12Gb/s ports on each), 24 x Intel DC S3700 800GB SSD drives, 2
x Mellanox 40Gbit ConnectX-3 (maybe
Hi
Is it possible to share performance results with this kind of config? How many
IOPS? Bandwidth? Latency?
Thanks
Sent from my iPhone
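(A quick way to get rough numbers for such a pool is rados bench; a minimal sketch, where the pool name, thread count and block size are placeholders:

  rados -p testpool bench 60 write -t 16 -b 4096 --no-cleanup   # 4k writes; reports bandwidth and average latency
  rados -p testpool bench 60 seq -t 16                          # sequential reads of the objects left behind

With a 4 KB block size the reported MB/s can be converted straight into IOPS.)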
On 11 Dec 2014, at 09:35, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
Hello all!
Some our
On 12/11/2014 04:18 AM, Christian Balzer wrote:
On Wed, 10 Dec 2014 20:09:01 -0800 Christopher Armstrong wrote:
Christian,
That indeed looks like the bug! We tried with moving the monitor
host/address into global and everything works as expected - see
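(For context, a minimal sketch of what "moving the monitor host/address into global" looks like in ceph.conf; the monitor names and addresses are placeholders:

  [global]
      mon initial members = mon-a, mon-b, mon-c
      mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

i.e. the client-visible monitor addresses live under [global] rather than only in per-[mon.X] sections.)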
On Thu, Dec 11, 2014 at 3:18 AM, Irek Fasikhov malm...@gmail.com wrote:
Hi, Cao.
https://github.com/ceph/ceph/commits/firefly
2014-12-11 5:00 GMT+03:00 Cao, Buddy buddy@intel.com:
Hi, I tried to download the firefly RPM package, but found two RPMs existing
in different folders. What is
Hello,
On 12/11/2014 11:35 AM, Christian Balzer wrote:
Hello,
On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
Hello all!
One of our customers asked for SSD-only storage.
For now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI 3008 chip,
8 internal 12Gb/s ports on each), 24 x Intel DC
On 12/11/2014 02:46 PM, Gregory Farnum wrote:
On Thu, Dec 11, 2014 at 2:21 AM, Joao Eduardo Luis j...@redhat.com wrote:
On 12/11/2014 04:28 AM, Christopher Armstrong wrote:
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to
There is a Ceph pool on an HP DL360 G5 with 25 SAS 10k disks (sda-sdy) on an MSA70,
which gives me about 600 MB/s continuous write speed with the rados write bench.
tgt on the server with the rbd backend uses this pool. Mounting locally (on the host)
with iscsiadm, sdz is the virtual iSCSI device. As you can see, sdz maxes out
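(A hedged sketch of that local test path; the target IQN, portal address and sizes are placeholders, and the actual tgt/rbd configuration is not shown in the post:

  iscsiadm -m discovery -t sendtargets -p 127.0.0.1
  iscsiadm -m node -T iqn.2014-12.local.example:rbd-target -p 127.0.0.1 --login
  dd if=/dev/zero of=/dev/sdz bs=1M count=4096 oflag=direct   # crude write test against the iSCSI device
)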
Our users are running CoreOS with kernel 3.17.2. Our user tested this by
setting up the config and then bringing down one of the mons. See
https://github.com/deis/deis/issues/2711#issuecomment-66566318 for his
testing scenario.
On Thu, Dec 11, 2014 at 8:16 AM, Joao Eduardo Luis j...@redhat.com
On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito periqu...@gmail.com wrote:
Hi,
I've stopped OSD.16, removed the PG from the local filesystem and started
the OSD again. After Ceph rebuilt the PG on that OSD I ran a
deep-scrub and the PG is still inconsistent.
What led you to remove it
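(A hedged sketch of the usual inspection steps; the PG id 2.1f is purely a placeholder:

  ceph health detail | grep inconsistent   # lists the inconsistent PGs and the OSDs involved
  ceph pg 2.1f query                       # shows the acting set for the PG
  ceph pg deep-scrub 2.1f                  # re-scrub after any manual change

As noted further down this thread, be careful before reaching for ceph pg repair.)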
The branch I pushed earlier was based off the recent development branch. I
just pushed one based off firefly (wip-10271-firefly). It will
probably take a bit to build.
Yehuda
On Thu, Dec 11, 2014 at 12:03 PM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
Hi again!
I have installed and enabled
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to contribute back to Ceph,
and this bug has hit us personally so I think it's a good candidate :)
On Wed, Dec 10, 2014 at 8:25 PM, Christopher Armstrong ch...@opdemand.com
wrote:
This issue seems very similar to these:
http://tracker.ceph.com/issues/8202
http://tracker.ceph.com/issues/8702
Would it make any difference if I try to build Ceph from source?
I mean, is anyone aware of it being fixed in any of the recent commits
that probably hasn't made it yet into the
Hi all,
I followed http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to
deploy Ceph,
but when installing the monitor node, I got the error below:
{code}
[louis@adminnode my-cluster]$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at:
I don't think it has been fixed recently. I'm looking at it now, and
not sure why it hasn't triggered before in other areas.
Yehuda
On Thu, Dec 11, 2014 at 5:55 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
This issue seems very similar to these:
http://tracker.ceph.com/issues/8202
Pushed a fix to wip-10271. Haven't tested it though, let me know if you try it.
Thanks,
Yehuda
On Thu, Dec 11, 2014 at 8:38 AM, Yehuda Sadeh yeh...@redhat.com wrote:
I don't think it has been fixed recently. I'm looking at it now, and
not sure why it hasn't triggered before in other areas.
Hi all,
I have upgraded two LSI SAS9201-16i HBAs to the latest firmware P20.00.00
and after that I got the following syslog messages:
Dec 9 18:11:31 ceph-03 kernel: [ 484.602834] mpt2sas0: log_info(0x3108):
originator(PL), code(0x08), sub_code(0x)
Dec 9 18:12:15 ceph-03 kernel: [
Hi again!
I have installed and enabled the development branch repositories as
described here:
http://ceph.com/docs/master/install/get-packages/#add-ceph-development
and when I try to update the ceph-radosgw package I get the following:
Installed Packages
Name: ceph-radosgw
Arch
Be very careful with running ceph pg repair. Have a look at this
thread:
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/15185
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On Thu, Dec 11, 2014 at 10:57:22AM +, Luis Periquito wrote:
Hi,
I've stopped OSD.16, removed the PG from the
On 12/10/2014 07:30 PM, Kevin Sumner wrote:
The mons have grown another 30GB each overnight (except for 003?), which
is quite worrying. I ran a little bit of testing yesterday after my
post, but not a significant amount.
I wouldn’t expect compact on start to help this situation based on the
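(For what it's worth, a hedged sketch of the usual knobs for a growing monitor store; the mon id and data path are placeholders assuming the default layout:

  du -sh /var/lib/ceph/mon/*            # watch the on-disk store size
  ceph tell mon.ceph-mon-003 compact    # compact a running monitor's store

  # ceph.conf, [mon] section
  mon compact on start = true           # compacts the store each time the mon starts

Whether compaction actually helps depends on why the store keeps growing in the first place.)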
Anyone know why a VM live restore would be excessively slow on Ceph? Restoring
a small VM with a 12GB disk/2GB RAM is taking 18 *minutes*. Larger VMs can take
over half an hour.
The same VMs on the same disks, but native or GlusterFS, take less than 30
seconds.
The VMs are KVM on Proxmox.
On Thu, Dec 11, 2014 at 2:21 AM, Joao Eduardo Luis j...@redhat.com wrote:
On 12/11/2014 04:28 AM, Christopher Armstrong wrote:
If someone could point me to where this fix should go in the code, I'd
actually love to dive in - I've been wanting to contribute back to Ceph,
and this bug has hit
Hi all,
Can anyone help?
On Dec 11, 2014, at 20:34, mail list louis.hust...@gmail.com wrote:
Hi all,
I followed http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to
deploy Ceph,
but when installing the monitor node, I got the error below:
{code}
[louis@adminnode my-cluster]$
OK! I will give it some time and will try again later!
Thanks a lot for your help!
Warmest regards,
George
The branch I pushed earlier was based off recent development branch.
I
just pushed one based off firefly (wip-10271-firefly). It will
probably take a bit to build.
Yehuda
On Thu,
Hello sir!
I need an open-source monitoring tool for examining these metrics.
Please suggest some open-source monitoring software.
Thanks
Regards
Pragya Jain
On Thursday, 11 December 2014 9:16 PM, Denish Patel den...@omniti.com
wrote:
Try http://www.circonus.com
On Thu, Dec 11,
Hi.
We use Zabbix.
2014-12-12 8:33 GMT+03:00 pragya jain prag_2...@yahoo.co.in:
Hello sir!
I need an open-source monitoring tool for examining these metrics.
Please suggest some open-source monitoring software.
Thanks
Regards
Pragya Jain
On Thursday, 11 December 2014 9:16 PM,
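(Whatever tool is chosen, the raw Ceph numbers typically come from the CLI or the admin socket; a hedged sketch, where osd.0 is just an example daemon and the admin-socket command must run on the host where that daemon lives:

  ceph -s --format json              # cluster health, capacity and PG state as JSON
  ceph osd pool stats --format json  # per-pool client I/O rates
  ceph daemon osd.0 perf dump        # per-daemon performance counters via the admin socket

Many Zabbix/Nagios-style integrations are essentially wrappers around calls like these.)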
Hi.
For faster operation, use rbd export/export-diff and import/import-diff
2014-12-11 17:17 GMT+03:00 Lindsay Mathieson lindsay.mathie...@gmail.com:
Anyone know why a VM live restore would be excessively slow on Ceph?
restoring
a small VM with 12GB disk/2GB Ram is taking 18 *minutes*.
Examples
Backups:
/usr/bin/nice -n +20 /usr/bin/rbd -n client.backup export
test/vm-105-disk-1@rbd_data.505392ae8944a - | /usr/bin/pv -s 40G -n -i 1 |
/usr/bin/nice -n +20 /usr/bin/pbzip2 -c > /backup/vm-105-disk-1
Restore:
pbzip2 -dk /nfs/RBD/big-vm-268-disk-1-LyncV2-20140830-011308.pbzip2 -c |
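(The restore pipeline above is cut off in the archive. As a hedged sketch of the full round trip, modelled on the backup example, where the -restored image name is hypothetical and a reasonably recent rbd that can import from stdin is assumed:

  pbzip2 -dc /backup/vm-105-disk-1 | rbd import --image-format 2 - test/vm-105-disk-1-restored

And the incremental variant suggested earlier in the thread, with hypothetical snapshot names, applied on top of an image that already has snap1:

  rbd export-diff --from-snap snap1 test/vm-105-disk-1@snap2 - | rbd import-diff - test/vm-105-disk-1-restored
)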
Hello sir!
According to TomiTakussaari/riak_zabbix, the currently supported Zabbix keys are:
riak.ring_num_partitions
riak.memory_total
riak.memory_processes_used
riak.pbc_active
riak.pbc_connects
riak.node_gets
riak.node_puts
riak.node_get_fsm_time_median
riak.node_put_fsm_time_median
All these metrics are