Re: [ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hi Oliver,

Thank you for answer.

My cluster currently runs on VM servers, and I will move it to physical servers. The
data lives on storage reached over iSCSI, so I will map the iSCSI volumes again
on the new servers.

Thanks.

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Wed, Dec 16, 2015 at 11:17 AM, Oliver Dzombic <i...@ip-interactive.de>
wrote:

> Hi,
>
> if you want this to be nice and free of interruption, you should consider adding
> the new mon/osd to your existing cluster, letting it sync, and then removing
> the old mon/osd.
>
> So this is an add/remove task, not a 1:1 replace. You will need to copy
> the data from your existing hard disks anyway.
>
> Greetings
> Oliver
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
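
For the archive, a rough sketch of the add/remove workflow described above; the
hostnames, devices and OSD ids are placeholders, and the exact steps depend on
how the OSDs were originally deployed:

# bring the new node into the cluster first (example with ceph-deploy):
$ ceph-deploy osd create new-host:/dev/sdb
$ ceph osd tree                      # wait until the new OSDs are up/in and the cluster is healthy

# then drain and remove the old OSDs one at a time:
$ ceph osd out 3                     # let the data rebalance off osd.3
$ ceph -w                            # wait until all PGs are active+clean again
$ /etc/init.d/ceph stop osd.3        # on the old host
$ ceph osd crush remove osd.3
$ ceph auth del osd.3
$ ceph osd rm 3

# the same idea applies to the monitor: add the new mon, then remove the old one
$ ceph-deploy mon add new-mon-host
$ ceph mon remove old-mon-id
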
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hello,

Can anyone help me, please?

I need to change the servers (OSDs and MDS) of my cluster.

I have a mini cluster with 3 OSDs, 1 MON and 1 MDS on Ceph 0.94.1.

How can I change the servers? Do I just install the OS and the Ceph packages and
copy over the ceph.conf? Is that all?

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Kernel Bug in 3.13.0-52

2015-05-13 Thread Daniel Takatori Ohara
Thanks, Gregory, for the answer.

I will upgrade the kernel.

Do you know with which kernel CephFS is stable?

Thanks.


Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Wed, May 13, 2015 at 5:01 PM, Gregory Farnum g...@gregs42.com wrote:

 On Wed, May 13, 2015 at 12:08 PM, Daniel Takatori Ohara
 dtoh...@mochsl.org.br wrote:
  Hi,
 
  We have a small ceph cluster with 4 OSD's and 1 MDS.
 
  I run Ubuntu 14.04 with 3.13.0-52-generic in the clients, and CentOS 6.6
  with 2.6.32-504.16.2.el6.x86_64 in Servers.
 
  The version of Ceph is 0.94.1
 
  Sometimes, the CephFS freeze, and the dmesg show me the follow messages :
 
  May 13 15:53:10 blade02 kernel: [93297.784094] [ cut here
  ]
  May 13 15:53:10 blade02 kernel: [93297.784121] WARNING: CPU: 10 PID: 299
 at
  /build/buildd/linux-3.13.0/fs/ceph/inode.c:701
 fill_inode.isra.8+0x9ed/0xa00
  [ceph]()
  May 13 15:53:10 blade02 kernel: [93297.784129] Modules linked in: 8021q
 garp
  stp mrp llc nfsv3 rpcsec_gss_krb5 nfsv4 ceph libceph libcrc32c intel_rapl
  x86_pkg_temp_thermal intel_powerclamp ipmi_devintf gpi
  May 13 15:53:10 blade02 kernel: [93297.784204] CPU: 10 PID: 299 Comm:
  kworker/10:1 Tainted: GW 3.13.0-52-generic #86-Ubuntu
  May 13 15:53:10 blade02 kernel: [93297.784207] Hardware name: Dell Inc.
  PowerEdge M520/050YHY, BIOS 2.1.3 01/20/2014
  May 13 15:53:10 blade02 kernel: [93297.784221] Workqueue: ceph-msgr
 con_work
  [libceph]
  May 13 15:53:10 blade02 kernel: [93297.784225]  0009
  880801093a28 8172266e 
  May 13 15:53:10 blade02 kernel: [93297.784233]  880801093a60
  810677fd ffea 0036
  May 13 15:53:10 blade02 kernel: [93297.784239]  
   c9001b73f9d8 880801093a70
  May 13 15:53:10 blade02 kernel: [93297.784246] Call Trace:
  May 13 15:53:10 blade02 kernel: [93297.784257]  [8172266e]
  dump_stack+0x45/0x56
  May 13 15:53:10 blade02 kernel: [93297.784264]  [810677fd]
  warn_slowpath_common+0x7d/0xa0
  May 13 15:53:10 blade02 kernel: [93297.784269]  [810678da]
  warn_slowpath_null+0x1a/0x20
  May 13 15:53:10 blade02 kernel: [93297.784280]  [a046facd]
  fill_inode.isra.8+0x9ed/0xa00 [ceph]
  May 13 15:53:10 blade02 kernel: [93297.784290]  [a046e3cd] ?
  ceph_alloc_inode+0x1d/0x4e0 [ceph]
  May 13 15:53:10 blade02 kernel: [93297.784302]  [a04704cf]
  ceph_readdir_prepopulate+0x27f/0x6d0 [ceph]
  May 13 15:53:10 blade02 kernel: [93297.784318]  [a048a704]
  handle_reply+0x854/0xc70 [ceph]
  May 13 15:53:10 blade02 kernel: [93297.784331]  [a048c3f7]
  dispatch+0xe7/0xa90 [ceph]
  May 13 15:53:10 blade02 kernel: [93297.784342]  [a02a4a78] ?
  ceph_tcp_recvmsg+0x48/0x60 [libceph]
  May 13 15:53:10 blade02 kernel: [93297.784354]  [a02a7a9b]
  try_read+0x4ab/0x10d0 [libceph]
  May 13 15:53:10 blade02 kernel: [93297.784365]  [a02a9418] ?
  try_write+0x9a8/0xdb0 [libceph]
  May 13 15:53:10 blade02 kernel: [93297.784373]  [8101bc23] ?
  native_sched_clock+0x13/0x80
  May 13 15:53:10 blade02 kernel: [93297.784379]  [8109d585] ?
  sched_clock_cpu+0xb5/0x100
  May 13 15:53:10 blade02 kernel: [93297.784390]  [a02a98d9]
  con_work+0xb9/0x640 [libceph]
  May 13 15:53:10 blade02 kernel: [93297.784398]  [81083aa2]
  process_one_work+0x182/0x450
  May 13 15:53:10 blade02 kernel: [93297.784403]  [81084891]
  worker_thread+0x121/0x410
  May 13 15:53:10 blade02 kernel: [93297.784409]  [81084770] ?
  rescuer_thread+0x430/0x430
  May 13 15:53:10 blade02 kernel: [93297.784414]  [8108b5d2]
  kthread+0xd2/0xf0
  May 13 15:53:10 blade02 kernel: [93297.784420]  [8108b500] ?
  kthread_create_on_node+0x1c0/0x1c0
  May 13 15:53:10 blade02 kernel: [93297.784426]  [817330cc]
  ret_from_fork+0x7c/0xb0
  May 13 15:53:10 blade02 kernel: [93297.784431]  [8108b500] ?
  kthread_create_on_node+0x1c0/0x1c0
  May 13 15:53:10 blade02 kernel: [93297.784434] ---[ end trace
  05d3f5ee1f31bc67 ]---
  May 13 15:53:10 blade02 kernel: [93297.784437] ceph: fill_inode badness
 on
  8807f7eaa5c0

 I don't follow the kernel stuff too closely, but the CephFS kernel
 client is still improving quite rapidly and 3.13 is old at this point.
 You could try upgrading to something newer.
 Zheng might also know what's going on and if it's been fixed.
 -Greg

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Kernel Bug in 3.13.0-52

2015-05-13 Thread Daniel Takatori Ohara
Hello Lincoln,

Thanks for the answer. I will upgrade the kernel on the clients.

But for version 0.94.1 (hammer), is the recommended kernel the same? Is it 3.16?

Thanks,


Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Wed, May 13, 2015 at 5:11 PM, Lincoln Bryant linco...@uchicago.edu
wrote:

 Hi Daniel,

 There are some kernel recommendations here, although it's unclear if they
 only apply to RBD or also to CephFS.
 http://ceph.com/docs/master/start/os-recommendations/

 --Lincoln

 On May 13, 2015, at 3:03 PM, Daniel Takatori Ohara wrote:

 Thank Gregory for the answer.

 I will be upgrade the kernel.

 Do you know what kernel the CephFS is stable?

 Thanks.


 Att.

 ---
 Daniel Takatori Ohara.
 System Administrator - Lab. of Bioinformatics
 Molecular Oncology Center
 Instituto Sírio-Libanês de Ensino e Pesquisa
 Hospital Sírio-Libanês
 Phone: +55 11 3155-0200 (extension 1927)
 R: Cel. Nicolau dos Santos, 69
 São Paulo-SP. 01308-060
 http://www.bioinfo.mochsl.org.br


 On Wed, May 13, 2015 at 5:01 PM, Gregory Farnum g...@gregs42.com wrote:

 On Wed, May 13, 2015 at 12:08 PM, Daniel Takatori Ohara
 dtoh...@mochsl.org.br wrote:
  Hi,
 
  We have a small ceph cluster with 4 OSD's and 1 MDS.
 
  I run Ubuntu 14.04 with 3.13.0-52-generic in the clients, and CentOS 6.6
  with 2.6.32-504.16.2.el6.x86_64 in Servers.
 
  The version of Ceph is 0.94.1
 
  Sometimes, the CephFS freeze, and the dmesg show me the follow messages
 :
 
  [kernel warning and call trace snipped; it is identical to the trace in the original post below]

[ceph-users] Kernel Bug in 3.13.0-52

2015-05-13 Thread Daniel Takatori Ohara
Hi,

We have a small Ceph cluster with 4 OSDs and 1 MDS.

I run Ubuntu 14.04 with 3.13.0-52-generic on the clients, and CentOS 6.6
with 2.6.32-504.16.2.el6.x86_64 on the servers.

The version of Ceph is 0.94.1.

Sometimes CephFS freezes, and dmesg shows me the following messages:

May 13 15:53:10 blade02 kernel: [93297.784094] [ cut here
]
May 13 15:53:10 blade02 kernel: [93297.784121] WARNING: CPU: 10 PID: 299 at
/build/buildd/linux-3.13.0/fs/ceph/inode.c:701
fill_inode.isra.8+0x9ed/0xa00 [ceph]()
May 13 15:53:10 blade02 kernel: [93297.784129] Modules linked in: 8021q
garp stp mrp llc nfsv3 rpcsec_gss_krb5 nfsv4 ceph libceph libcrc32c
intel_rapl x86_pkg_temp_thermal intel_powerclamp ipmi_devintf gpi
May 13 15:53:10 blade02 kernel: [93297.784204] CPU: 10 PID: 299 Comm:
kworker/10:1 Tainted: GW 3.13.0-52-generic #86-Ubuntu
May 13 15:53:10 blade02 kernel: [93297.784207] Hardware name: Dell Inc.
PowerEdge M520/050YHY, BIOS 2.1.3 01/20/2014
May 13 15:53:10 blade02 kernel: [93297.784221] Workqueue: ceph-msgr
con_work [libceph]
May 13 15:53:10 blade02 kernel: [93297.784225]  0009
880801093a28 8172266e 
May 13 15:53:10 blade02 kernel: [93297.784233]  880801093a60
810677fd ffea 0036
May 13 15:53:10 blade02 kernel: [93297.784239]  
 c9001b73f9d8 880801093a70
May 13 15:53:10 blade02 kernel: [93297.784246] Call Trace:
May 13 15:53:10 blade02 kernel: [93297.784257]  [8172266e]
dump_stack+0x45/0x56
May 13 15:53:10 blade02 kernel: [93297.784264]  [810677fd]
warn_slowpath_common+0x7d/0xa0
May 13 15:53:10 blade02 kernel: [93297.784269]  [810678da]
warn_slowpath_null+0x1a/0x20
May 13 15:53:10 blade02 kernel: [93297.784280]  [a046facd]
fill_inode.isra.8+0x9ed/0xa00 [ceph]
May 13 15:53:10 blade02 kernel: [93297.784290]  [a046e3cd] ?
ceph_alloc_inode+0x1d/0x4e0 [ceph]
May 13 15:53:10 blade02 kernel: [93297.784302]  [a04704cf]
ceph_readdir_prepopulate+0x27f/0x6d0 [ceph]
May 13 15:53:10 blade02 kernel: [93297.784318]  [a048a704]
handle_reply+0x854/0xc70 [ceph]
May 13 15:53:10 blade02 kernel: [93297.784331]  [a048c3f7]
dispatch+0xe7/0xa90 [ceph]
May 13 15:53:10 blade02 kernel: [93297.784342]  [a02a4a78] ?
ceph_tcp_recvmsg+0x48/0x60 [libceph]
May 13 15:53:10 blade02 kernel: [93297.784354]  [a02a7a9b]
try_read+0x4ab/0x10d0 [libceph]
May 13 15:53:10 blade02 kernel: [93297.784365]  [a02a9418] ?
try_write+0x9a8/0xdb0 [libceph]
May 13 15:53:10 blade02 kernel: [93297.784373]  [8101bc23] ?
native_sched_clock+0x13/0x80
May 13 15:53:10 blade02 kernel: [93297.784379]  [8109d585] ?
sched_clock_cpu+0xb5/0x100
May 13 15:53:10 blade02 kernel: [93297.784390]  [a02a98d9]
con_work+0xb9/0x640 [libceph]
May 13 15:53:10 blade02 kernel: [93297.784398]  [81083aa2]
process_one_work+0x182/0x450
May 13 15:53:10 blade02 kernel: [93297.784403]  [81084891]
worker_thread+0x121/0x410
May 13 15:53:10 blade02 kernel: [93297.784409]  [81084770] ?
rescuer_thread+0x430/0x430
May 13 15:53:10 blade02 kernel: [93297.784414]  [8108b5d2]
kthread+0xd2/0xf0
May 13 15:53:10 blade02 kernel: [93297.784420]  [8108b500] ?
kthread_create_on_node+0x1c0/0x1c0
May 13 15:53:10 blade02 kernel: [93297.784426]  [817330cc]
ret_from_fork+0x7c/0xb0
May 13 15:53:10 blade02 kernel: [93297.784431]  [8108b500] ?
kthread_create_on_node+0x1c0/0x1c0
May 13 15:53:10 blade02 kernel: [93297.784434] ---[ end trace
05d3f5ee1f31bc67 ]---
May 13 15:53:10 blade02 kernel: [93297.784437] ceph: fill_inode badness on
8807f7eaa5c0


Any idea what causes this?

Please help me.

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mds log message

2015-03-20 Thread Daniel Takatori Ohara
 16:22:19.564478 7f1608d49700  0 log_channel(default) log [WRN] :
slow request 482.457264 seconds old, received at 2015-03-20
16:14:17.107149: client_request(client.1727647:56325699 create #1
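
For anyone who finds this in the archive, a hedged sketch of how to see what the
MDS is actually blocked on; the daemon name is a placeholder and the admin-socket
commands depend on the Ceph version:

$ ceph health detail                     # lists the slow/blocked requests
$ ceph daemon mds.a dump_ops_in_flight   # on the MDS host: requests still in progress
$ ceph daemon mds.a session ls           # which clients hold the caps those requests wait on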

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] problem in cephfs for remove empty directory

2015-03-03 Thread Daniel Takatori Ohara
Hi John and Gregory,

The version of the Ceph client is 0.87 and the kernel is 3.13.

The debug logs are attached.

I have seen this problem with an older kernel, but I didn't find the solution in
the tracker.

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Tue, Mar 3, 2015 at 2:26 PM, Gregory Farnum g...@gregs42.com wrote:

 On Tue, Mar 3, 2015 at 9:24 AM, John Spray john.sp...@redhat.com wrote:
  On 03/03/2015 14:07, Daniel Takatori Ohara wrote:
 
  $ls test-daniel-old/
  total 0
  drwx-- 1 rmagalhaes BioInfoHSL Users0 Mar  2 10:52 ./
  drwx-- 1 rmagalhaes BioInfoHSL Users 773099838313 Mar  2 11:41 ../
 
  $rm -rf test-daniel-old/
  rm: cannot remove ‘test-daniel-old/’: Directory not empty
 
  $ls test-daniel-old/
  ls: cannot access
  test-daniel-old/M_S8_L001_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such
 file
  or directory
  ls: cannot access
  test-daniel-old/M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
  file or directory
  ls: cannot access
  test-daniel-old/M_S8_L002_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such
 file
  or directory
  ls: cannot access
  test-daniel-old/M_S8_L002_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
  file or directory
  ls: cannot access
  test-daniel-old/M_S8_L003_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such
 file
  or directory
  ls: cannot access
  test-daniel-old/M_S8_L003_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
  file or directory
  ls: cannot access
  test-daniel-old/M_S8_L004_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such
 file
  or directory
  ls: cannot access
  test-daniel-old/M_S8_L004_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
  file or directory
  total 0
  drwx-- 1 rmagalhaes BioInfoHSL Users0 Mar  2 10:52 ./
  drwx-- 1 rmagalhaes BioInfoHSL Users 773099838313 Mar  2 11:41 ../
  l? ? ?  ?   ??
  M_S8_L001_R1-2_001.fastq.gz_ref.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L002_R1-2_001.fastq.gz_ref.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L002_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L003_R1-2_001.fastq.gz_ref.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L003_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L004_R1-2_001.fastq.gz_ref.sam_fixed.bam
  l? ? ?  ?   ??
  M_S8_L004_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
 
  You don't say what version of the client (version of kernel, if it's the
  kernel client) this is.  It would appear that the client thinks there are
  some dentries that don't really exist.  You should enable verbose debug
 logs
  (with fuse client, debug client = 20) and reproduce this.  It looks
 like
  you had similar issues (subject: problem for remove files in cephfs) a
  while back, when Yan Zheng also advised you to get some debug logs.

 In particular this is a known bug in older kernels and is fixed in new
 enough ones. Unfortunately I don't have the bug link handy though. :(
 -Greg
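
For reference, a minimal sketch of how that client-side logging can be turned on
for ceph-fuse; the paths and monitor address below are only examples:

# ceph.conf on the client:
[client]
    debug client = 20
    log file = /var/log/ceph/$cluster-$name.log

# then remount with ceph-fuse and reproduce the failing rm:
$ ceph-fuse -m mon-host:6789 /mnt/cephfs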



log_mds.gz
Description: GNU Zip compressed data
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] problem in cephfs for remove empty directory

2015-03-03 Thread Daniel Takatori Ohara
Hi,

I have a problem when I try to remove an empty directory in CephFS. The
directory is empty, but it seems to still have broken file entries in the MDS.

*$ls test-daniel-old/*
total 0
drwx-- 1 rmagalhaes BioInfoHSL Users0 Mar  2 10:52 ./
drwx-- 1 rmagalhaes BioInfoHSL Users 773099838313 Mar  2 11:41 ../

*$rm -rf test-daniel-old/*
rm: cannot remove ‘test-daniel-old/’: Directory not empty

*$ls test-daniel-old/*
ls: cannot access
test-daniel-old/M_S8_L001_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such file
or directory
ls: cannot access
test-daniel-old/M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
file or directory
ls: cannot access
test-daniel-old/M_S8_L002_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such file
or directory
ls: cannot access
test-daniel-old/M_S8_L002_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
file or directory
ls: cannot access
test-daniel-old/M_S8_L003_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such file
or directory
ls: cannot access
test-daniel-old/M_S8_L003_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
file or directory
ls: cannot access
test-daniel-old/M_S8_L004_R1-2_001.fastq.gz_ref.sam_fixed.bam: No such file
or directory
ls: cannot access
test-daniel-old/M_S8_L004_R1-2_001.fastq.gz_sylvio.sam_fixed.bam: No such
file or directory
total 0
drwx-- 1 rmagalhaes BioInfoHSL Users0 Mar  2 10:52 ./
drwx-- 1 rmagalhaes BioInfoHSL Users 773099838313 Mar  2 11:41 ../
l? ? ?  ?   ??
M_S8_L001_R1-2_001.fastq.gz_ref.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L002_R1-2_001.fastq.gz_ref.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L002_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L003_R1-2_001.fastq.gz_ref.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L003_R1-2_001.fastq.gz_sylvio.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L004_R1-2_001.fastq.gz_ref.sam_fixed.bam
l? ? ?  ?   ??
M_S8_L004_R1-2_001.fastq.gz_sylvio.sam_fixed.bam


Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Lost Object

2015-02-27 Thread Daniel Takatori Ohara
Can anyone help me, please?

Attached is the MDS log with debug = 20.
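
For reference, roughly how a log like this can be produced; the daemon id below
is only a placeholder:

# persistent setting, in ceph.conf on the MDS host (restart the MDS afterwards):
[mds]
    debug mds = 20

# or at runtime, without a restart:
$ ceph tell mds.0 injectargs '--debug-mds 20'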

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Thu, Feb 26, 2015 at 4:21 PM, Daniel Takatori Ohara 
dtoh...@mochsl.org.br wrote:

 Hello,

 I have an problem. I will make a symbolic link for an file, but return the
 message : ln: failed to create symbolic link
 ‘./M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam’: File exists

 When i do the command ls, the result is

 l? ? ?  ?   ??
 M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam

 But, when do the command ls in the second time, the result not show the
 file.

 Anyone help me, please?

 Thank you,

 Att.

 ---
 Daniel Takatori Ohara.
 System Administrator - Lab. of Bioinformatics
 Molecular Oncology Center
 Instituto Sírio-Libanês de Ensino e Pesquisa
 Hospital Sírio-Libanês
 Phone: +55 11 3155-0200 (extension 1927)
 R: Cel. Nicolau dos Santos, 69
 São Paulo-SP. 01308-060
 http://www.bioinfo.mochsl.org.br




log_mds.gz
Description: GNU Zip compressed data
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Lost Object

2015-02-26 Thread Daniel Takatori Ohara
Hello,

I have a problem. When I try to create a symbolic link to a file, it returns the
message: ln: failed to create symbolic link
‘./M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam’: File exists

When I run the ls command, the result is

l? ? ?  ?   ??
M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam

But when I run ls a second time, the result does not show the
file.

Can anyone help me, please?

Thank you,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD down

2015-02-05 Thread Daniel Takatori Ohara
Hello Alex,

Thanks for the answer.

On the servers I use CentOS 6.6 with kernel 2.6.32, and on the clients I
use Ubuntu 14 with kernel 3.16.

And the version of Ceph is 0.87.

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Thu, Feb 5, 2015 at 10:43 AM, Alexis KOALLA alexis.koa...@orange.com
wrote:

  Hi Daniel,
 Could you be more precise about your issue, please?
 What OS is your Ceph cluster running on, and which Ceph version are you
 currently running?

 Anyway, I have experienced an issue that looks like yours.
 I have installed and configured a small Ceph micro-cluster on my PC for a
 quick demo. On this cluster I have 4 OSDs and 1 MON. There is no MDS.
 I have written a script that starts the cluster.
 In this script I start the monitor: ceph-mon -c /path/to/yourceph/confile
 -i mon_id
 I also start the 4 OSDs manually, like this: ceph-osd -c
 /path/to/yourceph/confile -i osd_id

 I also forced the OSDs to be in after the start.
 Right now it works fine, but I don't think it is the right way to
 proceed (starting the OSDs manually and putting them in).
 Maybe it can give you an idea of where to start investigating.
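
 A hedged sketch of that manual sequence, with placeholder ids and config path:

 # start the monitor and each OSD by hand (ids and paths are only examples):
 $ ceph-mon -c /etc/ceph/ceph.conf -i a
 $ ceph-osd -c /etc/ceph/ceph.conf -i 0

 # if an OSD stays out, mark it back in and check the tree:
 $ ceph osd in 0
 $ ceph osd tree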

 Regards
 Alex


 Le 05/02/2015 11:29, Daniel Takatori Ohara a écrit :

  Hi, anyone help me please.

  I have a cluster with 4 OSD's, 1 MDS and 1 MON.

  The osd.3 was down, and i need restart in the host with the command
 /etc/init.d/ceph restart osd.3.

  The osd.0 is marked down sometimes, but he is marked up automatically.

  [ceph@ceph-admin my-cluster]$ ceph osd tree
 # id   weight  type name       up/down reweight
 -1  50.63   root default
 -2  13.84   host ceph-osd1
 0   13.84   osd.0   up  1
 -3  14.76   host ceph-osd2
 1   14.76   osd.1   up  1
 -4  22.03   host ceph-osd3
 2   10.09   osd.2   up  0.8
 3   11.94   osd.3   down0

  Anyone, can help me, please?

  Thank's,

  Att.

  ---
 Daniel Takatori Ohara.
 System Administrator - Lab. of Bioinformatics
 Molecular Oncology Center
 Instituto Sírio-Libanês de Ensino e Pesquisa
 Hospital Sírio-Libanês
 Phone: +55 11 3155-0200 (extension 1927)
 R: Cel. Nicolau dos Santos, 69
 São Paulo-SP. 01308-060
 http://www.bioinfo.mochsl.org.br



 ___
 ceph-users mailing list
 ceph-us...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 --

 http://www.orange.com/

 *Alexis KOALLA*

 Orange/IMT/OLPS/ASE/DAPI/CSE

 Spécialiste en Technologies/Cloud Storage Services & Plateformes

 Specialist in Technologies/Cloud Storage Services & Platforms

 Tel: +33(0) 299 124 939 / +33 670 698 929
 alexis.koa...@orange.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSD down

2015-02-05 Thread Daniel Takatori Ohara
Hi, can anyone help me, please?

I have a cluster with 4 OSDs, 1 MDS and 1 MON.

osd.3 was down, and I needed to restart it on the host with the command
/etc/init.d/ceph restart osd.3.

osd.0 is sometimes marked down, but it is marked up again automatically.

[ceph@ceph-admin my-cluster]$ ceph osd tree
# id   weight  type name       up/down reweight
-1  50.63   root default
-2  13.84   host ceph-osd1
0   13.84   osd.0   up  1
-3  14.76   host ceph-osd2
1   14.76   osd.1   up  1
-4  22.03   host ceph-osd3
2   10.09   osd.2   up  0.8
3   11.94   osd.3   down0

Can anyone help me, please?

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] problem for remove files in cephfs

2015-01-16 Thread Daniel Takatori Ohara
Hi,

I have a problem removing a file in CephFS. With the ls command, all of
the file's attributes show up as ???.

*ls: cannot access refseq/source_step2: No such file or directory*
*total 0*
*drwxrwxr-x 1 dtohara BioInfoHSL Users0 Jan 15 15:01 .*
*drwxrwxr-x 1 dtohara BioInfoHSL Users 3.8G Jan 15 14:55 ..*
*l? ? ?   ?   ?? source_step2*

Can anyone help me?

PS: Sorry for my English.

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cache pressure fail

2014-11-07 Thread Daniel Takatori Ohara
Hi,

In my cluster, when I run the command ceph health detail, it shows me the
message:

mds0: Many clients (17) failing to respond to cache pressure(client_count: )

This message appeared when I upgraded Ceph from 0.80.7 to 0.87.
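
For anyone who hits the same warning, a hedged starting point; the daemon name
and the cache value below are only examples:

# see how many caps each client session is holding (on the MDS host):
$ ceph daemon mds.a session ls

# if the clients genuinely need a larger cache, raise it in ceph.conf on the
# MDS host (the default in this release is 100000 inodes; 400000 is only an
# illustration) and restart the MDS:
[mds]
    mds cache size = 400000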

Can anyone help me?

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Change port of Mon

2014-10-27 Thread Daniel Takatori Ohara
Hello,

Can anyone help me? How can I change the port of the mon?

And how can I change the cluster name?
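
For reference, a hedged sketch of both changes; the address, ids and names below
are placeholders, and changing a running monitor's address also means updating
the monmap (for example by removing and re-adding the mon), so treat this only
as an outline:

# ceph.conf: the monitor address includes its port
[mon.a]
    host = ceph-mon1
    mon addr = 192.168.0.10:6790

# the cluster name comes from the config file name (<cluster>.conf) and the
# --cluster option of the tools, for example:
$ ceph --cluster mycluster -s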

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ls hangs

2014-10-23 Thread Daniel Takatori Ohara
Hello,

I am new to Ceph. I created a cluster with 2 OSDs and 1 MDS.

But ls in a specific directory hangs.

Can anyone help me?

My clients are Ubuntu 14.04 with kernel 3.13.0-24-generic.

My servers are CentOS 6.5 with kernel 2.6.32-431.23.3.el6.x86_64.

The Ceph version is 0.80.5.
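
A hedged first-pass checklist for a hang like this; daemon names are placeholders:

$ ceph -s                  # overall cluster health
$ ceph mds stat            # is the MDS up and active?
$ ceph health detail       # any blocked requests or client warnings?

# on the client where ls hangs:
$ dmesg | tail             # kernel-client errors usually show up here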

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com