Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-16 Thread Alfredo Deza
On Wed, Nov 15, 2017 at 8:31 AM, Wei Jin  wrote:
> I tried purge/purgedata and then re-ran the deploy command a few
> times, and it still fails to start the OSD.
> There is no error log; does anyone know what the problem is?

Seems like this is OSD 0, right? Have you checked for startup errors
in /var/log/ceph/, or looked at the daemon's output with systemctl?
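
For example (assuming the default log path and the standard
ceph-osd@<id> systemd unit for OSD 0):

less /var/log/ceph/ceph-osd.0.log
systemctl status ceph-osd@0
journalctl -u ceph-osd@0 --no-pager -n 100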

If it still will not start, try running the OSD in the foreground
(assuming OSD 0):

/usr/bin/ceph-osd --debug_osd 20 -d -f --cluster ceph --id 0
--setuser ceph --setgroup ceph

Behind the scenes, ceph-disk is preparing these devices and
associating them with the cluster as OSD 0. If you've tried this many
times already, I am suspicious of the same OSD id being reused or of
the drives being polluted by earlier attempts.
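
For example, something like this would show whether a stale OSD id is
still registered, and would wipe the drive before re-deploying (a
sketch; the zap commands destroy all data on /dev/sdb):

ceph osd tree
ceph-deploy disk zap n10-075-094:sdb
# or, directly on the OSD node:
ceph-disk zap /dev/sdb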

Seems like you are using filestore as well, so sdb1 will probably be
your data partition, mounted at /var/lib/ceph/osd/ceph-0, and sdb2
your journal, linked at /var/lib/ceph/osd/ceph-0/journal.

Make sure those are mounted and linked properly.
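
A quick check (a sketch, assuming OSD 0):

mount | grep /var/lib/ceph/osd/ceph-0
ls -l /var/lib/ceph/osd/ceph-0/journal
ceph-disk list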

> BTW, my OS is Debian with a 4.4 kernel.
> Thanks.
>
>
> On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin  wrote:
>> Hi, List,
>>
>> My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for
>> some machines/disks it fails to start the OSD.
>> I tried many times; some succeed but others fail, and there is no error
>> info.
>> Following is ceph-deploy log for one disk:
>>
>>
>> root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /root/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create
>> --zap-disk n10-075-094:sdb:sdb
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> [ceph_deploy.cli][INFO  ]  username  : None
>> [ceph_deploy.cli][INFO  ]  block_db  : None
>> [ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094',
>> '/dev/sdb', '/dev/sdb')]
>> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
>> [ceph_deploy.cli][INFO  ]  verbose   : False
>> [ceph_deploy.cli][INFO  ]  bluestore : None
>> [ceph_deploy.cli][INFO  ]  block_wal : None
>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> [ceph_deploy.cli][INFO  ]  subcommand: create
>> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
>> /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO  ]  quiet : False
>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>> 
>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
>> [ceph_deploy.cli][INFO  ]  filestore : None
>> [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7f566ae9a938>
>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> [ceph_deploy.cli][INFO  ]  default_release   : False
>> [ceph_deploy.cli][INFO  ]  zap_disk  : True
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
>> n10-075-094:/dev/sdb:/dev/sdb
>> [n10-075-094][DEBUG ] connected to host: n10-075-094
>> [n10-075-094][DEBUG ] detect platform information from remote host
>> [n10-075-094][DEBUG ] detect machine type
>> [n10-075-094][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
>> [ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
>> [n10-075-094][DEBUG ] write cluster configuration to
>> /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal
>> /dev/sdb activate True
>> [n10-075-094][DEBUG ] find the location of an executable
>> [n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
>> --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --cluster=ceph --show-config-value=fsid
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
>> /sys/dev/block/8:16/dm/uuid
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --cluster=ceph --show-config-value=osd_journal_size
>> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
>> /sys/dev/block/8:16/dm/uuid
>> [n10-075-094][WARNIN] get_dm_uuid: 

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried purge/purgedata and then re-ran the deploy command a few
times, and it still fails to start the OSD.
There is no error log; does anyone know what the problem is?
BTW, my OS is Debian with a 4.4 kernel.
Thanks.


On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin  wrote:
> Hi, List,
>
> My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for
> some machines/disks it fails to start the OSD.
> I tried many times; some succeed but others fail, and there is no error
> info.
> Following is ceph-deploy log for one disk:
>
>
> root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create
> --zap-disk n10-075-094:sdb:sdb
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  block_db  : None
> [ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094',
> '/dev/sdb', '/dev/sdb')]
> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  bluestore : None
> [ceph_deploy.cli][INFO  ]  block_wal : None
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  subcommand: create
> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
> /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
> [ceph_deploy.cli][INFO  ]  filestore : None
> [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7f566ae9a938>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> [ceph_deploy.cli][INFO  ]  zap_disk  : True
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> n10-075-094:/dev/sdb:/dev/sdb
> [n10-075-094][DEBUG ] connected to host: n10-075-094
> [n10-075-094][DEBUG ] detect platform information from remote host
> [n10-075-094][DEBUG ] detect machine type
> [n10-075-094][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
> [ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
> [n10-075-094][DEBUG ] write cluster configuration to
> /etc/ceph/{cluster}.conf
> [ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal
> /dev/sdb activate True
> [n10-075-094][DEBUG ] find the location of an executable
> [n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
> --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=osd_journal_size
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
> /sys/dev/block/8:17/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is
> /sys/dev/block/8:18/dm/uuid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] zap: Zapping partition table on /dev/sdb
> [n10-075-094][WARNIN] 

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, List,

My machine has 12 SSD disks.
There are some errors from ceph-deploy; it fails randomly.

root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create
--zap-disk n10-075-094:sdb:sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  block_db  : None
[ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094',
'/dev/sdb', '/dev/sdb')]
[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  bluestore : None
[ceph_deploy.cli][INFO  ]  block_wal : None
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   :

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : xfs
[ceph_deploy.cli][INFO  ]  filestore : None
[ceph_deploy.cli][INFO  ]  func  : 
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : True
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
n10-075-094:/dev/sdb:/dev/sdb
[n10-075-094][DEBUG ] connected to host: n10-075-094
[n10-075-094][DEBUG ] detect platform information from remote host
[n10-075-094][DEBUG ] detect machine type
[n10-075-094][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
[ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
[n10-075-094][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal
/dev/sdb activate True
[n10-075-094][DEBUG ] find the location of an executable
[n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
--zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
--check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
--cluster ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
--check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
--cluster ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
--check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
--cluster ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=osd_journal_size
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
/sys/dev/block/8:17/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is
/sys/dev/block/8:18/dm/uuid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] zap: Zapping partition table on /dev/sdb
[n10-075-094][WARNIN] command_check_call: Running command: /sbin/sgdisk
--zap-all -- /dev/sdb
[n10-075-094][WARNIN] Caution: invalid backup GPT header, but valid main
header; regenerating
[n10-075-094][WARNIN] backup header from main header.
[n10-075-094][WARNIN]
[n10-075-094][WARNIN] Warning! Main and backup partition tables differ! Use
the 'c' and 'e' options
[n10-075-094][WARNIN] on the recovery & transformation menu to examine the
two tables.
[n10-075-094][WARNIN]
[n10-075-094][WARNIN] Warning! One or more CRCs don't match. You should
repair the disk!
[n10-075-094][WARNIN]
[n10-075-094][DEBUG ] **

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, List,

My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for some
machines/disks it fails to start the OSD.
I tried many times; some succeed but others fail, and there is no error info.
Following is ceph-deploy log for one disk:


root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create 
--zap-disk n10-075-094:sdb:sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  block_db  : None
[ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094', 
'/dev/sdb', '/dev/sdb')]
[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  bluestore : None
[ceph_deploy.cli][INFO  ]  block_wal : None
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   : 

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : xfs
[ceph_deploy.cli][INFO  ]  filestore : None
[ceph_deploy.cli][INFO  ]  func  : 
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : True
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
n10-075-094:/dev/sdb:/dev/sdb
[n10-075-094][DEBUG ] connected to host: n10-075-094
[n10-075-094][DEBUG ] detect platform information from remote host
[n10-075-094][DEBUG ] detect machine type
[n10-075-094][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
[ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
[n10-075-094][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal 
/dev/sdb activate True
[n10-075-094][DEBUG ] find the location of an executable
[n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare 
--zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log 
--cluster ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster 
ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster 
ceph --setuser ceph --setgroup ceph
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is 
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is 
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is 
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is 
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is 
/sys/dev/block/8:17/dm/uuid
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is 
/sys/dev/block/8:18/dm/uuid
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is 
/sys/dev/block/8:16/dm/uuid
[n10-075-094][WARNIN] zap: Zapping partition table on /dev/sdb
[n10-075-094][WARNIN] command_check_call: Running command: /sbin/sgdisk 
--zap-all -- /dev/sdb
[n10-075-094][WARNIN] Caution: invalid backup GPT header, but valid main 
header; regenerating
[n10-075-094][WARNIN] backup header from main header.
[n10-075-094][WARNIN]
[n10-075-094][WARNIN] Warning! Main and backup partition tables differ! Use the 
'c' and 'e' options
[n10-075-094][WARNIN] on the recovery & transformation menu to examine the two 
tables.