No speed improvement using the ramdisk; same as before.  I'll try
increasing my Crail Java client buffer size...
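
In case it helps, this is the change I have in mind (just a sketch; the
16 MB value is an arbitrary guess, and I may also need to raise the
crail.buffersize property to match, if I understand the docs correctly):

    int bufsize = 16 * 1024 * 1024;  // was 1024*1024
    CrailBuffer buf = OffHeapBuffer.wrap(ByteBuffer.allocateDirect(bufsize));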

Lou.

On Fri, Mar 6, 2020 at 8:25 AM Lou DeGenaro <[email protected]> wrote:

> Hi Adrian,
>
> Still a bit confused.  So I created a ramdisk and have Crail configured to
> use it:
>
> [root@sgt-pepper hugepages]# mount -t tmpfs -o size=2G ramdisk /tmp/hugepages
> [root@sgt-pepper hugepages]#
> ...
> [root@sgt-pepper hugepages]# mount | grep tmp
> ...
> ramdisk on /tmp/hugepages type tmpfs (rw,relatime,seclabel,size=2097152k)
> ...
>
> [root@sgt-pepper hugepages]# cat /usr/local/apache-crail-1.2-incubating/conf/crail-site.conf
> crail.namenode.address            crail://sgt-pepper:9060
> crail.cachepath                   /tmp/hugepages/cache
> crail.cachelimit                  1073741824
> crail.storage.tcp.interface       eth0
> crail.storage.tcp.datapath        /tmp/hugepages/data
> crail.storage.tcp.storagelimit    1073741824
>
> I put a file into Crail using CLI:
>
> [root@sgt-pepper hugepages]# /usr/local/apache-crail-1.2-incubating/bin/crail fs -copyFromLocal /root/G1.txt /G1.txt
> /usr/local/apache-crail-1.2-incubating/bin/crail fs -ls /
> ...
> Found 1 items
> -rw-rw-rw-   1 crail crail 1073741824 2020-03-06 07:09 /G1.txt
> ...
>
> I look for the file in the file system, but do not find it:
>
> [root@sgt-pepper hugepages]# pwd
> /tmp/hugepages
> [root@sgt-pepper hugepages]# ls
> cache  data
> [root@sgt-pepper hugepages]# ls -atl cache
> total 0
> drwxr-xr-x. 2 root root 40 Mar  6 07:10 .
> drwxrwxrwt. 4 root root 80 Mar  5 08:54 ..
> [root@sgt-pepper hugepages]# ls -atl data
> total 0
> drwxr-xr-x. 2 root root 40 Mar  5 13:56 .
> drwxrwxrwt. 4 root root 80 Mar  5 08:54 ..
>
> I will test the speed with my remote Crail Java client, even though I see
> no file there.  Doesn't the datanode keep the file in DRAM, so that disk
> (of any kind, hard or RAM) should not be involved in the first place?
>
> Lou.
>
>
>
> On Thu, Mar 5, 2020 at 9:22 AM Adrian Schuepbach <
> [email protected]> wrote:
>
>> Hi Lou
>>
>> It seems that you are doing disk access: you have no hugepages, and
>> /tmp/hugepages belongs to the root file system, which is most probably
>> an ext3 on a disk.
>>
>> I don't know if this is indeed what makes it slower, but you should
>> use Crail with a hugetlbfs file system mounted where you configured the
>> data and cache paths to be.
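>>
>> For example, something along these lines (just a sketch; the page count
>> and mount point are assumptions you would adapt to your setup):
>>
>>   # reserve 1024 x 2 MB huge pages, then mount hugetlbfs over the
>>   # directory that crail.cachepath and crail.storage.tcp.datapath use
>>   echo 1024 > /proc/sys/vm/nr_hugepages
>>   mkdir -p /tmp/hugepages
>>   mount -t hugetlbfs nodev /tmp/hugepages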
>>
>> By increasing the buffer size I mean increasing the value from
>> 1024*1024 in your code.
>>
>> Thanks
>> Adrian
>>
>> On 3/5/20 15:14, Lou DeGenaro wrote:
>> > [root@sgt-pepper ~]# mount
>> > sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>> > proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>> > devtmpfs on /dev type devtmpfs
>> > (rw,nosuid,seclabel,size=3991388k,nr_inodes=997847,mode=755)
>> > securityfs on /sys/kernel/security type securityfs
>> > (rw,nosuid,nodev,noexec,relatime)
>> > tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>> > devpts on /dev/pts type devpts
>> > (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>> > tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>> > tmpfs on /sys/fs/cgroup type tmpfs
>> > (ro,nosuid,nodev,noexec,seclabel,mode=755)
>> > cgroup on /sys/fs/cgroup/systemd type cgroup
>> >
>> (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
>> > pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
>> > cgroup on /sys/fs/cgroup/cpuset type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
>> > cgroup on /sys/fs/cgroup/hugetlb type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
>> > cgroup on /sys/fs/cgroup/memory type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
>> > cgroup on /sys/fs/cgroup/blkio type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
>> > cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
>> > cgroup on /sys/fs/cgroup/pids type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
>> > cgroup on /sys/fs/cgroup/freezer type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
>> > cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
>> > cgroup on /sys/fs/cgroup/devices type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
>> > cgroup on /sys/fs/cgroup/perf_event type cgroup
>> > (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
>> > configfs on /sys/kernel/config type configfs (rw,relatime)
>> > /dev/xvda2 on / type ext3 (rw,noatime,seclabel,data=ordered)
>> > selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
>> > mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>> > debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>> > hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>> > /dev/xvda1 on /boot type ext3 (rw,noatime,seclabel,data=ordered)
>> > none on /proc/xen type xenfs (rw,relatime)
>> > systemd-1 on /proc/sys/fs/binfmt_misc type autofs
>> >
>> (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=28876)
>> > overlay on
>> >
>> /var/lib/docker/overlay2/6dadbda2c0217397cdc88f043163a31abdef1e618db43e1187caf3f88e4be9c5/merged
>> > type overlay
>> >
>> (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/CVJZV2GQR7FFUFKDATWUC5XOHD:/var/lib/docker/overlay2/l/3TADZQEQKTFXPQHJ7HLQJH3YBP:/var/lib/docker/overlay2/l/AAIXB6R7K6R7WP76YQY4MNX6DB:/var/lib/docker/overlay2/l/HBG67PHTIV7E4TOGCV5565CJVU:/var/lib/docker/overlay2/l/S3A7TWP3B6UWEWAYG4E33DZC2O:/var/lib/docker/overlay2/l/D5W2VRAGMJU7ED26WDAMKE54MY:/var/lib/docker/overlay2/l/KUZMRHL4IAYCEBT74BBKVDC6IQ:/var/lib/docker/overlay2/l/HYGNGZ6MV3C5YVL2TCXT7E55PS:/var/lib/docker/overlay2/l/ZESSLGAWK2ZYDV3IU2A2RB736Q:/var/lib/docker/overlay2/l/K43FJ3RNBAUET3G45SI47KHUQK:/var/lib/docker/overlay2/l/Q26VC3ZXOGXBYHUANTP3WDI5X6:/var/lib/docker/overlay2/l/K6LQYQTUY75DFZH6NZ34S65GAQ:/var/lib/docker/overlay2/l/E3DMUWQXEBRL5LCV7Z5UK4N6GK,upperdir=/var/lib/docker/overlay2/6dadbda2c0217397cdc88f043163a31abdef1e618db43e1187caf3f88e4be9c5/diff,workdir=/var/lib/docker/overlay2/6dadbda2c0217397cdc88f043163a31abdef1e618db43e1187caf3f88e4be9c5/work)
>> > proc on /run/docker/netns/default type proc
>> > (rw,nosuid,nodev,noexec,relatime)
>> > shm on
>> >
>> /var/lib/docker/containers/4f3bff2fbb9fa38110d3ae70a86a2a2f0a999299597c8d0c7f072eca30d8b186/mounts/shm
>> > type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)
>> > overlay on
>> >
>> /var/lib/docker/overlay2/0974ad067d2681ce88bc2a5fe0dd4c27a06921b4db976bfb7fa7e95c9eca1800/merged
>> > type overlay
>> >
>> (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/ABKWJRSETYWLWIFWINXQALMKHA:/var/lib/docker/overlay2/l/3TADZQEQKTFXPQHJ7HLQJH3YBP:/var/lib/docker/overlay2/l/AAIXB6R7K6R7WP76YQY4MNX6DB:/var/lib/docker/overlay2/l/HBG67PHTIV7E4TOGCV5565CJVU:/var/lib/docker/overlay2/l/S3A7TWP3B6UWEWAYG4E33DZC2O:/var/lib/docker/overlay2/l/D5W2VRAGMJU7ED26WDAMKE54MY:/var/lib/docker/overlay2/l/KUZMRHL4IAYCEBT74BBKVDC6IQ:/var/lib/docker/overlay2/l/HYGNGZ6MV3C5YVL2TCXT7E55PS:/var/lib/docker/overlay2/l/ZESSLGAWK2ZYDV3IU2A2RB736Q:/var/lib/docker/overlay2/l/K43FJ3RNBAUET3G45SI47KHUQK:/var/lib/docker/overlay2/l/Q26VC3ZXOGXBYHUANTP3WDI5X6:/var/lib/docker/overlay2/l/K6LQYQTUY75DFZH6NZ34S65GAQ:/var/lib/docker/overlay2/l/E3DMUWQXEBRL5LCV7Z5UK4N6GK,upperdir=/var/lib/docker/overlay2/0974ad067d2681ce88bc2a5fe0dd4c27a06921b4db976bfb7fa7e95c9eca1800/diff,workdir=/var/lib/docker/overlay2/0974ad067d2681ce88bc2a5fe0dd4c27a06921b4db976bfb7fa7e95c9eca1800/work)
>> > shm on
>> >
>> /var/lib/docker/containers/6ec82c01ef8fcf8bae69fd55b7fe82ee7e685e28a9331a4f1aeb7425e670a433/mounts/shm
>> > type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k)
>> > tmpfs on /run/user/0 type tmpfs
>> > (rw,nosuid,nodev,relatime,seclabel,size=800344k,mode=700)
>> > [root@sgt-pepper ~]# cat /proc/meminfo
>> > MemTotal:        8003408 kB
>> > MemFree:          778948 kB
>> > MemAvailable:    7202972 kB
>> > Buffers:          235960 kB
>> > Cached:          6170780 kB
>> > SwapCached:            0 kB
>> > Active:          2572556 kB
>> > Inactive:        4205592 kB
>> > Active(anon):     372284 kB
>> > Inactive(anon):     8072 kB
>> > Active(file):    2200272 kB
>> > Inactive(file):  4197520 kB
>> > Unevictable:           0 kB
>> > Mlocked:               0 kB
>> > SwapTotal:       2096444 kB
>> > SwapFree:        2096444 kB
>> > Dirty:                20 kB
>> > Writeback:             0 kB
>> > AnonPages:        371448 kB
>> > Mapped:          2192120 kB
>> > Shmem:              8956 kB
>> > Slab:             366532 kB
>> > SReclaimable:     329032 kB
>> > SUnreclaim:        37500 kB
>> > KernelStack:        3456 kB
>> > PageTables:         9948 kB
>> > NFS_Unstable:          0 kB
>> > Bounce:                0 kB
>> > WritebackTmp:          0 kB
>> > CommitLimit:     6098148 kB
>> > Committed_AS:    1331684 kB
>> > VmallocTotal:   34359738367 kB
>> > VmallocUsed:       20280 kB
>> > VmallocChunk:   34359715812 kB
>> > HardwareCorrupted:     0 kB
>> > AnonHugePages:    239616 kB
>> > CmaTotal:              0 kB
>> > CmaFree:               0 kB
>> > HugePages_Total:       0
>> > HugePages_Free:        0
>> > HugePages_Rsvd:        0
>> > HugePages_Surp:        0
>> > Hugepagesize:       2048 kB
>> > DirectMap4k:      145408 kB
>> > DirectMap2M:     6141952 kB
>> > DirectMap1G:     2097152 kB
>> >
>> > By increased buffer size, do you mean in both the Crail conf and the
>> > Java client?
>> >
>> > Lou.
>> >
>> > On Thu, Mar 5, 2020 at 9:08 AM Adrian Schuepbach <
>> > [email protected]> wrote:
>> >
>> >> Hi Lou
>> >>
>> >> Thanks. Can you show the output of `mount` and `cat /proc/meminfo`?
>> >>
>> >> Can you measure again with an increased buffer size?
>> >>
>> >> Thanks
>> >> Adrian
>> >>
>> >> On 3/5/20 14:48, Lou DeGenaro wrote:
>> >>> Hi Adrian,
>> >>>
>> >>> re: iperf
>> >>>
>> >>> iperf -s
>> >>> iperf -c <hostname>
>> >>>
>> >>> re: bandwidth
>> >>>
>> >>> bandwidth is calculated offline from the seconds printed and the known
>> >>> 1 GB file size (the arithmetic is spelled out after the commands below).
>> >>> The 1 GB file is generated using:
>> >>>
>> >>> base64 /dev/urandom | head -c 1073741824 > G1.txt
>> >>>
>> >>> and is stored into Crail using:
>> >>>
>> >>> bin/crail fs -copyFromLocal /root/G1.txt /G1.txt
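>> >>>
>> >>> (Spelling out the arithmetic: 1 GB here is 1073741824 bytes = 8.59 Gbit,
>> >>> so, for example, a printed time of 3.4 seconds would work out to roughly
>> >>> 8.59 / 3.4 = 2.5 Gb/sec; the 3.4 is only an illustrative number.)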
>> >>>
>> >>>
>> >>> re: Crail configuration
>> >>>
>> >>> crail.namenode.address            crail://sgt-pepper:9060
>> >>> crail.cachepath                   /tmp/hugepages/cache
>> >>> crail.cachelimit                  1073741824
>> >>> crail.storage.tcp.interface       eth0
>> >>> crail.storage.tcp.datapath        /tmp/hugepages/data
>> >>> crail.storage.tcp.storagelimit    1073741824
>> >>>
>> >>> I did try using iobench some time ago, without success; see my
>> >>> newsgroup post of 2020/02/07.
>> >>>
>> >>> Thanks for your help and interest.
>> >>>
>> >>> Lou.
>> >>>
>> >>>
>> >>> On Thu, Mar 5, 2020 at 8:26 AM Adrian Schuepbach <
>> >>> [email protected]> wrote:
>> >>>
>> >>>> Hi Lou
>> >>>>
>> >>>> It is hard to say without knowing more details.
>> >>>>
>> >>>> Can you post the exact iperf command you used?
>> >>>>
>> >>>> Also, the Crail code you show does not seem to be the one you use for
>> >>>> the measurements; at least I don't see where it computes the bandwidth.
>> >>>> Can you post the actual code you are using?
>> >>>>
>> >>>> Have you configured Crail to store data on hugepages? Or will
>> >>>> it access disks?
>> >>>>
>> >>>> There is also iobench, a performance measurement tool that comes
>> >>>> with Crail. Have you tried measuring with this one?
>> >>>>
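>> >>>> For example, the invocation looks something like this (from memory, so
>> >>>> the exact flags may differ; -t is the operation, -s the buffer size, -k
>> >>>> the number of operations, and -f an example Crail path):
>> >>>>
>> >>>>   ./bin/crail iobench -t write -s 1048576 -k 1024 -f /iobench.dat
>> >>>>   ./bin/crail iobench -t read -s 1048576 -k 1024 -f /iobench.dat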
>> >>>>
>> >>>> Best
>> >>>> Adrian
>> >>>>
>> >>>> On 3/5/20 14:01, Lou DeGenaro wrote:
>> >>>>> Hello,
>> >>>>>
>> >>>>> I'm comparing Crail to iperf.  VM-A is used to run the server(s);
>> >>>>> VM-B is used to run the client.  Both VMs are CentOS 7 with 8 GB
>> >>>>> memory + 2 CPUs.  Test runs are non-overlapping.
>> >>>>>
>> >>>>> For the Crail case, a 1 GB file is posted to the server and a simple
>> >>>>> Java client is employed (see below).
>> >>>>>
>> >>>>> Results:
>> >>>>>
>> >>>>> iperf: 4.05 Gb/sec
>> >>>>> Crail: 2.52 Gb/sec
>> >>>>>
>> >>>>> Why is iperf so much better??
>> >>>>>
>> >>>>> Lou.
>> >>>>>
>> >>>>> -----
>> >>>>>
>> >>>>> Crail Java Client:
>> >>>>>
>> >>>>> public static void main(String[] args) {
>> >>>>>     try {
>> >>>>>         // initialize
>> >>>>>         String filename = "/G1.txt";
>> >>>>>         int filesize = 1024*1024*1024;
>> >>>>>         int bufsize = 1024*1024;
>> >>>>>         CrailBuffer buf = OffHeapBuffer.wrap(ByteBuffer.allocateDirect(bufsize));
>> >>>>>         CrailConfiguration cconf = CrailConfiguration.createConfigurationFromFile();
>> >>>>>         CrailStore cstore = CrailStore.newInstance(cconf);
>> >>>>>         CrailFile file = cstore.lookup(filename).get().asFile();
>> >>>>>         CrailInputStream directStream = file.getDirectInputStream(file.getCapacity());
>> >>>>>         long sumbytes = 0;
>> >>>>>         // run test: read the whole file, one buffer at a time
>> >>>>>         long start = System.currentTimeMillis();
>> >>>>>         while (sumbytes < filesize) {
>> >>>>>             buf.clear();
>> >>>>>             CrailResult cr = directStream.read(buf).get();
>> >>>>>             long ret = cr.getLen();
>> >>>>>             sumbytes = sumbytes + ret;
>> >>>>>         }
>> >>>>>         long end = System.currentTimeMillis();
>> >>>>>         // print elapsed seconds and clean up
>> >>>>>         double executionTime = ((double) (end - start)) / 1000.0;
>> >>>>>         System.out.println("time: " + executionTime);
>> >>>>>         directStream.close();
>> >>>>>         cstore.close();
>> >>>>>     }
>> >>>>>     catch (Exception e) {
>> >>>>>         e.printStackTrace();
>> >>>>>     }
>> >>>>> }
>> >>>>>
>> >>>> --
>> >>>> Adrian Schüpbach, Dr. sc. ETH Zürich
>> >>>>
>> >>>>
>> >> --
>> >> Adrian Schüpbach, Dr. sc. ETH Zürich
>> >>
>> >>
>> --
>> Adrian Schüpbach, Dr. sc. ETH Zürich
>>
>>
