I may have misspoken about the 2GB.  I can't find the screen with the data.

Below are some memory stats captured just before rebooting the system.  The
core issue might be the lack of indexing.

-------------------------------------------------------------------------
free -t -m
             total       used       free     shared    buffers     cached
Mem:           592        585          6          0         20        491
-/+ buffers/cache:         73        518
Swap:            0          0          0
Total:         592        585          6
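For what it's worth, the "-/+ buffers/cache" row is the number that matters
here: buffers and page cache are reclaimable, so applications are only using
about 73 MB.  The same figure can be derived directly from /proc/meminfo (a
generic Linux sketch, not Sedna-specific):

```shell
# Application-used memory = total - free - buffers - cached,
# i.e. the "used" column of the -/+ buffers/cache row in free(1).
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2}
     /^Buffers:/ {b=$2} /^Cached:/  {c=$2}
     END {printf "app-used: %d MB\n", (t - f - b - c) / 1024}' /proc/meminfo
```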


-------------------------------------------------------------------------
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0   6640  21024 503796    0    0     3    51   17  211  3  2  3  8


-------------------------------------------------------------------------
top
top - 13:52:22 up 1 day,  3:22,  1 user,  load average: 3.30, 3.44, 3.08
Tasks:  66 total,   3 running,  63 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.5%us,  1.5%sy,  0.0%ni,  2.7%id, 14.3%wa,  0.0%hi,  0.0%si, 81.1%st
Mem:    606804k total,   599424k used,     7380k free,    20912k buffers
Swap:        0k total,        0k used,        0k free,   502452k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  593 ubuntu    20   0 1073m  49m  45m R 43.2  8.3   1204:39 se_trn
  548 ubuntu    20   0  228m 210m 202m S 40.0 35.6 252:57.28 se_sm
 1019 ubuntu    20   0 19256 1252  960 R  0.3  0.2   0:00.01 top
    1 root      20   0 23840 1636  948 S  0.0  0.3   0:00.34 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:01.12 ksoftirqd/0
    4 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
....

-----Original Message-----
From: Ivan Shcheklein [mailto:shchekl...@gmail.com] 
Sent: Thursday, July 07, 2011 2:48 PM
To: Malcolm Davis
Cc: sedna-discussion@lists.sourceforge.net
Subject: Re: [Sedna-discussion] Sedna memory requirement and configuration

Malcolm,

First of all, you should determine which process takes 2GB: se_trn or se_sm?
How many buffers do you use for se_sm?

What kind of memory does it use, virtual or physical? Can you send the top
output for that process?
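A quick way to answer both questions from the shell (a generic procps
sketch, not Sedna-specific; the PIDs come from whatever is running on your
box):

```shell
# Top memory consumers sorted by resident set size.
# RSS is physical memory actually resident; VSZ is the virtual size.
# se_trn/se_sm should show up near the top if they hold the 2GB.
ps -eo pid,user,rss,vsz,comm --sort=-rss | head -n 5
```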

Ivan



        Hello Ivan,

         

        Thank you very much for the response.  

         

        We do a bulk data load, and then incremental updates that occur in
batch mode.   In some systems, the bulk load can be as much as 3 GB, with
incremental batch updates of 20-30 MB.

         

        We prevent access to the system during bulk and batch updates, so no
queries occur.  

         

        The problem is that we noticed one of the Sedna systems using 2 GB of
RAM, which basically dragged down an Amazon micro instance and made it
unusable. (We stopped and started the instance and it then worked correctly.)

         

        We have not discovered the root cause of the memory problem, and are
still investigating other concerns.  (The OS and Sedna are the only things we
have running on the micro instance.)

         

        We will start profiling Sedna requests in an attempt to discover the
root cause of some of the issues.

         

        Thanks again for the quick response,

        Malcolm


        From: Ivan Shcheklein [mailto:shchekl...@gmail.com] 
        Sent: Thursday, July 07, 2011 2:07 PM
        To: Malcolm Davis
        Cc: sedna-discussion@lists.sourceforge.net
        Subject: Re: [Sedna-discussion] Sedna memory requirement and
configuration

         

        Hi Malcolm,

         

                I understand the problem of determining memory requirements
is more complicated than just data volume.  There is the number of nodes to
consider, and the number and type of the indexes.

                 

                I was curious if any formulas existed I could utilize to
determine minimum memory requirements.

         

        No, there is no such formula.  Apart from the data structure and
indexes there is also one important factor - the workload - i.e. the
queries/updates you run. Besides, what does "enough" mean? 100MB is
*physically* enough to run any query on any data.

         

        I believe the only really effective approach is to analyze the
queries. Run them and look at how many blocks they read/write (this
information is available in the event log after the session is closed).

         

        Ivan Shcheklein,

        Sedna Team



