Re: High Cpu sys usage

2016-03-19 Thread Patrick Plaatje
Yeah, I didn't pay attention to the cached memory at all, my bad!

I remember running into a similar situation a couple of years ago; one of the 
ways we investigated our memory profile was to produce a full heap dump and 
manually analyse it using a tool like MAT.
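A sketch of capturing such a dump for MAT, assuming a HotSpot JDK (which ships `jmap`); the `pgrep` pattern and dump path are placeholders, not from the thread:

```shell
# Sketch: capture a full heap dump of a running Solr JVM for analysis in MAT.
# Assumes a HotSpot JDK; the pgrep pattern and paths are placeholder assumptions.
SOLR_PID=$(pgrep -f start.jar | head -n 1)
DUMP_FILE="/tmp/solr-heap-$(date +%Y%m%d-%H%M%S).hprof"
# -dump:live triggers a full GC first, so only reachable objects are dumped
jmap -dump:live,format=b,file="$DUMP_FILE" "$SOLR_PID"
# open $DUMP_FILE in Eclipse MAT and start with the "Leak Suspects" report
```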

Cheers,
-patrick




On 17/03/2016, 21:58, "Otis Gospodnetić" <otis.gospodne...@gmail.com> wrote:

>Hi,
>
>On Wed, Mar 16, 2016 at 10:59 AM, Patrick Plaatje <pplaa...@gmail.com>
>wrote:
>
>> Hi,
>>
>> From the sar output you supplied, it looks like you might have a memory
>> issue on your hosts. The memory usage just before your crash seems to be
>> *very* close to 100%. Even the slightest increase (Solr itself, or possibly
>> by a system service) could cause the system to crash. What are the
>> specifications of your hosts and how much memory are you allocating?
>
>
>That's normal actually - http://www.linuxatemyram.com/
>
>You *want* Linux to be using all your memory - you paid for it :)
>
>Otis
>--
>Monitoring - Log Management - Alerting - Anomaly Detection
>Solr & Elasticsearch Consulting Support Training - http://sematext.com/
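Otis's point can be read straight off the numbers in this thread: most of the "used" memory is reclaimable page cache. A minimal sketch, using the 03:42:40 sar -r sample from above laid out like an older `free -k` (the column layout is an assumption):

```shell
# Sketch: how much memory is genuinely used vs. reclaimable page cache.
# Numbers mirror the thread's 03:42:40 sar -r sample; layout is assumed.
free_output='              total        used        free     buffers      cached
Mem:      198152544   196671436     1481108      361676    75718320'
# truly used = used - buffers - cached (both are reclaimable by the kernel)
echo "$free_output" | awk '/^Mem:/ {printf "%.1f%% truly used\n", ($3-$5-$6)/$2*100}'
# -> roughly 61%, not the alarming 99% that %memused suggests
```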
>> On 16/03/2016, 14:52, "YouPeng Yang" <yypvsxf19870...@gmail.com> wrote:
>>
>> >Hi
>> > It happened again, and worse, the system crashed; we could not even
>> >connect to it with ssh.
>> > I used the sar command to capture statistics about it. Here
>> >are my details:
>> >
>> >
>> >[1] CPU (via sar -u); we had to restart the system, as the
>> >LINUX RESTART marker in the logs shows.
>>
>> >--
>> >03:00:01 PM all  7.61  0.00  0.92  0.07  0.00
>> >91.40
>> >03:10:01 PM all  7.71  0.00  1.29  0.06  0.00
>> >90.94
>> >03:20:01 PM all  7.62  0.00  1.98  0.06  0.00
>> >90.34
>> >03:30:35 PM all  5.65  0.00 31.08  0.04  0.00
>> >63.23
>> >03:42:40 PM all 47.58  0.00 52.25  0.00  0.00
>> > 0.16
>> >Average:all  8.21  0.00  1.57  0.05  0.00
>> >90.17
>> >
>> >04:42:04 PM   LINUX RESTART
>> >
>> >04:50:01 PM CPU %user %nice   %system   %iowait%steal
>> >%idle
>> >05:00:01 PM all  3.49  0.00  0.62  0.15  0.00
>> >95.75
>> >05:10:01 PM all  9.03  0.00  0.92  0.28  0.00
>> >89.77
>> >05:20:01 PM all  7.06  0.00  0.78  0.05  0.00
>> >92.11
>> >05:30:01 PM all  6.67  0.00  0.79  0.06  0.00
>> >92.48
>> >05:40:01 PM all  6.26  0.00  0.76  0.05  0.00
>> >92.93
>> >05:50:01 PM all  5.49  0.00  0.71  0.05  0.00
>> >93.75
>>
>> >--
>> >
>> >[2] Memory (via sar -r)
>>
>> >--
>> >03:00:01 PM   1519272 196633272  99.23  361112  76364340 143574212
>> >47.77
>> >03:10:01 PM   1451764 196700780  99.27  361196  76336340 143581608
>> >47.77
>> >03:20:01 PM   1453400 196699144  99.27  361448  76248584 143551128
>> >47.76
>> >03:30:35 PM   1513844 196638700  99.24  361648  76022016 143828244
>> >47.85
>> >03:42:40 PM   1481108 196671436  99.25  361676  75718320 144478784
>> >48.07
>> >Average:      5051607 193100937  97.45  362421  81775777 142758861
>> >47.50
>> >
>> >04:42:04 PM   LINUX RESTART
>> >
>> >04:50:01 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit
>> >%commit
>> >05:00:01 PM 154357132  43795412  22.10   92012  18648644 134950460
>> >44.90
>> >05:10:01 PM 136468244  61684300  31.13  219572  31709216 134966548
>> >44.91
>> >05:20:01 PM 135092452  63060092  31.82  221488  32162324 134949788
>> >44.90
>> >05:30:01 PM 133410464  64742080  32.67  233848  32793848 134976828
>> >44.91
>> >05:40:01 PM 132022052  66130492  33.37  235812  33278908 135007268
>> >44.92
>> >05:50:01 PM 130630408  67522136  34.08  237140  33900912 135099764
>> >44.95

Re: High Cpu sys usage

2016-03-19 Thread Patrick Plaatje
Hi,

From the sar output you supplied, it looks like you might have a memory issue 
on your hosts. The memory usage just before your crash seems to be *very* 
close to 100%. Even the slightest increase (Solr itself, or possibly by a 
system service) could cause the system to crash. What are the specifications of 
your hosts and how much memory are you allocating?

Cheers,
-patrick




On 16/03/2016, 14:52, "YouPeng Yang"  wrote:

>Hi
> It happened again, and worse, the system crashed; we could not even
>connect to it with ssh.
> I used the sar command to capture statistics about it. Here
>are my details:
>
>
>[1] CPU (via sar -u); we had to restart the system, as the
>LINUX RESTART marker in the logs shows.
>--
>03:00:01 PM all  7.61  0.00  0.92  0.07  0.00
>91.40
>03:10:01 PM all  7.71  0.00  1.29  0.06  0.00
>90.94
>03:20:01 PM all  7.62  0.00  1.98  0.06  0.00
>90.34
>03:30:35 PM all  5.65  0.00 31.08  0.04  0.00
>63.23
>03:42:40 PM all 47.58  0.00 52.25  0.00  0.00
> 0.16
>Average:all  8.21  0.00  1.57  0.05  0.00
>90.17
>
>04:42:04 PM   LINUX RESTART
>
>04:50:01 PM CPU %user %nice   %system   %iowait%steal
>%idle
>05:00:01 PM all  3.49  0.00  0.62  0.15  0.00
>95.75
>05:10:01 PM all  9.03  0.00  0.92  0.28  0.00
>89.77
>05:20:01 PM all  7.06  0.00  0.78  0.05  0.00
>92.11
>05:30:01 PM all  6.67  0.00  0.79  0.06  0.00
>92.48
>05:40:01 PM all  6.26  0.00  0.76  0.05  0.00
>92.93
>05:50:01 PM all  5.49  0.00  0.71  0.05  0.00
>93.75
>--
>
>[2] Memory (via sar -r)
>--
>03:00:01 PM   1519272 196633272  99.23  361112  76364340 143574212
>47.77
>03:10:01 PM   1451764 196700780  99.27  361196  76336340 143581608
>47.77
>03:20:01 PM   1453400 196699144  99.27  361448  76248584 143551128
>47.76
>03:30:35 PM   1513844 196638700  99.24  361648  76022016 143828244
>47.85
>03:42:40 PM   1481108 196671436  99.25  361676  75718320 144478784
>48.07
>Average:      5051607 193100937  97.45  362421  81775777 142758861
>47.50
>
>04:42:04 PM   LINUX RESTART
>
>04:50:01 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit
>%commit
>05:00:01 PM 154357132  43795412  22.10   92012  18648644 134950460
>44.90
>05:10:01 PM 136468244  61684300  31.13  219572  31709216 134966548
>44.91
>05:20:01 PM 135092452  63060092  31.82  221488  32162324 134949788
>44.90
>05:30:01 PM 133410464  64742080  32.67  233848  32793848 134976828
>44.91
>05:40:01 PM 132022052  66130492  33.37  235812  33278908 135007268
>44.92
>05:50:01 PM 130630408  67522136  34.08  237140  33900912 135099764
>44.95
>Average:    136996792  61155752  30.86  206645  30415642 134991776
>44.91
>--
>
>
>As the highlighted parts show, the machine hung from 03:30:35
>until I restarted it manually at 04:42:04.
>All the above only snapshots performance around the crash,
>while nothing reveals the cause. I have also
>checked /var/log/messages and found nothing useful.
>
>Note that when I run the command sar -v, it shows something abnormal:
>
>02:50:01 PM  11542262   9216  76446   258
>03:00:01 PM  11645526   9536  76421   258
>03:10:01 PM  11748690   9216  76451   258
>03:20:01 PM  11850191   9152  76331   258
>03:30:35 PM  11972313  10112 132625   258
>03:42:40 PM  12177319  13760 340227   258
>Average:      8293601   8950  68187   161
>
>04:42:04 PM   LINUX RESTART
>
>04:50:01 PM dentunusd   file-nr  inode-nrpty-nr
>05:00:01 PM     35410  7616  35223     4
>05:10:01 PM    137320  7296  42632     6
>05:20:01 PM    247010  7296  42839     9
>05:30:01 PM    358434  7360  42697     9
>05:40:01 PM    471543  7040  42929    10
>05:50:01 PM    583787  7296  42837    13
>
>
>and the man page for the -v option says:
>
>*-v*  Report status of inode, file and other kernel tables.  The following
>values 
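The jump in the inode column just before the hang can be spotted mechanically. A sketch that scans sar -v-style rows (sample data copied from the table above; column layout assumed: time, AM/PM, dentunusd, file-nr, inode-nr, pty-nr) and flags samples where inode-nr grows by more than 50% between readings:

```shell
# Sketch: flag sar -v samples where inode-nr jumps sharply between readings.
# Sample rows are copied from the thread; the column layout is an assumption.
sar_v='02:50:01 PM  11542262   9216  76446   258
03:00:01 PM  11645526   9536  76421   258
03:10:01 PM  11748690   9216  76451   258
03:20:01 PM  11850191   9152  76331   258
03:30:35 PM  11972313  10112 132625   258
03:42:40 PM  12177319  13760 340227   258'
# $5 is inode-nr; flag any sample more than 1.5x the previous one
echo "$sar_v" | awk 'NR>1 && $5 > 1.5*prev {print $1, $2": inode-nr jumped to "$5} {prev=$5}'
```

Run against the thread's data this flags exactly the two samples preceding the hang (03:30:35 and 03:42:40).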

Re: Solr3.6 DeleteByQuery not working with negated query

2012-10-22 Thread Patrick Plaatje
Hi Markus,

Why do you think it's not deleting anything?

Thanks,
Patrick
On 22 Oct 2012 at 08:36, Markus.Mirsberger markus.mirsber...@gmx.de
wrote:

 Hi,

 I am trying to delete some documents in my index by query.
 When I just select them with this negated query, I get all the documents I
 want to delete, but when I use the same query in the DeleteByQuery it is not
 working.
 I'm trying to delete all documents whose field value ends with 'somename/'.
 When I use this for selection it works and I get exactly the right
 documents (about 10,000, so too many to delete one by one :) )

 curl "http://solrip:8080/solr/core/update/?commit=true" -H
 "Content-Type: text/xml" --data-binary
 '<update><delete><query>-field:*somename/</query></delete></update>'

 And here is the response:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int
 name="QTime">11091</int></lst>
 </response>

 I tried to perform it in the browser too by using /update?stream.body ...
 but the result is the same.
 And there is no error in the Solr log.

 I hope someone can help me ... I don't want to do this manually :)

 Regards,
 Markus



Re: Solr3.6 DeleteByQuery not working with negated query

2012-10-22 Thread Patrick Plaatje
Did you make sure to commit after the delete?

Patrick
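For concreteness, a sketch of what the explicit commit looks like against Solr's 3.x XML update API; the host, core, and query string are the thread's own placeholders, not real values:

```shell
# Sketch: delete-by-query followed by an explicit commit (Solr 3.x XML API).
# Host, port, core name and query are placeholders from the thread.
SOLR_URL="http://solrip:8080/solr/core/update"
PAYLOAD='<update><delete><query>-field:*somename/</query></delete></update>'
curl "$SOLR_URL" -H "Content-Type: text/xml" --data-binary "$PAYLOAD"
# a separate commit makes the deletes visible even if ?commit=true was omitted
curl "$SOLR_URL" -H "Content-Type: text/xml" --data-binary '<commit/>'
```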
On 22 Oct 2012 at 08:43, Markus.Mirsberger markus.mirsber...@gmx.de
wrote:

 Hi, Patrick,

 Because I have the same number of documents in my index as before I
 performed the query.
 And when I use the negated query just to select the documents, I can see
 they are still there (and of course all the other documents too :) )

 Regards,
 Markus




 On 22.10.2012 14:38, Patrick Plaatje wrote:

 Hi Markus,

 Why do you think it's not deleting anything?

 Thanks,
 Patrick
 On 22 Oct 2012 at 08:36, Markus.Mirsberger 
 markus.mirsber...@gmx.de
 wrote:

  Hi,

 I am trying to delete some documents in my index by query.
 When I just select them with this negated query, I get all the documents
 I want to delete, but when I use the same query in the DeleteByQuery it is
 not working.
 I'm trying to delete all documents whose field value ends with 'somename/'.
 When I use this for selection it works and I get exactly the right
 documents (about 10,000, so too many to delete one by one :) )

 curl "http://solrip:8080/solr/core/update/?commit=true" -H
 "Content-Type: text/xml" --data-binary
 '<update><delete><query>-field:*somename/</query></delete></update>'

 And here is the response:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int
 name="QTime">11091</int></lst>
 </response>

 I tried to perform it in the browser too by using /update?stream.body
 ...
 but the result is the same.
 And there is no error in the Solr log.

 I hope someone can help me ... I don't want to do this manually :)

 Regards,
 Markus





Re: is there any practice to load index into RAM to accelerate solr performance?

2012-02-08 Thread Patrick Plaatje
A start may be to use a RAM disk for that. Mount it as a normal disk and
store the index files there. Have a read here:

http://en.wikipedia.org/wiki/RAM_disk
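A minimal sketch of such a RAM disk on Linux via tmpfs; the mount point, size, and source index path are assumptions, root is required, and the contents vanish on reboot:

```shell
# Sketch: put a copy of the index on a tmpfs RAM disk (Linux, root required).
# Mount point, size and source path are placeholder assumptions.
INDEX_RAM=/mnt/solr-ram-index
mkdir -p "$INDEX_RAM"
mount -t tmpfs -o size=8g tmpfs "$INDEX_RAM"
# copy the on-disk index in, then point <dataDir> in solrconfig.xml at it;
# tmpfs is volatile, so sync any changes back to disk before shutdown
cp -r /var/solr/data/index "$INDEX_RAM/"
```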

Cheers,

Patrick


2012/2/8 Ted Dunning ted.dunn...@gmail.com

 This is true with Lucene as it stands.  It would be much faster if there
 were a specialized in-memory index such as is typically used with high
 performance search engines.

 On Tue, Feb 7, 2012 at 9:50 PM, Lance Norskog goks...@gmail.com wrote:

  Experience has shown that it is much faster to run Solr with a small
  amount of memory and let the rest of the ram be used by the operating
  system disk cache. That is, the OS is very good at keeping the right
  disk blocks in memory, much better than Solr.
 
  How much RAM is in the server and how much RAM does the JVM get? How
  big are the documents, and how large is the term index for your
  searches? How many documents do you get with each search? And, do you
  use filter queries- these are very powerful at limiting searches.
 
  2012/2/7 James ljatreey...@163.com:
   Is there any practice to load index into RAM to accelerate solr
  performance?
   The over all documents is about 100 million. The search time around
  100ms. I am seeking some method to accelerate the respond time for solr.
   Just check that there is some practice use SSD disk. And SSD is also
  cost much, just want to know is there some method like to load the index
  file in RAM and keep the RAM index and disk index synchronized. Then I
 can
  search on the RAM index.
 
 
 
  --
  Lance Norskog
  goks...@gmail.com
 




-- 
Patrick Plaatje
Senior Consultant
http://www.nmobile.nl/


Re: Searching partial phone numbers

2012-01-19 Thread Patrick Plaatje
Hi Marotosg,

you can index the phonenumber field with the ngram field type, which allows
for partial (wildcard) searches on this field. Have a look here:

http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.CommonGramsFilterFactory

Cheers,

Patrick
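For concreteness, a sketch of what such a field type could look like in schema.xml. The type name, field name, and gram sizes here are illustrative assumptions, and the filter that actually produces substring grams is NGramFilterFactory:

```xml
<!-- Sketch (schema.xml): an ngram-analyzed type for substring phone search.
     Names and gram sizes are assumptions, not from the thread. -->
<fieldType name="phone_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="10"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
<field name="phone" type="phone_ngram" indexed="true" stored="true"/>
```

With this, a query on `phone:84589` matches the indexed grams of `+35384589458` directly, with no wildcards needed.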



2012/1/19 marotosg marot...@gmail.com

 Hi.
 I have phone numbers in a field in my Solr schema. At the moment I have
 this field as a string.
 I would like to be able to run searches that find parts of a phone
 number.

 For instance:
 Number +35384589458

 search by  *+35384* or search by  *84589*.

 Do you know if this is possible?

 Thanks a lot

 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Searching-partial-phone-numbers-tp3671908p3671908.html
 Sent from the Solr - User mailing list archive at Nabble.com.




-- 
Patrick Plaatje
Senior Consultant
http://www.nmobile.nl/


Re: How to accelerate your Solr-Lucene appication by 4x

2012-01-19 Thread Patrick Plaatje
Partially agree. If just the facts are given, rather than a complete sales
pitch, it's fine. Don't overdo it like this, though.

Cheers,

Patrick


2012/1/19 Darren Govoni dar...@ontrenet.com

 I think the occasional "Hey, we made something cool you might be
 interested in!" notice, even if commercial, is ok
 because it addresses numerous issues we struggle with on this list.

 Now, if it were something completely off-base or unrelated (e.g. male
 enhancement pills), then yeah, I agree.

 On 01/18/2012 11:08 PM, Steven A Rowe wrote:

 Hi Darren,

 I think it's rare because it's rare: if this were found to be a useful
 advertising space, rare would cease to be descriptive of it.  But I could
 be wrong.

 Steve

  -Original Message-
 From: Darren Govoni [mailto:dar...@ontrenet.com]
 Sent: Wednesday, January 18, 2012 8:40 PM
 To: solr-user@lucene.apache.org
 Subject: Re: How to accelerate your Solr-Lucene appication by 4x

 And to be honest, many people on this list are professionals who not
 only build their own solutions, but also buy tools and tech.

 I don't see what the big deal is if some clever company has something of
 imminent value here to share it. Considering that its a rare event.

 On 01/18/2012 08:28 PM, Jason Rutherglen wrote:

 Steven,

 If you are going to admonish people for advertising, it should be
 equally dished out or not at all.

 On Wed, Jan 18, 2012 at 6:38 PM, Steven A Rowesar...@syr.edu   wrote:

 Hi Peter,

 Commercial solicitations are taboo here, except in the context of a

 request for help that is directly relevant to a product or service.

 Please don’t do this again.

 Steve Rowe

 From: Peter Velikin [mailto:pe...@velobit.com]
 Sent: Wednesday, January 18, 2012 6:33 PM
 To: solr-user@lucene.apache.org
 Subject: How to accelerate your Solr-Lucene appication by 4x

 Hello Solr users,

 Did you know that you can boost the performance of your Solr

 application using your existing servers? All you need is commodity SSD
 and
 plug-and-play software like VeloBit.

 At ZoomInfo, a leading business information provider, VeloBit increased

 the performance of the Solr-Lucene-powered application by 4x.

 I would love to tell you more about VeloBit and find out if we can

  deliver the same business benefits at your company. Click here
  (http://www.velobit.com/15-minute-brief) for a 15-minute briefing
  on the VeloBit technology.

 Here is more information on how VeloBit helped ZoomInfo:

   *   Increased Solr-Lucene performance by 4x using existing servers

 and commodity SSD

   *   Installed VeloBit plug-and-play SSD caching software in 5-minutes

 transparent to running applications and storage infrastructure

   *   Reduced by 75% the hardware and monthly operating costs required

 to support service level agreements

 Technical Details:

   *   Environment: Solr‐Lucene indexed directory search service fronted

 by J2EE web application technology

   *   Index size: 600 GB
   *   Number of items indexed: 50 million
   *   Primary storage: 6 x SAS HDD
   *   SSD Cache: VeloBit software + OCZ Vertex 3

  Click here (http://www.velobit.com/use-cases/enterprise-search/) to
  read more about the ZoomInfo Solr-Lucene case study.

  You can also sign up (http://www.velobit.com/early-access-program-accelerate-application)
  for our Early Access Program and try VeloBit HyperCache for free.

 Also, feel free to write to me directly at

  pe...@velobit.com.

 Best regards,

 Peter Velikin
 VP Online Marketing, VeloBit, Inc.
  pe...@velobit.com
 tel. 978-263-4800
 mob. 617-306-7165
  VeloBit provides plug & play SSD caching software that dramatically

 accelerates applications at a remarkably low cost. The software installs
 seamlessly in less than 10 minutes and automatically tunes for fastest
  application speed. Visit www.velobit.com for
  details.





-- 
Patrick Plaatje
Senior Consultant
http://www.nmobile.nl/


Re: blocking access by user-agent

2011-12-21 Thread Patrick Plaatje
Hi Roland,

you can configure Jetty to use a simple .htaccess file to allow only
specific IP addresses access to your webapp. Have a look here on how to do
that:

http://www.viaboxxsystems.de/how-to-configure-your-jetty-webapp-to-grant-access-for-dedicated-ip-addresses-only

If you want more sophisticated access control, you need it to be included
in an extra layer between Solr and the devices accessing your Solr
instance.


- Patrick


2011/12/21 RT rwatollen...@gmail.com

 Hi,

 I would like to control what applications get access to the solr database.
 I am using jetty as the appcontainer.

 Is this at all achievable? If yes, how?

 Internet search has not yielded anything I could use so far.

 Thanks in advance.

 Roland




-- 
Patrick Plaatje
Senior Consultant
http://www.nmobile.nl/


Re: How to get SolrServer within my own servlet

2011-12-13 Thread Patrick Plaatje
Have a look here first; you will probably end up using EmbeddedSolrServer.

http://wiki.apache.org/solr/Solrj

Patrick


On 13 Dec 2011 at 20:38, Joey vanjo...@gmail.com 
wrote:

 Anybody could help?
 
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/How-to-get-SolrServer-within-my-own-servlet-tp3583304p3583368.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to get SolrServer within my own servlet

2011-12-13 Thread Patrick Plaatje
Hey Joey,

You should first configure your deployed Solr instance by adding/changing the 
schema.xml and solrconfig.xml. After that you can use SolrJ to connect to that 
Solr instance and add documents to it. On the link I posted earlier, you'll 
find a couple of examples on how to do that.
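SolrJ is the route Patrick suggests; purely for illustration, the same add can also be done over Solr's plain-HTTP XML update API. A sketch in which the URL, core path, and field names are all assumptions:

```shell
# Sketch: add one document over Solr's XML update API.
# URL and field names are placeholder assumptions, not from the thread.
DOC='<add><doc><field name="id">doc-1</field><field name="text">hello world</field></doc></add>'
curl "http://localhost:8983/solr/update?commit=true" \
  -H "Content-Type: text/xml" --data-binary "$DOC"
```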

- Patrick 

Sent from my iPhone

On 13 Dec 2011 at 20:53, Joey vanjo...@gmail.com 
wrote:

 Thanks, Patrick, for the reply.
 
 What I did was un-jar solr.war and create my own web application. Now I
 want to write my own servlet to index all files inside a folder.
 
 I suppose there is already a SolrServer instance initialized when my web app
 starts.
 
 How can I access that SolrServer instance in my servlet?
 
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/How-to-get-SolrServer-within-my-own-servlet-tp3583304p3583416.html
 Sent from the Solr - User mailing list archive at Nabble.com.