Hi,
I am facing an issue where I need to change Solr schema but I have crucial
data that I don't want to delete. Is there a way where I can change the
schema of the index while keeping the data intact?
Regards,
Salman
Does the schema change affect the data you want to keep?
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 29 December 2015 at 01:48, Salman Ansari <salman.rah...@gmail.com> wrote:
. Was someone telling you something different?
-- Jack Krupansky
On Mon, Dec 28, 2015 at 1:48 PM, Salman Ansari <salman.rah...@gmail.com>
wrote:
Adding new fields is not a problem. You can continue to use your
existing index with the new schema.
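As a concrete sketch (field name and type are placeholders, not from this thread), adding a field through the Schema API is a POST to /solr/<collection>/schema:

```json
{
  "add-field": {
    "name": "new_field",
    "type": "string",
    "indexed": true,
    "stored": true
  }
}
```

Existing documents are untouched; they simply have no value in the new field. Changing the type of an existing field, by contrast, generally requires reindexing.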
On Tue, Dec 29, 2015 at 1:58 AM, Salman Ansari <salman.rah...@gmail.com> wrote:
> You can say that we are not removing any fields (so the old data should not
> get affected), howe
Hello,
I am going through a few use cases where we have multiple disparate data
sources which in general don't have many common fields, and I was thinking
of designing a different schema/index/collection for each of them, querying each
of them separately, and providing different result sets to the client.
> I have seen one implementation where all different fields fro
option, how do you suggest designing the
index/schema?
Let me know if I am missing any other info to get your thoughts.
On Tue, Dec 22, 2015 at 11:53 AM, Jack Krupansky <jack.krupan...@gmail.com>
wrote:
> Step one is to refine and more clearly state the requirements. Sure,
> som
Hi,
How can I change the defaultoperator parameter through the schema API?
Thanks.
-
Best regards
--
View this message in context:
http://lucene.472066.n3.nabble.com/Schema-API-change-the-defaultoperator-tp4244857.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 12/11/2015 4:23 AM, Yago Riveiro wrote:
> How can I change the defaultoperator parameter through the schema API?
The default operator and default field settings in the schema have been
deprecated for quite some time, so I would imagine that you can't change
them with the schema
works with POST http calls?
On Fri, Dec 11, 2015 at 2:26 PM, Shawn Heisey <apa...@elyograg.org> wrote:
On 12/11/2015 8:02 AM, Yago Riveiro wrote:
> I uploaded a schema.xml manually with the defaultoperator configuration and
> it's working.
>
> My problem is that my legacy application is huge and I can't go to all places
> to add the q.op parameter.
>
> The solrconfig.xml option should be an
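One way to avoid touching every place in a legacy application (a sketch; the handler name /select is an assumption about the setup) is to set q.op as a default on the request handler in solrconfig.xml:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- applied to every request that does not pass its own q.op -->
    <str name="q.op">AND</str>
  </lst>
</requestHandler>
```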
I fixed the problem using the dataimport requestHandler with
tika-data-config.xml.
I configured the tika-data-config.xml according to my needs to get the right
value:
now I don't need to index from the command line using
On Fri, Dec 4, 2015 at 12:59 AM, <solr-user-digest-h...@lucene.apache.org>
wrote:
>
> >Just wondering if folks have any suggestions on using Schema.xml vs.
> >Managed Schema going forward.
> >
We are using loosely typed languages (Perl and Javascript), and
thank you Erick, I followed your advice and took a look at configuring Apache Tika.
I have modified my request handler /update/extract:
last_modified
ignored_
true
links
ignored_
D:\solr\solr-5.3.1\server\solr\tika-data-config.xml
and config tika :
Kostali -
See if the "Introspect rich document parsing and extraction" section of
http://lucidworks.com/blog/2015/08/04/solr-5-new-binpost-utility/ helps*.
You’ll be able to see the output of /update/extract (aka Tika) and adjust your
mappings and configurations accordingly.
* And apologies
thank you, that's why I chose to add the exact value using the Solarium PHP
client, but the timeout stops indexing after 30 seconds:
$dir = new Folder($dossier);
$files = $dir->find('.*\.*');
foreach ($files as $file) {
$file = new File($dir->pwd() . DS . $file);
$query =
,
Erick
On Fri, Dec 4, 2015 at 6:51 AM, Rick Leir <richard.l...@canadiana.ca> wrote:
Not that hard to set up a cron and diff job and email when the diff is
not-empty. A sort-of "is that what you expected" report.
But, for myself, I also prefer schema and then managed. I do not like
schemaless mode, even for development. Instead, I prefer to do
"dynamicField *".
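The catch-all dynamic field mentioned above would look something like this in schema.xml (the string type name may differ per schema):

```xml
<dynamicField name="*" type="string" indexed="true" stored="true"/>
```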
The theme with Elastic Search was:
- spend some time on your field mappings (which are a schema) up front.
- if you don't, you are either going to be wasting space, or experiencing slow
search, or both.
The theme with RDF was:
- First model your vocabulary and make sure it answers the questions you
having to re-index everything because they had their field
> mappings wrong. I've also worked on Linked Data, RDF, where the fact
> that everything is a triple is supposed to make SQL schemas unneeded.
>
I started working with Solr 5.x by extracting Solr to D:\solr and running the Solr
server with:
D:\solr\solr-5.3.1\bin>solr start
Then I created a core in standalone mode:
D:\solr\solr-5.3.1\bin>solr create -c mycore
I need to index system files (Word and PDF), and the schema doesn't
have a "name" field for the document, so I added this field using curl:
On 12/3/2015 8:09 AM, Kelly, Frank wrote:
Shawn:
Managed schema is _used_ by "schemaless", but they are not the same thing at
all. For "schemaless" (i.e. "data driven"), you need to include the
update processor chains that do the guessing for you and make use of
the managed features to add fields to your schema
Just wondering if folks have any suggestions on using Schema.xml vs. Managed
Schema going forward.
Our deployment will be
> 3 Zk, 3 Shards, 3 replicas
> Copies of each collection in 5 AWS regions (EBS-backed EC2 instances)
> Planning at least 1 Billion objects indexed (currently <
I’ve never used the managed schema, so I’m probably biased, but I’ve never
seen much of a point to the Schema API.
I need to make changes sometimes to solrconfig.xml, in addition to
schema.xml and other config files, and there’s no API for those, so my
process has been like:
1. Put the entire
It Depends (tm).
Managed Schema is way cool if you have a front end that lets you
manipulate the schema via a browser or other program. There's really
no other way to deal with changing the schema from a browser without
allowing uploading xml files, which is a security problem. Trust me
They are different beasts, but I bet on the managed schema winning in
the long run.
With the bulk API, you can post a heap of fields/etc in one go, so
basically, rather than pushing the schema to Zookeeper, you push it to
Solr.
Look at Solr 5.4 when it comes out shortly. It'll change the way
My experience is, once managed-schema is created, then schema.xml even if
present is ignored. When both are present, you will get a warning in the Solr
log.
I have stopped using schema.xml. Actually, I use it once, start Solr and after
it generates managed-schema, I export it and pretty much
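For reference, the switch to managed schema is driven by the schemaFactory element in solrconfig.xml; a typical sketch (the mutable flag and resource name are the common defaults, not taken from this thread):

```xml
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
```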
When you index into Solr, you are overlapping the definitions into one
schema. Therefore, you will need a unified uniqueKey.
There is a couple of approaches:
1) Maybe you don't actually store the data as three types of entities.
Think about what you will want to find and structure the data
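One common way to get the unified uniqueKey described above is to prefix each per-CSV key with its entity type. A minimal sketch (column names and values are hypothetical):

```python
def composite_key(entity_type, row_key):
    """Build a collection-wide unique id by prefixing the per-CSV key
    with its entity type, e.g. 'review:42'."""
    return f"{entity_type}:{row_key}"

# Hypothetical rows from the three CSV files (column names are made up).
restaurant = {"rest_id": "17", "name": "Grill House"}
review = {"review_id": "42", "rest_id": "17", "stars": "4"}

# Each Solr document gets an 'id' that is unique across all three tables.
docs = [
    {"id": composite_key("restaurant", restaurant["rest_id"]), **restaurant},
    {"id": composite_key("review", review["review_id"]), **review},
]

print([d["id"] for d in docs])  # → ['restaurant:17', 'review:42']
```

Storing the entity type in its own field as well makes it easy to filter each result set separately at query time.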
Hi!
I have 3 CSV tables:
1) Restaurant
2) User
3) Review
Every CSV has a unique key; how can I configure multiple unique keys in
Solr?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-unique-key-in-Schema-tp4240550.html
Sent from the Solr - User mailing list
>>Or perhaps use the UUID auto id feature.
If I use a UUID, then how can I update a particular document? I think, using
this, there will not be any document identity.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-unique-key-in-Schema-tp4240550p4240557.html
Sen
curl 'http://localhost:8983/solr/test-core/schema/fields'
As opposed to running bin/solr start -e cloud (which spins up an example)
before I load a core.
Thank you,
Natasha
On Fri, Oct 30, 2015 at 10:24 AM, Natasha Whitney <nata...@factual.com>
wrote:
> Hi Erick,
>
>
returned
in the schema are the default fields.
If there is an preferable (non-cloud) way to create a core to point at an
existing instanceDir please advise!
Thank you,
Natasha
On Thu, Oct 29, 2015 at 8:58 PM, Erick Erickson [via Lucene] <
ml-node+s472066n4237322...@n3.nabble.com> wrote:
lhost:8983/solr/admin/cores?action=CREATE&name=test-core&instanceDir=/home/natasha/twc-sessions-dash/collection1>
Well yes, the -e
the following content:
1024100012147483647truefalsetermsLUCENE_41
And see schema.xml at bottom of post:
And the core is ostensibly created without an issue (numDocs is 59, which is
correct and no error logged) But, when I ping the newly created core's
schema via:
curl 'http://localhost:8985/solr/test
Note, if I attempt to CREATE the core using Solr 5.3.0 on my openstack
machine (Java version 1.7.0) I have no issues.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-5-3-1-CREATE-defaults-to-schema-less-mode-Java-version-1-7-0-45-tp4237305p4237307.html
Sent from
Hi,
I have a simple Solr 5.3 cloud setup with two nodes using a managed
schema. I'm creating a collection using a schema that initially only
contains the id field. When documents get added I'm dynamically adding
the required fields. Currently this fails quite consistently as in bug
SOLR-7536
I'm attempting to index my data, which consists of autogenerated HTML
documents, the source being TEI (XML) documents.
I created the collection with
sudo su - solr -c /opt/solr/bin/solr create -c mbepp -n
data_driven_schema_configs
I'm indexing with
find . -mindepth 2 -not -name person_*.* -not
Chris,
mucho thanks. The
solr.RegexReplaceProcessorFactory
looks like what I need. What a fantastic search engine this is!
thanks again,
Scott
On 8/10/2015 5:21 PM, Chris Hostetter wrote:
: <meta name="date" content="Unknown" />
: <meta name="dc.date.created" content="Unknown" />
:
: Most documents have a correctly formatted date string and I would like to keep
: that data available for search on the date field.
...
: I realize it is complaining because the date string isn't matching
On 7/18/2015 9:49 AM, Charlie Hubbard wrote:
So I want to allow people to upload any CSV/XML/JSON to solr they want so
having a predefined schema isn't going to cut it. After reading about my
options I figured my choices were schema-less mode and dynamic fields using
the * with a type other
So I want to allow people to upload any CSV/XML/JSON to solr they want so
having a predefined schema isn't going to cut it. After reading about my
options I figured my choices were schema-less mode and dynamic fields using
the * with a type other than ignore. I know the docs say schema-less
bq: So I want to allow people to upload any CSV/XML/JSON to solr they want so
having a predefined schema isn't going to cut it
Piling on to Shawn's excellent comments, I would really advise against this.
Sure, you could make everything a text field using the * catch-all, but.
If a field
Hi,
i'm using Solr 4.10.3, and i'm trying update a doc field using atomic update
(http://wiki.apache.org/solr/Atomic_Updates).
My schema.xml is like this:
<!-- Fields -->
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="string" indexed="true" stored="true" />
<field
This is kinda weird and looks a lot like a bug.
Let me try to reproduce it locally!
I'll let you know soon!
Cheers
2015-07-15 10:01 GMT+01:00 Martínez López, Alfonso almlo...@indra.es:
Hi,
i'm using Solr 4.10.3, and i'm trying update a doc field using atomic
update
Just tried, on Solr 5.1 and I get the proper behaviour.
Actually where is the value for the dinamic_desc coming from ?
I can not see it in the updates and actually it is not in my index.
Are you sure you have not forgotten any detail ?
Cheers
2015-07-15 11:48 GMT+01:00 Alessandro Benedetti
if 'desc_field' or
'desc_*' were multivalued.
Cheers.
From: Alessandro Benedetti [benedetti.ale...@gmail.com]
Sent: Wednesday, July 15, 2015 12:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Does update field feature work in a schema with dynamic
:
…
if (!destinationField.multiValued() && destHasValues) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "ERROR: " + getID(doc, schema) + "multiple values encountered" +
      " for non multiValued copy field " +
      destinationField.getName() + ": " + v);
}
...
Because the check is actually checking in already
field feature work in a schema with dynamic fields?
On 7/15/2015 3:01 AM, Martínez López, Alfonso wrote:
<!-- Fields -->
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="string" indexed="true" stored="true" />
<field name="src_desc" type="string" indexed="true" stored="true" />
<field name="_version_" type="long" indexed="true"
/DocumentBuilder.java:89
// Make sure it has the correct number
if( sfield!=null && !sfield.multiValued() && field.getValueCount() > 1 ) {
  throw new SolrException( SolrException.ErrorCode.BAD_REQUEST,
      "ERROR: " + getID(doc, schema) + "multiple values encountered for" +
      " non multiValued field " +
      sfield.getName
applications that use Solr have at least surface
knowledge of the schema, which should make it possible for the
application writer to just pull the information from the source field.
It seems that your application may not have that knowledge, though.
Thanks,
Shawn
From: Alessandro Benedetti [benedetti.ale...@gmail.com]
Sent: Wednesday, July 15, 2015 2:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Does update field feature work in a schema with dynamic fields?
Going through the code in the RunUpdateRequestProcessor we call at one
point :
…
Document
information twice, which is wasteful and increases resource
requirements. Most applications that use Solr have at least surface
knowledge of the schema, which should make it possible for the
application writer to just pull the information from the source field.
It seems that your application
-user@lucene.apache.org
Subject: Re: Does update field feature work in a schema with dynamic fields?
Hey Shawn, I was debugging a little bit, this is the problem:
When adding a field from the solr document, to the Lucene one, even if this
document was previously added by the execution of the copy
Hi ,
we are implementing Solr search in our e-commerce web application. I have a
few questions in mind, like:
a) How to store the data so that it can give me more relevant results
for queries like "red shirts", "Peter England shirts" in global search,
where "red" is a color and "Peter England"
I wish to do it in code, so the schema browser is less of an option.
Use case: I wish to boost particular fields while matching; for that I
need to know my field to Solr field mapping, so that I can put that in the
query.
Thanks and regards,
Gajendra Dadheech
On Tue, Jul 7, 2015 at 9:23 PM
At the time of forming this request i am not sure which kind of field that
would be. So i read fields in new searcher.
Thanks and regards,
Gajendra Dadheech
On Wed, Jul 8, 2015 at 2:12 PM, Gajendra Dadheech gajju3...@gmail.com
wrote:
Feels like an XY problem. Why do you want to do this? What's
the use-case? Perhaps there's an alternative approach that
satisfies the need.
Best,
Erick
On Tue, Jul 7, 2015 at 4:21 AM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
Just an idea, Solr Admin/Schema Browser reports some info
Hi,
Can I somehow translate fields which I read from
newSearcher.getAtomicReader().fields() to schema fields? Does Solr expose
any method to do this translation? The alternative approach I am thinking of
would involve lots of regex computation, as the fields would be _string, _float,
etc., and I would
Just an idea, Solr Admin/Schema Browser reports some info like this, hence,
you can trace the way in which it does it.
On Tue, Jul 7, 2015 at 10:34 AM, Gajendra Dadheech gajju3...@gmail.com
wrote:
<str name="managedSchemaResourceName">my-schema.xml</str>
</schemaFactory>
In 5.1.0 (and maybe prior ver.?) when I enable managed schema per the
above, the existing schema.xml file is left as-is, a copy of it is created
as schema.xml.bak and a new one is created based on the name I gave it
my-schema.xml.
With 5.2.1
On 6/18/2015 8:10 AM, Steven White wrote:
Dear Solr-Users,
(SOLR 5.0 Ubuntu)
I have xml files with tags like this
<claimXXYYY>
where XX is a language code like FR EN DE PT etc... (I don't know the
number of language codes I can have)
and YYY is a number [1..999]
i.e.:
<claimen1>
<claimen2>
<claimen3>
<claimfr1>
<claimfr2>
<claimfr3>
I would like
Well yes, but the second doesn't do what you say you want,
bq: *claim *equal to all claimXXYYY (all languages, all numbers,
indexed=true, stored false) (search not needed but must be displayed)
You can search this field, but specifying it in a field list (fl) will
return nothing, you need
a test to reproduce.
Can you please create a Solr JIRA issue for this?:
https://issues.apache.org/jira/browse/SOLR/
Thanks,
Steve
On May 7, 2015, at 5:40 AM, User Zolr zolr.u...@gmail.com wrote:
Hi there,
I have come accross a problem that when using managed schema in
SolrCloud
using managed schema in SolrCloud,
adding fields into schema would SOMETIMES end up prompting Can't find
resource 'schema.xml' in classpath or '/configs/collectionName',
cwd=/export/solr/solr-5.1.0/server, there is of course no schema.xml in
configs, but 'schema.xml.bak' and 'managed-schema
On May 6, 2015, at 8:25 PM, Yonik Seeley ysee...@gmail.com wrote:
On Wed, May 6, 2015 at 8:10 PM, Steve Rowe sar...@gmail.com wrote:
It’s by design that you can copyField the same source/dest multiple times -
according to Yonik (not sure where this was discussed), this capability has
in the schema refers to copy field rules),
unlike fields, dynamic fields and field types, so
delete-copy-field/add-copy-field works as one would expect.
For fields, dynamic fields and field types, a delete followed by an add is
not the same as a replace, since (dynamic) fields could have dependent
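The delete-then-add sequence described above, expressed as a Schema API request body POSTed to /solr/<collection>/schema (source/dest names are placeholders):

```json
{
  "delete-copy-field": { "source": "title", "dest": "text" },
  "add-copy-field":    { "source": "title", "dest": "text" }
}
```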
Hi Everyone,
I am using the Schema API to add a new copy field per:
https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-AddaNewCopyFieldRule
Unlike the other Add APIs, this one does not fail if you add an existing
copy field object. In fact, when I call the API over
a multiply specified
copy field rule will delete all of them, but this isn't tested, so I'm not sure.
There is no replace-copy-field command because copy field rules don’t have
dependencies (i.e., nothing else in the schema refers to copy field rules),
unlike fields, dynamic fields and field types
On Wed, May 6, 2015 at 8:10 PM, Steve Rowe sar...@gmail.com wrote:
It’s by design that you can copyField the same source/dest multiple times -
according to Yonik (not sure where this was discussed), this capability has
been used in the past to effectively boost terms in the source field.
Hi Steve, responses inline below:
On Apr 29, 2015, at 6:50 PM, Steven White swhite4...@gmail.com wrote:
Hi Everyone,
When I pass the following:
http://localhost:8983/solr/db/schema/fieldtypes?wt=xml
I see this (as one example):
<lst>
<str name="name">date</str>
<str name="class">solr.TrieDateField</str>
<str name="precisionStep">0</str>
<str name="positionIncrementGap">0</str>
<arr name="fields">
There are no nested schemas as such. It's only a superset schema that
includes all the fields for parents and children. Obviously, the
fields that are not common should be optional.
The rest depends on what parent/child relation you are trying to
setup. Whether it is explicit with block indexing
Hi,
I need some documentation/samples on how to create a SOLR schema with nested
documents.
I have been looking online but could not find anything.
Thank you in advance,
Nick Pandrea
I use Solr to index different kinds of database tables. I have a Solr index
containing a field named category. I make sure that the category field in Solr
gets occupied with the right value depending on the table. This I can use to
build facet queries which works fine.
The problem I have is
My standard answer when you want to really customize how stuff like
this works is to do the Tika processing in SolrJ. That lets you
ignore/modify/whatever anything you want. It also moves the parsing
load off of the Solr node which scales much better. Here's an example:
Hi,
I am looking for possibility to validate document that is about to be
inserted against schema to check if addition of document will fail or
not w/o actually making an insert. Is there a way to that? I'm doing
update from inside the Solr plugin so there is an access to API if that
matters
with solr5, I have tried a number of things without success.
a) copied my custom schema.xml to
server/solr/configsets/basic_configs/conf/custom_schema.xml
- when I typed custom_schema.xml into the schema: field in the
create core dialog, a core is created but the new schema isn't used.
Making
installation by
manually editing the core directories as root.
Hi,
I have around 50 fields in my schema; 20 fields are stored="true"
and the rest of them stored="false".
For partial updates (atomic updates), it is mentioned in many places
that the fields in the schema should have stored="true". I have also tried
atomic updates on documents having fields
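For context, an atomic update sends only the changed values with a modifier such as "set" (the id and field name here are placeholders); Solr reconstructs the rest of the document from stored fields, which is why values in non-stored fields can be lost:

```json
[
  {
    "id": "doc1",
    "price": { "set": 99 }
  }
]
```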
I too am running into what appears to be the same thing.
Everything works and data is imported but I cannot see the new field in
the result.
If you can index stuff into a new schema for a test, try defining one
with <dynamicField name="*" stored="true" indexed="true" type="string"/>. Your
schema may have one like this commented out and/or set to false.
This would show you exactly what you are indexing and settle whether
you have any spelling
Thanks Jack. In order to not affect query time, what are the options
available to handle this at index time? So that I group all the similar
books at index time by placing them in some kind of a set, and retrieve all
the contents of the set at query time if any one of them matches the query.
On
Hi All,
I have a use case where I need to group documents that have the same field
called bookName, meaning if there are multiple documents with the same
bookName value and the user input is searched by a query on bookName,
I need to be able to group all the documents by the same bookName
Hi,
You can use grouping in Solr. You can do this via the query or via
solrconfig.xml.
*A) via query*
http://localhost:8983?your_query_params&group=true&group.field=bookName
You can limit the size of each group (how many documents you want to show);
suppose you want to show 5 documents
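For example, to show up to 5 documents per group, group.limit can be added (host and query are placeholders):

```
http://localhost:8983/solr/mycollection/select?q=*:*&group=true&group.field=bookName&group.limit=5
```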