Hi Cord,
I think you'd do it like this:
1. Add this to schema.xml
<!--
Example of using PathHierarchyTokenizerFactory at index time, so
queries for paths match documents at that path, or in descendent paths
-->
<fieldType name="descendent_path" class="solr.TextField">
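Filled out, that fieldType might look like the following sketch (this mirrors the example schema that ships with Solr; adjust the delimiter as needed):

```xml
<fieldType name="descendent_path" class="solr.TextField">
  <!-- index time: /a/b/c produces tokens /a, /a/b, /a/b/c -->
  <analyzer type="index">
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
  </analyzer>
  <!-- query time: the query path is kept as a single token -->
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory" />
  </analyzer>
</fieldType>
```

With this, a query for the path /a/b matches documents indexed at /a/b and at any path beneath it.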
Thank you Brendan,
I had started to read about the tokenizers and couldn't quite piece
together how it would work. I will read about this and post my
implementation if successful.
Cord
On Mon, May 20, 2013 at 4:13 PM, Brendan Grainger
brendan.grain...@gmail.com wrote:
Hi Cord,
I think
and then
selected resumes I show as search result.
Say, while searching in Solr, I want to achieve something like the below:
1. Search keywords in the resumes of those users whose experience is greater
than 5 years.
To achieve this, my understanding is:
1. I need to define a new field in schema
2. During indexing, add
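A minimal sketch of those two steps (the field name experience_years and the query field are assumptions, not from the thread):

```xml
<!-- in schema.xml: an indexed integer field for each candidate's years of experience -->
<field name="experience_years" type="int" indexed="true" stored="true"/>
```

At query time the keyword search can then be combined with a range filter, e.g. fq=experience_years:[5 TO *].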
Hello All,
I have two different Solr servers. The servers have different schemas.
Is it possible to shard these two Solr servers?
Or is there any other way to combine/merge the results of two different Solr
servers?
--
View this message in context:
http://lucene.472066.n3.nabble.com/sharding
On 5/15/2013 8:49 AM, vrparekh wrote:
I have two different Solr servers. The servers have different schemas.
Is it possible to shard these two Solr servers?
Or is there any other way to combine/merge the results of two different Solr
servers?
In general, this won't work. If your two schemas use
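(If the two cores did share compatible schemas for the queried fields, a plain distributed request could merge their results; the hosts and core names here are made up:)

```
http://host1:8983/solr/coreA/select?q=title:foo
    &shards=host1:8983/solr/coreA,host2:8983/solr/coreB
```

With incompatible schemas, though, the per-shard responses cannot be merged meaningfully.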
: J.Button,
member_id: 3,
_version_: 143490402154342
}
]
}
}
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/multiValued-schema-example-SOLVED-tp4062209p4062864.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi, I wish to know how to best design a schema to store comments in stories /
articles posted.
I have a set of fields:
<field name="subject" type="text_general"
       indexed="true" stored="true"/>
<field name="keywords" type="text_general"
criteria,
and then facet on that.
-- Jack Krupansky
-Original Message-
From: samabhiK
Sent: Monday, May 13, 2013 5:24 AM
To: solr-user@lucene.apache.org
Subject: Best way to design a story and comments schema.
Hi, I wish to know how to best design a schema to store comments in stories
really wish to understand how that works.
Sam
--
View this message in context:
http://lucene.472066.n3.nabble.com/Best-way-to-design-a-story-and-comments-schema-tp4062867p4062913.html
, it is best to start
with a simple design first.
-- Jack Krupansky
-Original Message-
From: samabhiK
Sent: Monday, May 13, 2013 8:55 AM
To: solr-user@lucene.apache.org
Subject: Re: Best way to design a story and comments schema.
Thanks for your reply.
I generally get confused
-way-to-design-a-story-and-comments-schema-tp4062867p4062929.html
: Monday, May 13, 2013 5:24 AM
To: solr-user@lucene.apache.org
Subject: Best way to design a story and comments schema.
Hi, I wish to know how to best design a schema to store comments in stories
/
articles posted.
I have a set of fields:
<field name="subject" type="
Subject: Re: Best way to design a story and comments schema.
Jack,
Why are multi-valued fields considered messy?
I think I am about to learn something..
Thanks
Another Jack
On Mon, May 13, 2013 at 5:29 AM, Jack Krupansky j...@basetechnology.com
wrote:
Try the simplest, cleanest design first.
What is the way to achieve the above scenario?
Thanks in advance.
--
View this message in context:
http://lucene.472066.n3.nabble.com/multiValue-schema-example-tp4062209.html
Yes, you can effectively chroot all the configs for a collection (to
support multiple collections in same ensemble) - see wiki:
http://wiki.apache.org/solr/SolrCloud#Zookeeper_chroot
On Tue, Apr 23, 2013 at 11:23 AM, bbarani bbar...@gmail.com wrote:
I have used multiple schema files by using
I have used multiple schema files by using multiple cores, but I'm not sure if I
will be able to use multiple schema configurations when integrating Solr with
ZooKeeper. Can someone please let me know if it's possible and if so, how?
--
View this message in context:
http://lucene.472066.n3
: Yes, you can effectively chroot all the configs for a collection (to
: support multiple collections in same ensemble) - see wiki:
: http://wiki.apache.org/solr/SolrCloud#Zookeeper_chroot
I don't think chroot is suitable for what's being asked about here ...
that would completely isolate two
Ah cool, thanks for clarifying Chris - some of that multi-config
management stuff gets confusing but much clearer from your
description.
Cheers,
Tim
On Tue, Apr 23, 2013 at 11:36 AM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Yes, you can effectively chroot all the configs for a
If I have a ZooKeeper cluster for my HBase cluster already, can I use the same
ZooKeeper cluster for my SolrCloud too?
2013/4/23 Timothy Potter thelabd...@gmail.com
Ah cool, thanks for clarifying Chris - some of that multi-config
management stuff gets confusing but much clearer from your
Yes - better use of existing resources. In this case, the chroot would
be helpful to keep Solr znodes separate from HBase's. Solr in steady state
doesn't put a lot of stress on ZooKeeper; for the most part my zk nodes
are snoozing.
On Tue, Apr 23, 2013 at 1:46 PM, Furkan KAMACI
On 4/23/2013 1:46 PM, Furkan KAMACI wrote:
If I have a ZooKeeper cluster for my HBase cluster already, can I use the same
ZooKeeper cluster for my SolrCloud too?
Yes, you can. It is strongly recommended that you use a chroot with the
zkHost parameter if you are sharing zookeeper. It's a really
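For example, the chroot is just appended to the zkHost connection string (the ensemble addresses below are made up):

```
-DzkHost=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr
```

Everything Solr stores in ZooKeeper then lives under the /solr znode, out of HBase's way.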
I will use Nutch with MapReduce to crawl huge amounts of data and use SolrCloud
to serve many users with fast response times. I actually wonder about the performance
implications of a separate ZooKeeper cluster versus using one for both HBase and Solr.
2013/4/23 Shawn Heisey s...@elyograg.org
On 4/23/2013 1:46 PM, Furkan
To minimize my own typing when setting up a Solr schema or config, I created
a simple Excel spreadsheet that reduces the amount of typing required.
Please feel free to use it if you find it useful.
solr_schema_shared.xlsx
Description: application/vnd.openxmlformats
Hello,
Would somebody be kind enough to help me, at least by giving some direction
for my research?
Regards
On 05/04/2013 15:59, contact_pub...@mail-impact.com wrote:
Hi all,
well, I'm a total newbie with Solr, and I need some help.
OK, a raw definition of my needs:
I have a product database,
more questions about each area.
It would help if you provided more details, e.g., what are the relationships
between various entities, and what the various fields mean. Ideally, you
would tell us what you have tried, and what is not working for you.
Please provide details about the schema, and what
if you provided more details, e.g., what are the relationships
between various entities, and what the various fields mean. Ideally, you
would tell us what you have tried, and what is not working for you.
Please provide details about the schema, and what queries you are
making, and what the expected
Hi all,
well, I'm a total newbie with Solr, and I need some help.
OK, a raw definition of my needs:
I have a product database, with ordinary fields to describe a product.
Name, reference, description, large description, product specifications,
categories etc...
The needs :
1 - Being able to
The purpose of the schema is to associate a type with a field name.
That's it.
A dynamic field associates a type with a range of names.
An empty field in a Lucene index doesn't take any space, so having 450
fields doesn't in itself cause a problem. The point at which you may
have a problem
. (Anonymous - via GTD book)
On Fri, Mar 15, 2013 at 6:14 AM, Upayavira u...@odoko.co.uk wrote:
The purpose of the schema is to associate a type with a field name.
That's it.
A dynamic field associates a type with a range of names.
An empty field in a Lucene index doesn't take any space, so
On Wed, Mar 6, 2013 at 7:50 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
2) If you wish to use the /schema REST API for read and write operations,
then schema information will be persisted under the covers in a data store
whose format is an implementation detail just like the index file
: 2) If you wish to use the /schema REST API for read and write operations,
: then schema information will be persisted under the covers in a data store
: whose format is an implementation detail just like the index file format.
:
: This really needs to be driven by costs and benefits
To revisit sarowe's comment about how/when to decide if we are using the
config file version of schema info (and the API is read only) vs the
internal managed state data version of schema info (and the API is
read/write)...
On Wed, 6 Mar 2013, Steve Rowe wrote:
: Two possible approaches
On Mon, Mar 11, 2013 at 2:50 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: 2) If you wish to use the /schema REST API for read and write operations,
: then schema information will be persisted under the covers in a data store
: whose format is an implementation detail just like
: we needed to, we could just assert that the schema file is the
: persistence mechanism, as opposed to the system of record, hence if
: you hand edit it and then use the API to change it, your hand edit may
: be lost. Or we may decide to do away with local FS mode altogether.
presuming
On Mon, Mar 11, 2013 at 5:51 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: I guess my main point is, we shouldn't decide a priori that using the
: API means you can no longer hand edit.
and my point is we should build a feature where solr has the ability to
read/write some piece of
ZK into the mix as our
centralized config server, we could start using it as such consistently. And
so instead of ZK storing a plain xml file, we split up the schema as native
ZK nodes […]
Erik Hatcher made the same suggestion on SOLR-3251:
https://issues.apache.org/jira/browse/SOLR-3251
On Mar 6, 2013, at 7:50 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
I think it would make a lot of sense -- not just in terms of
implementation but also for end user clarity -- to have some simple,
straightforward to understand caveats about maintaining schema
information...
1
On Mar 8, 2013, at 2:57 PM, Steve Rowe sar...@gmail.com wrote:
multiple collections may share the same config set and thus schema, so what
happens if someone does not know this and hits PUT
localhost:8983/solr/collection1/schema and it affects also the schema for
collection2?
Hmm, that's
I'm working on SOLR-3251 https://issues.apache.org/jira/browse/SOLR-3251, to
dynamically add fields to the Solr schema.
I posted a rough outline of how I propose to do this:
https://issues.apache.org/jira/browse/SOLR-3251?focusedCommentId=13572875page
bq. Change Solr schema serialization from XML to JSON, and provide an XML-JSON
conversion tool.
What is the motivation for the change? I think if you are sitting down and
looking to design a schema, working with the XML is fairly nice and fast. I
picture that a lot of people would start
In response to my thoughts about using DOM as an intermediate representation
for schema elements, for use in lazy re-loading on schema change, Erik Hatcher
argued against (solely) using XML for schema serialization
(https://issues.apache.org/jira/browse/SOLR-3251?focusedCommentId=13571631page
Hmm…I think I'm missing some pieces.
I agree with Erick that you should be able to load a schema from any object - a
DB, a file in ZooKeeper, you name it. But it seems by default, having that
object be schema.xml seems nicest to me. That doesn't mean you have to use DOM
or XML internally
I'm not sure what pieces you might be missing, sorry.
I had thought about adding a web UI for schema composition, but that would be a
major effort, and not in scope here.
I agree, though, especially without a full schema modification REST API, that
hand editing will have to be supported
schema seems
like a lot harder to deal with as a human than an XML schema file.
Hence the rest of my comments - just because we don't use the DOM or XML
internally doesn't seem to mean we need to do JSON through the entire pipeline
(eg the serialized representation)
- Mark
Basically, why have schema.json? Perhaps it's just me, but a json schema
seems like a lot harder to deal with as a human than an XML schema file.
Right, absolutely, the existence of schema.json assumes no human editing for
exactly this reason, so it's in direct conflict with the need
config server, we could start using it as such consistently. And so
instead of ZK storing a plain xml file, we split up the schema as native ZK
nodes:
configs
+configA
+--schema
+--version: 1.5
+--fieldTypes
| +---text_en tokenizer:foo, filters: [{name: foo, class
: As far as a user editing the file AND rest API access, I think that
: seems fine. Yes, the user is in trouble if they break the file, but that
Ignoring for a moment what format is used to persist schema information, I
think it's important to have a conceptual distinction between data
On Mar 6, 2013, at 4:50 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
i don't think it's
unreasonable to say if you would like to manipulate the schema using an
API, then you give up the ability to manipulate it as a config file on
disk
As long as you can initially work
. deploying config and/or schema changes without interrupting queries
Currently we do (1) with a straight-forward master/slave replication setup. N
master shards that handle updates and N slave shards replicating from these. In
this setup we can do (2) by temporarily stopping replication, deploying
the old master-slave architecture as one
option.
With a small amount of dev, having some polling replication for the index side
and using solrcloud for the search side might be possible, though not
necessarily a perfect marriage.
- Mark
Re (2): Deploying new schema/config should
star - blue - 9,
attribute_size:9, attribute_color:blue, price:49}
]
How can I index/query using a join in order to have only one shoe per
model (the best price, for example)?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Field-collapsing-bad-performances-schema-redesign
:
http://lucene.472066.n3.nabble.com/Field-collapsing-bad-performances-schema-redesign-tp4038359p4038500.html
--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics
http://www.griddynamics.com
mkhlud
On Mon, Feb 4, 2013 at 10:34 AM, Mickael Magniez
mickaelmagn...@gmail.com wrote:
group.ngroups=true
This is currently very inefficient - if you can live without
retrieving the total number of groups, performance should be much
better.
-Yonik
http://lucidworks.com
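For context, ngroups rides along on an ordinary grouped request, e.g. (the field name is assumed):

```
q=*:*&group=true&group.field=model&group.ngroups=true
```

Dropping group.ngroups=true (or leaving it at its default of false) avoids the expensive total-group count.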
result page, so I have to
retrieve the total number of groups
--
View this message in context:
http://lucene.472066.n3.nabble.com/Field-collapsing-bad-performances-schema-redesign-tp4038359p4038377.html
--
Sincerely yours
Mikhail
On 1/24/13 11:22 PM, Fadi Mohsen wrote:
Thanks Per, would the first approach involve restarting Solr?
Of course ZK needs to be running in order to load the config into ZK. Solr nodes
do not need to be running. If they are, I can't imagine that they would need to
be restarted in order to take advantage of the new
POST documents, but the issue
here is the schema.xml.
Is it possible to HTTP POST the schema via Solr to Zookeeper?
Or do I have to know about a service host/IP other than Solr, such as
ZooKeeper? (I wanted to understand whether there is a way to avoid knowing
about ZooKeeper in production.)
This must
data on demand, so my
initial thought is to use different HTTP methods to create a collection
in the cluster and then right away start HTTP POSTing documents, but the issue
here is the schema.xml.
Is it possible to HTTP POST the schema via Solr to Zookeeper?
Or do I have to know about other service host
On 1/24/13 4:51 PM, Per Steffensen wrote:
2) or You can have an Solr node (server) load a Solr config into ZK
during startup by adding collection.configName and bootstrap_confdir
VM params - something like this
java -DzkHost=zk_connection_str
-Dcollection.configName=logical_solr_config_name
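A fuller sketch of that startup command (paths are placeholders; the parameter names are as documented for SolrCloud):

```
java -DzkHost=zk_connection_str \
     -Dcollection.configName=logical_solr_config_name \
     -Dbootstrap_confdir=./solr/collection1/conf \
     -jar start.jar
```

On startup, the contents of the conf directory are uploaded to ZooKeeper under the named config set.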
methods to create a collection
in the cluster and then right away start HTTP POSTing documents, but the issue
here is the schema.xml.
Is it possible to HTTP POST the schema via Solr to Zookeeper?
I've done some work towards this at
https://issues.apache.org/jira/browse/SOLR-4193
Or do I have
to HTTP POST the schema via Solr to Zookeeper?
I've done some work towards this at
https://issues.apache.org/jira/browse/SOLR-4193
Or do I have to know about a service host/IP other than Solr, such as
ZooKeeper (I wanted to understand whether there is a way to avoid knowing
about ZooKeeper
On Jan 24, 2013, at 5:22 PM, Fadi Mohsen fadi.moh...@gmail.com wrote:
The reasons we would like to avoid ZooKeeper are:
* lack of knowledge.
* the amount of work/scripting for developers per module and release
documentation.
* the extra steps of patching ZK nodes for QA and
around, think of Solr as a retrieval system, not a
storage system. What are your queries? What do you want to find, and what
criteria do you use to search for it?
[...]
Um, he did describe his desired queries, and there was a reason
that I proposed the above schema design.
He said he wants
On 14 January 2013 16:59, Jens Grivolla j+...@grivolla.net wrote:
[...]
Then please show me the query to find users that are fluent in Spanish and
English. Bonus points if you manage to not retrieve the same user several
times. (Hint, your schema stores only one language skill per row).
Doh
On 01/14/2013 12:50 PM, Gora Mohanty wrote:
On 14 January 2013 16:59, Jens Grivolla j+...@grivolla.net wrote:
[...]
Then please show me the query to find users that are fluent in Spanish and
English. Bonus points if you manage to not retrieve the same user several
times. (Hint, your schema
into what the schema defines.
2 Do it in the middleware when assembling the query to pass through. Be
careful with the translations though, there always seem to be edge cases.
3 What you're suggesting. Unless you're really fluent in parsers (they
give me indigestion) I'd think about a query component
On 14 January 2013 17:28, Jens Grivolla j+...@grivolla.net wrote:
On 01/14/2013 12:50 PM, Gora Mohanty wrote:
[...]
Doh! You are right, of course. Brainfart from my side.
Ok, I was starting to wonder if I was the one missing something. Re-reading
what I wrote I see I may have sounded a bit
Hi!
I'm quite new to Solr and trying to understand how to create a schema from
our Postgres database and then search for the content in Solr instead of
querying the DB.
My question should be really easy; it has most likely been asked many times, but
still I'm not able to google any answer
Hi Niklas,
Maybe this link helps:
http://www.coderthing.com/solr-with-multicore-and-database-hook-part-1/
D.
On Fri, Jan 11, 2013 at 2:19 PM, Niklas Langvig
niklas.lang...@globesoft.com wrote:
Hi!
I'm quite new to Solr and trying to understand how to create a schema from
our
to have to
query the database for users' courses and languages and update everything, but
just update a course document.
But perhaps I'm thinking too much in database terms?
But still I'm unsure what the schema should look like.
Thanks
/Niklas
-Original Message-
From: Niklas Langvig
@lucene.apache.org
Subject: configuring schema to match database
Hi!
I'm quite new to Solr and trying to understand how to create a schema from
our Postgres database and then search for the content in Solr instead of
querying the DB.
My question should be really easy; it has most likely been asked many times
@lucene.apache.org
Subject: Re: configuring schema to match database
Hi Niklas,
Maybe this link helps:
http://www.coderthing.com/solr-with-multicore-and-database-hook-part-1/
D.
On Fri, Jan 11, 2013 at 2:19 PM, Niklas Langvig niklas.lang...@globesoft.com
wrote:
Hi!
I'm quite new to solr
-Original Message-
From: Dariusz Borowski [mailto:darius...@gmail.com]
Sent: 11 January 2013 14:56
To: solr-user@lucene.apache.org
Subject: Re: configuring schema to match database
Hi Niklas,
Maybe this link helps:
http://www.coderthing.com/solr-with-multicore-and-database-hook-part-1
: configuring schema to match database
Hi,
No, it has actually two tables, User and Item. The example shown on the blog is
for one table; you just repeat the same thing for the other table. Only your
data-import.xml file changes. For the rest, just copy and paste it into the conf
directory. If you
a good solution, I just now need to understand how to query
multiple cores :)
-Original Message-
From: Dariusz Borowski [mailto:darius...@gmail.com]
Sent: 11 January 2013 15:15
To: solr-user@lucene.apache.org
Subject: Re: configuring schema to match database
Hi
, lastname. Courses has columns
coursename, startdate, enddate. Languages has columns language,
writingskill, verbalskill.
[...]
I would like to put this data into Solr so I can search for all
users who have taken courseA and are fluent in English.
Can I do that?
1. Your schema for the single
...@mimirtech.com]
Sent: 11 January 2013 15:55
To: solr-user@lucene.apache.org
Subject: Re: configuring schema to match database
On 11 January 2013 19:57, Niklas Langvig niklas.lang...@globesoft.com wrote:
Ahh sorry,
Now I understand,
Ok, it seems like a good solution; I just now need to understand how
not really be of concern. As
your courses and languages tables are connected only to user, the
schema that I described earlier should suffice. To extend my
earlier example, given:
* userA with courses c1, c2, c3, and languages l1, l2
* userB with c2, c3, and l2
you should flatten it such that you get
only to user, the
schema that I described earlier should suffice. To extend my
earlier example, given:
* userA with courses c1, c2, c3, and languages l1, l2
* userB with c2, c3, and l2
you should flatten it such that you get the following Solr documents
userA c1 name c1 startdate...l1 l1 writing
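As a sketch, the flattened documents for userA might look like the following (field names and values are placeholders, one document per course/language pair):

```xml
<doc>
  <field name="username">userA</field>
  <field name="coursename">c1</field>
  <field name="startdate">2012-09-01T00:00:00Z</field>
  <field name="language">l1</field>
  <field name="writingskill">fluent</field>
</doc>
<doc>
  <field name="username">userA</field>
  <field name="coursename">c1</field>
  <field name="startdate">2012-09-01T00:00:00Z</field>
  <field name="language">l2</field>
  <field name="writingskill">basic</field>
</doc>
```

Each flattened document then carries one course and one language, so a query combining a course and a language constraint matches within a single document.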
system, not a
storage system. What are your queries? What do you want to find, and what
criteria do you use to search for it?
[...]
Um, he did describe his desired queries, and there was a reason
that I proposed the above schema design.
UserA has taken courseA, courseB and courseC and has
There's no really easy way that I know of. I've seen several approaches
used though
1 do it in the UI. This assumes that your users aren't typing in raw
queries, they're picking field names from a drop-down or similar. Then the
UI maps the chosen fields into what the schema defines.
2 Do
Anyone have experience with internationalizing the field names in the SOLR
schema, so users in different languages can specify fields in their own
language? My first thoughts would be to create a custom search component or
query parser that would convert localized field names back
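One approach that avoids writing a custom parser is eDisMax field aliasing, where a request parameter maps an external (here, localized) field name onto the real field; the parameter spelling below follows the eDisMax documentation and is worth double-checking:

```
q=sujet:voitures&defType=edismax&f.sujet.qf=subject
```

Here a French user's field name "sujet" is resolved at request time to the underlying "subject" field, with no schema change.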
Hi,
Our requirement is to have a separate schema for every language, differing
in the field type definitions for language-based analysis. If I have a
standard schema which differs only in the language analysis part, which can
be inserted by any of the 3 methods in the schema.xml as mentioned
that arguably shouldn't be shown. I don't know if there's a way
to filter these out or not...
Best
Erick
On Fri, Dec 28, 2012 at 5:17 PM, jmlucjav jmluc...@gmail.com wrote:
Hi,
I have an index where the schema browser histogram reports some terms that I
never indexed. When you run a query to get
Hi,
I have an index where the schema browser histogram reports some terms that I
never indexed. When you run a query for those terms, you of course get
none. I optimized the index and saw the same issue. The field is a TrieIntField.
I think I might have seen some post about this (or a similar) issue
Personally I have never given it any attention, so I suspect it doesn't
matter much.
Upayavira
On Thu, Dec 20, 2012, at 05:08 AM, Alexandre Rafalovitch wrote:
Hello,
In the schema.xml, we have a name attribute on the root node. The
documentation says it is for display purposes only. But for
is optional (as per the source code, but no
mention in doc/comments) and defaults to 1.0, with no warning.
-- Jack Krupansky
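For reference, both attributes under discussion sit on the root element of schema.xml:

```xml
<schema name="example" version="1.5">
```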
-Original Message-
From: Alexandre Rafalovitch
Sent: Thursday, December 20, 2012 12:08 AM
To: solr-user@lucene.apache.org
Subject: Where does schema.xml's schema
: Alexandre Rafalovitch
Sent: Thursday, December 20, 2012 12:08 AM
To: solr-user@lucene.apache.org
Subject: Where does schema.xml's schema/@name displays?
Hello,
In the schema.xml, we have a name attribute on the root node. The
documentation says it is for display purposes only. But for display
Yeah... not sure how I missed it, but my search sees it now.
Also, the name will default to schema.xml if you leave it out of the
schema.
-- Jack Krupansky
-Original Message-
From: Mikhail Khludnev
Sent: Thursday, December 20, 2012 3:06 PM
To: solr-user
Subject: Re: Where does
to be working. (Anonymous - via GTD book)
On Fri, Dec 21, 2012 at 7:50 AM, Jack Krupansky j...@basetechnology.comwrote:
Yeah... not sure how I missed it, but my search sees it now.
Also, the name will default to schema.xml if you leave it out of the
schema.
-- Jack Krupansky
: On another hand, having @version default to 1.0 is probably an oversight,
: given the number of changes present. Should it not default to latest, or
: at least to 1.5 (and change periodically)?
If the default value changed, then users w/o a version attribute in their
schema would suddenly
it not default to latest
or
: at least to 1.5 (and change periodically)?
If the default value changed, then users w/o a version attribute in their
schema would suddenly get very different behavior if they upgraded from
one version of Solr to the next.
-Hoss
in context:
http://lucene.472066.n3.nabble.com/how-to-understand-this-benchmark-test-results-compare-index-size-after-schema-change-tp4026674p4027544.html
, Dec 13, 2012 at 2:56 AM, Jie Sun jsun5...@yahoo.com wrote:
I cleaned up the Solr schema by changing a small portion of the stored fields
to stored=false.
out for 5000 documents (about 500M total size of original documents), I ran a
benchmark comparing the Solr index size between the schema
I cleaned up the Solr schema by changing a small portion of the stored fields
to stored=false.
out for 5000 documents (about 500M total size of original documents), I ran a
benchmark comparing the Solr index size between the schema before/after the
clean up.
the first run showed about 40
On Wed, Oct 17, 2012 at 3:20 PM, Dotan Cohen dotanco...@gmail.com wrote:
I do have a Solr 4 Beta index running on Websolr that does not have
such a field. It works, but throws many Service Unavailable and
Communication Error errors. Might the lack of the _version_ field be
the reason?
On Thu, Nov 22, 2012 at 9:26 PM, Nick Zadrozny n...@onemorecloud.com wrote:
Belated reply, but this is probably something you should let us know about
directly at supp...@onemorecloud.com if it happens again. Cheers.
Hi Nick. This particular issue was on a Solr 4 instance on AWS, not on
the
related metadata too.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-defining-Schema-structure-trouble-tp4020305p4021531.html
-- Jack Krupansky
-Original Message-
From: denl0
Sent: Wednesday, November 21, 2012 4:01 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr defining Schema structure trouble.
isn't it possible to combine the document-related values and page-related
values at query time?
Book1
Page1
/Solr-defining-Schema-structure-trouble-tp4020305p4020471.html
Ah... sure, you can create a schema that has several different document
types in it, with extra fields that are used in some but not all documents -
books have the metadata fields but no page bodies while pages have page
bodies but no metadata. And maybe even do a Solr join for the block