Hi Monika:
A few remarks inline.
Thanks,
Richard
On May 14, 2015, at 5:30 PM, Monika C. Mevenkamp moni...@princeton.edu wrote:
Princeton University has had an Open Access mandate in place since fall of
2011. Slowly but surely we are getting to the point of following through by
collecting
Hi Terry:
I’m away from the office with limited network, but based on a quick look at the
code, I have a few suggestions.
You don’t indicate how you run this task, but I’m assuming that for the large
collections at least, you are not doing it in the admin UI but with the
command-line tool.
It
Hi Nathan:
Not sure exactly what your process is, but the suppression of certain time-stamps (in
the .zip archive itself, e.g.) is deliberate: if archive-time was embedded,
then 2 AIPs of the same content would always differ (just by that time-stamp),
so their checksums would be different. That would
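The timestamp point can be illustrated with a short sketch (not DSpace code; the entry name and payload are invented): pinning every entry's date_time makes two archives of identical content byte-identical, so their checksums agree.

```python
import hashlib
import io
import zipfile

def make_aip(payload: bytes) -> bytes:
    """Build a zip in memory, pinning every entry's timestamp."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # Fixed date_time: archives of identical content stay byte-identical.
        info = zipfile.ZipInfo("object.properties", date_time=(1980, 1, 1, 0, 0, 0))
        zf.writestr(info, payload)
    return buf.getvalue()

a = hashlib.sha256(make_aip(b"handle=1721.1/123")).hexdigest()
b = hashlib.sha256(make_aip(b"handle=1721.1/123")).hexdigest()
print(a == b)  # True: same content, same checksum
```

If the archive-creation time were written into the zip instead, the two digests would differ even though the payload is the same.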
Hi Charlene:
It's not a bug, just a confusing name coincidence: you might assume that the
query variable 'scope' expects the same values in discovery and
opensearch, but it doesn't. In the opensearch case, use:
scope=1721.1/123
i.e. the handle of the community or collection you wish to
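To make the distinction concrete, here is a hypothetical sketch of building such a request; the host and endpoint path are placeholders, and only the scope=handle convention comes from the explanation above.

```python
from urllib.parse import urlencode

# Base URL is a placeholder for your repository's OpenSearch endpoint.
base = "https://dspace.example.edu/open-search/discover"
params = {
    "query": "dc.title:climate",
    "scope": "1721.1/123",  # handle of the community/collection, not a discovery scope value
    "format": "atom",
}
url = base + "?" + urlencode(params)
print(url)
```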
Hi Nathan:
Not sure I have one lying around, but the bag contents are pretty simple:
All bags have a payload file 'object.properties' that contains the object type
(Item, collection, community),
the handle, and the parent handle. They also have a payload file called
'metadata.xml' that has
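As a rough illustration of consuming such a bag, here is a sketch of parsing a simple key=value 'object.properties' payload; the key names shown are assumptions based on the description (object type, handle, parent handle), not the actual property names.

```python
def parse_properties(text: str) -> dict:
    """Parse simple 'key = value' lines, skipping blanks and comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """\
objectType = item
objectId = 1721.1/123
ownerId = 1721.1/100
"""
print(parse_properties(sample)["objectId"])  # 1721.1/123
```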
Hi Rodrigo:
Question 1:
Yes, you can upload EPUB files. Whether DSpace will automatically recognize
the format depends on:
* whether you have created a new EPUB format in your bitstream format
registry (it is not present by default). This is a standard administrative
Hi Matt:
A very cursory glance at the code suggests that it really is happy only with
'http' not 'https' URLs.
Can you restrict your tests to the former and confirm?
Thanks,
Richard R
(BTW, it would be simple to extend the behavior to SSL use cases)
On Sep 13, 2013, at 9:01 AM, Matthew
. If this is a pain because there are a lot of
dates, just add the script to a 'cron' job that will
run automatically on a schedule that makes sense for you (nightly, weekly,
etc).
Hope this helps,
Richard Rodgers
On Sep 10, 2013, at 5:07 PM, Brown, Jacob wrote:
This is my first email
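For reference, a cron entry along the lines suggested above might look like this (script path, log path, and schedule are all placeholders, not from the original message):

```
# m h dom mon dow  command -- run nightly at 02:15, appending output to a log
15 2 * * * /dspace/bin/your-lift-script.sh >> /var/log/dspace/lift.log 2>&1
```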
This is already possible with OpenSearch - simply define and expose an O.S. URL
with the desired query.
Dereferencing the URL will return search results in RSS or Atom format.
Hope this helps,
Richard R
On Aug 14, 2013, at 1:43 PM, helix84 heli...@centrum.sk
wrote:
On Wed, Aug 14, 2013 at
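Since dereferencing the URL yields RSS or Atom, stock XML tooling is enough on the consuming side. A small sketch using a stand-in Atom feed in place of a live response:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Stand-in for the feed an OpenSearch URL would return.
feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Search results</title>
  <entry><title>First item</title></entry>
  <entry><title>Second item</title></entry>
</feed>"""

root = ET.fromstring(feed)
titles = [e.findtext(ATOM + "title") for e in root.findall(ATOM + "entry")]
print(titles)  # ['First item', 'Second item']
```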
Hi Jose:
The CC stuff was rewritten to use a more recent and better-supported web
service API from Creative Commons.
As part of this, the CC license URL bitstream was dropped in favor of
storing this value in a (configurable) metadata field.
There are a few other changes (see the
Hi folks:
Just to add to this thread, I think the best approach is to directly edit these
related CC fields (or add/remove bitstreams) as seldom as possible,
since it is too easy to introduce inconsistencies among them (e.g. the name
doesn't match the URI, or the bitstream, etc).
To address
Hi Andrea:
The case you cite is not as obvious to me: how can we assume that the single
PDF is the primary artifact (i.e. the one that the rest of the GS tags
describe)?
We have cases where (in an Item) the article is in Word, or LaTeX, and a
supplementary file is a PDF. In those cases the
Hi Jose:
With respect to question #1, there are actually 2 different ways to achieve
checking in submission:
(1) You can configure virus scan to be invoked directly in submission - see
config file (I think this is XMLUI only):
this:
virus-scan = false
seems like it should be true?
Thank you!
On Fri, Jan 11, 2013 at 11:49 AM, Richard Rodgers
rrodg...@mit.edu wrote:
Hi Jose:
With respect to question #1, there are actually 2 different ways to achieve
checking in submission:
(1) You can configure
Hi Mark:
I'd second helix's suggestion - I think those are quite reasonable and
generally useful features.
In fact, I've already implemented them (except the export) in a rewrite of
'ItemImport'. The problem is that
the code
.
Thanks,
Mark
On Fri, Jan 4, 2013 at 11:12 PM, Richard Rodgers
rrodg...@mit.edu wrote:
Hi Mark:
Have you looked at StructBuilder, a command-line tool used to create Communities
and Collections from XML files:
https://wiki.duraspace.org/display/DSDOC3x/Importing+Community
Hi Mark:
Have you looked at StructBuilder, a command-line tool used to create Communities
and Collections from XML files:
https://wiki.duraspace.org/display/DSDOC3x/Importing+Community+and+Collection+Hierarchy
Not sure if this fits your needs, but might be worth examining,
Hope this helps,
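For orientation, StructBuilder's input is an XML description of the hierarchy roughly like the following (element names here are from memory; verify against the documentation linked above before use):

```xml
<!-- Illustrative input: communities nest, collections hang off communities -->
<import_structure>
    <community>
        <name>Top Community</name>
        <community>
            <name>Sub Community</name>
            <collection>
                <name>A Collection</name>
            </collection>
        </community>
    </community>
</import_structure>
```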
Hi Peter:
This looks great and we'll have a closer look, since we are likewise in a 1.8
local upgrade process and would like to 'normalize' the CC License
representations.
A few cursory observations in the meantime:
I see the task sets the metadata fields, but does not delete the (now
Just FYI:
SRB classes are always in the code path (even if not using SRB storage) - just
a peculiarity of the implementation.
Thanks,
Richard
On Dec 18, 2012, at 12:46 PM, Chris King wrote:
Nope. We shouldn't be using SRB. I just checked dspace.cfg and all the SRB
properties are commented
Hi Tim:
Is the bitstream retrievable? If not, I think there may be just a quite
reasonable misunderstanding: 1.7 Embargo does not hide the metadata (or Item
page) from view; it only restricts access to the bitstreams. This is the
intended design. The 3.0 behavior is different, where (I
Hi Ying:
There is a companion to ItemImport called ItemUpdate that seems to match your
use-case better.
See:
https://wiki.duraspace.org/display/DSDOC3x/Updating+Items+via+Simple+Archive+Format
If memory serves, it has been available since 1.6 or so
Hope this helps,
Richard R
On Oct 24,
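Both ItemUpdate and ItemImport work on Simple Archive Format directories. This sketch builds a minimal one-item archive (the title and file contents are invented):

```python
import pathlib
import tempfile

# One directory per item: dublin_core.xml, a 'contents' manifest, and the
# bitstreams the manifest lists.
archive = pathlib.Path(tempfile.mkdtemp()) / "item_000"
archive.mkdir()
(archive / "dublin_core.xml").write_text(
    '<dublin_core>\n'
    '  <dcvalue element="title" qualifier="none">Sample item</dcvalue>\n'
    '</dublin_core>\n'
)
(archive / "contents").write_text("paper.pdf\n")
(archive / "paper.pdf").write_bytes(b"%PDF-1.4 placeholder")
print(sorted(p.name for p in archive.iterdir()))
# ['contents', 'dublin_core.xml', 'paper.pdf']
```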
Hi Rodrigo:
No - OpenSearch has been in DSpace since 1.6 for all UIs (JSP and XMLUI). There
is a discovery related issue in XMLUI 1.7+, but:
(1) If you don't run discovery (only available in XMLUI), there should be no
problem on any version 1.6 or later,
(2) There is a fix posted for 1.8+
From: Richard Rodgers [mailto:rrodg...@mit.edu]
Sent: Monday, September 10, 2012 12:59 PM
To: Calloni, Rodrigo
Cc: Andrea Bollini;
dspace-tech
Yes, as has been remarked, the bigger questions revolve around access and
usage, rather than ingest.
We recently did a pilot with large video files where we ingested them as
preservation masters (via ItemImport), suppressed the
download link, but offered in its place a link to a much smaller
Hi Kirti:
Not sure if you are having other problems, but I did want to clarify how
MediaFilter works.
It is a general set of tools for operating on your bitstream content, and the
primary use for most people
is to extract text (for indexing) from PDFs, Word files, etc., not to produce
thumbnails
PM, Mark Diggory wrote:
I'm determining if we would want to add some of these details to our Advanced
Embargo Support business/technical requirements for DSpace 3.0.
https://wiki.duraspace.org/display/DSPACE/Advanced+Embargo+Support
Best,
Mark
On Tue, Apr 3, 2012 at 2:52 PM, Richard Rodgers
Hi Magnus:
Another direction you may want to look into is the mediafilter curation work:
https://github.com/richardrodgers/ctask
This approach uses the Apache Tika text extraction library, which I am fairly
certain supports reading zip files.
The extracted text can then be indexed by normal
Ignasi:
A little more detail on Tim's point: the code paths are a little different when
submissions go through workflow:
Case 1 (which works) - SWORD deposit goes into workflow. When it exits
workflow, 'InstallItem.installItem()' gets called, which in turn calls
EmbargoManager to set embargo
I think Mark makes a number of good points here - esp. regarding modularity -
and it's worth emphasizing that the net effect should be *less* localization
effort, even if there are potentially more files, since one would only need to
worry about the locally deployed modules - but I'm a bit
Hi Sisay:
There is an administrative tool for this. See the documentation here:
https://wiki.duraspace.org/display/DSDOC18/Managing+Community+Hierarchy
Hope this helps,
Richard
On Jan 13, 2012, at 12:22 AM, Webshet, Sisay (ILRI) wrote:
Hello,
I want to move a dspace sub community
Hi Henry:
DSpace generally provides access to its assets only by returning the files.
However, modern browsers are generally quite good at presenting/rendering PDFs,
so if this is not happening, you should make sure that the Bitstream format (and
associated mime-type) are correct for your files.
Hi Henry:
Most emails generated by DSpace are controlled by 'templates' found in the
[dspace]/config/emails
directory. You can freely change any of the text in these templates - they are
essentially simple text files.
Be sure to keep a copy of your modifications, so after upgrades, etc you
Hi Phil:
The error comes from not having a value after the 'None:' in the
embargo.terms.days.property
The code requires that there be a number after the ':' - the 'forever' value
is the only exception to that rule (which looks fine in your property).
Just a quick thought - why have the
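An illustrative property along the lines discussed (the syntax is reconstructed from the description above, so verify against your DSpace configuration docs before use):

```
# Each entry needs a number after the ':'; 'forever' is the one allowed
# non-numeric value. A bare 'None:' with nothing after it causes the
# error described above.
embargo.terms.days = None:0, One year:365, Two years:730, Restricted:forever
```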
Hi Phil:
You would not have to rebuild at all, since the DayTable code is already
included in the base DSpace. It is a simple reconfiguration:
(1) Change the setter property in dspace.cfg:
plugin.single.org.dspace.embargo.EmbargoSetter =
org.dspace.embargo.DefaultEmbargoSetter
Change the
Hi Jesús:
A lot of statistics work has been done for DSpace over time, but each project
focuses on different sets of requirements:
does the data need to appear in the UI, does it offer real-time availability
(just to name two of the strengths of the SOLR-based system)?
One example of an
proverb
On 7/26/11 5:31 PM, Mark Diggory mdigg...@atmire.com wrote:
Hardy,
Be aware that MIT / Richard Rodgers also has some Bagit work available,
currently nested within the modules directory here:
http://scm.dspace.org/svn/repo/modules/dspace-replicate/trunk/src/main/java/org
Hi Tonny:
The embargo system is designed to protect bitstreams, not metadata. While it
certainly would be possible to alter OAI or other code to check for embargo
dates, this has not been done to the best of my knowledge. I am curious why,
given that the content will be inaccessible, is it
Hi Robin:
Wendy B will follow with details, but yes, IP sockets are built into the
design. The main reasons:
(1) Portability: desire not to restrict operation of the service/daemon to Unix
systems.
(2) Shareability: with an IP socket - you can have one daemon shared across
multiple DSpace clients
Hi Robin:
No objections, and it's long overdue. But a friendly amendment:
we have to keep in mind that the METS profile is not the same as the
X-Packaging (package type) in the SWORD protocol.
The latter has been a neglected and therefore somewhat problematic area, and
the work you propose
Hi George:
If you are using 1.6.0 + , I'd look at ItemUpdate, instead of ItemImport. You
can easily make single metadata field changes, or addition/deletions of
individual bitstreams, without wholesale replacement.
Consult the doc (chapter 8.5)
Thanks,
Richard
On Dec 20, 2010, at 11:48 AM,
Hi Sean:
I'm not sure what details you might want to include, but since the embargo
information is all carried in standard metadata fields,
you could (using whatever UI stack you are on JSPUI, XMLUI) have the UI detect
if the item is embargoed
(essentially, that just means that there is a
if they aren't present.
Thanks and sorry for any confusion - we will add a note in the docs about this
case.
Richard Rodgers
On Oct 7, 2010, at 4:21 PM, Marvin Weaver wrote:
I built 1.6.2 with embargo.field.terms = SCHEMA.ELEMENT.QUALIFIER and
embargo.field.lift = SCHEMA.ELEMENT.QUALIFIER
/dspace-api/src/main/java/org/dspace/app/mediafilter
for examples of other extractor media filters.
Then post any questions to the tech or dev list.
Hope that is helpful,
Richard Rodgers
On Sep 29, 2010, at 10:14 AM, Jizba, Richard wrote:
Hello,
Are there plans to add a PPT text extractor
On Sep 9, 2010, at 9:41 AM, Mark H. Wood wrote:
On Wed, Sep 08, 2010 at 06:18:18PM -0400, Richard Rodgers wrote:
If you look at the class DefaultEmbargoSetter (in org.dspace.embargo) the
method
'parseTerms' creates the lift date out of what EmbargoManager passes it
(which is the contents
Hi Tim:
Just a remark below:
Richard
On Sep 9, 2010, at 11:29 AM, Tim Donohue wrote:
I'd actually go one further and say:
(1) We should update the manual to make clearer (like Mark suggests)
AND
(2) We should work to ship 1.7 with a default embargo already setup
(i.e.
Hi Mark:
If you look at the class DefaultEmbargoSetter (in org.dspace.embargo) the method
'parseTerms' creates the lift date out of what EmbargoManager passes it (which
is the contents
of the metadata field configured for the 'terms'), and the next method in that
class
'setEmbargo' does the
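The day-table idea is easy to sketch outside of DSpace. This is not the actual Java class, just the shape of the logic: map the configured 'terms' string to a day count, then offset the install date.

```python
from datetime import date, timedelta

# Illustrative table: configured 'terms' strings -> embargo length in days.
DAY_TABLE = {"One year": 365, "Two years": 730}

def parse_terms(terms, installed):
    """Return the lift date for a terms value, or None if unrecognized."""
    days = DAY_TABLE.get(terms)
    if days is None:
        return None
    return installed + timedelta(days=days)

print(parse_terms("One year", date(2010, 9, 8)))  # 2011-09-08
```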
search.max-clauses set at 200,000 and I changed it to 4096 which is twice the
default. I also changed search.maxfieldlength from -1 (unlimited) to 10,000
for the same reason. What do you think? See our numbers below.
Thanks a bunch,
Sue
From: Richard Rodgers [mailto:rrodg...@mit.edu]
Sent: Monday
Hi Sue:
I don't have any immediate help, but I'm struck by how long the indexing job is
taking. I had a comparison done with one of our DSpace 1.6 repositories which
is about half the size of yours
(71,481 items), and is mostly text-based content (which I think yours is also?)
On not
the DSpace database connection is OK
- test-email: Test the DSpace email server settings OK
- sub-daily: Send daily subscription notices
- update-handle-prefix: Update handle records and metadata when moving from
one handle to another
From: Richard Rodgers [mailto:rrodg...@mit.edu]
Sent
Hi Richard:
Try this document for a fuller explanation. Let me know of any questions not
addressed in it. We will try to include it in the next release.
Thanks,
Richard R
On May 5, 2010, at 6:02 PM, Jizba, Richard wrote:
I finally realized that the Embargo Setter is reading dc.embargo.terms
[Attachments: DayTableEmbargoSetter.class, DayTableEmbargoSetter.java]
Hi Jason:
I haven't tested it, but here's a setter that might do what you want. I include
both the source and class files (just put the latter in your
Hi Jason:
Bit of an email glitch in my last reply: looks like the text became an
attachment. But the gist is:
I sent you a setter class (source and .class file) that I haven't tested, but
might do what you are looking for.
Let me know if you have any problems, or if the set-up description is
Hi Jose:
We are still working on improving the doc. I attach a draft that might answer
many of your questions.
But briefly, yes, you need to create any new metadata fields you want to use
for embargo, both in the metadata registry, and place them in input-forms.xml
If you don't want embargo
Hi Jason:
One thought: have you added those fields to your input-forms.xml? In other
respects, the embargo fields behave just like any other metadata, and can be
added to the default set - or any collection-specific set - of metadata fields
used in web submission. The tech doc has instructions
Jason:
Yes, that's right - input-forms must be updated. But I regard it on balance
an advantage that the embargo system 'inherits' all the configurability of
standard DSpace metadata -
not just in submission, but indexing, display and more (so you could, e.g. do
fielded search on embargo
on
them without a migration and re-submission ?
Cheers
hg.
On 31 March 2010 16:21, Richard Rodgers
rrodg...@mit.edumailto:rrodg...@mit.edu wrote:
Hi Jason:
One thought: have you added those fields to your input-forms.xml? In other
respects, the embargo fields behave just like any other
Hi Jason:
I can see that this might be confusing, so let me try to explain a little more
clearly.
At the most basic level, the field containing the 'terms' is where a submitter
specifies how the embargo should work for that item.
At the time of installation into the archive (i.e. when it
to be manually assigned a
lift date and have its read policies removed - the embargo stuff doesn't have
batch tools for this.
Does this address your question?
Richard
On Mar 31, 2010, at 2:59 PM, Hilton Gibson wrote:
On 31 March 2010 20:35, Richard Rodgers
rrodg...@mit.edumailto:rrodg...@mit.edu
Hi George:
A couple of observations: first, the dc.embargo.terms only get 'applied' when
an item is installed into the repository - it will have no effect on items
already in the repo. So to test, create a new Thesis, and submit it via the web
submission UI (or via batch, etc): be sure that
Hi Gary:
You didn't specify which version of DSpace you are using, but for the
just-released 1.6 version the answer is yes, mate: use the ItemUpdate tool (see
the doc).
Hope this helps,
Richard R.
On Mar 14, 2010, at 7:00 PM, Gary Browne wrote:
Hi all,
I asked this question in 2007
Hi Jose:
Yes, the display you cite is non-optimal (compared to earlier 1.4 behavior).
There is an improvement forthcoming in DSpace 1.6 based on better mime-typing
of the license bitstreams,
and after that, we hope to completely redo CC (using webservice, rather than
Iframes, restore the
Hi Stuart:
I'll take a crack at some of your questions: see remarks inline below.
Thanks,
Richard R
On Tue, 2009-12-08 at 17:40 +1300, stuart yeates wrote:
I have some questions about the Embargo plugin in 1.6. I'm basing this
on http://wiki.dspace.org/index.php/Embargo_1.6 and trolling
Hi Sue:
See remarks inline below, but the general answer is that the SRB
extension was not designed to partition storage along collection lines,
so I don't think it would help you out without a fair bit of additional
work. Also, SRB has been replaced with a new platform called iRods
versions of DSpace
fairly well.
Thanks,
Richard Rodgers
On Wed, 2009-09-30 at 11:59 -0500, Williams, Steven D wrote:
Does anyone have any information on Rich Metadata for Dspace with
Dwell? I have located the following page
http://www.dspace.org/new-user-training/Rich-Metadata-for-DSpace
committed to trunk version and if so, how can I access the search
interface.
Thanks,
Mika
2009/6/5 Richard Rodgers rrodg...@mit.edu:
Mika Alexandre:
There is a widely adopted set of conventions for expressing search results
in standard formats called OpenSearch - http
Mika Alexandre:
There is a widely adopted set of conventions for expressing search
results in standard formats called OpenSearch -
http://www.opensearch.org Mark Wood and I wrote an implementation for
DSpace that includes RSS and Atom, and is available on both the JSP and
XML UIs. We hope
Hi Andrew:
Here's a slightly different perspective that might help illuminate the
checker and its rationale. While I concur with Mark that there are
engineering issues with the implementation, I think it's a mistake to
view it as a *file* integrity system (for which - as Mark rightly
observes -
At MIT we came up with a similar approach, which takes some of the
grunt work out of managing the skips. We extended MediaFilter to detect PDFBox
(or other) exceptions, then automatically record their handles to a skip list,
which is used for any subsequent runs. We'd be glad to give you the code
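The approach can be sketched generically (function and file names are invented; the real code extends MediaFilter in Java): on an extraction failure the item's handle is appended to a skip file, which later runs consult first.

```python
import pathlib
import tempfile

def filter_with_skips(items, extract, skip_file):
    """Run extract() over (handle, data) pairs, remembering failures."""
    skips = set(skip_file.read_text().split()) if skip_file.exists() else set()
    for handle, data in items:
        if handle in skips:
            continue  # a previous run already recorded this failure
        try:
            extract(data)
        except Exception:
            with skip_file.open("a") as f:
                f.write(handle + "\n")

def fragile_extract(data):
    if data == b"bad":
        raise ValueError("unparseable bitstream")

skip = pathlib.Path(tempfile.mkdtemp()) / "skips.txt"
items = [("1721.1/1", b"ok"), ("1721.1/2", b"bad")]
filter_with_skips(items, fragile_extract, skip)
print(skip.read_text().split())  # ['1721.1/2']
```

Rerunning with the same inputs leaves the skip file unchanged, since the recorded handle is skipped rather than retried.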
at the moment. If you are interested, we
would be glad to share further details with you.
Sorry,
Richard Rodgers
Franzini, Gabriele [Nervianoms] wrote:
Hello,
We are exploring DSpace functionalities, and being in a regulated
environment we absolutely need to look first at the History (Audit
characterize as the truly
awful dirty work of ensuring unique filenames.
Food for thought,
Richard
On Sat, 2008-08-16 at 17:39 -0700, Mark Diggory wrote:
Richard,
I respectfully disagree with you.
On Aug 16, 2008, at 6:54 AM, Richard Rodgers wrote:
Hi Mark:
Let me explain
On Mon, 2008-08-18 at 19:23 +0100, Graham Triggs wrote:
Richard Rodgers wrote:
I do worry about opening door #1 [content rejection],
since taking assets as found seems pretty close to the bedrock
use-case for digital repositories - at least preservation-minded ones.
Well
, Richard Rodgers [EMAIL PROTECTED]
wrote:
On Fri, 2008-08-15 at 10:12 -0700, Mark Diggory wrote:
On Aug 15, 2008, at 9:36 AM, John Preston wrote:
Hi. Can anyone say how I can re-use a bitstream sequence
number. The
use case is the following
On Aug 15
On Fri, 2008-08-15 at 10:12 -0700, Mark Diggory wrote:
On Aug 15, 2008, at 9:36 AM, John Preston wrote:
Hi. Can anyone say how I can re-use a bitstream sequence number. The
use case is the following.
I have a item with a number of bitstreams which are my data files. I
also have a text
On Fri, Aug 15, 2008 at 1:40 PM, Richard Rodgers [EMAIL PROTECTED] wrote:
On Fri, 2008-08-15 at 10:12 -0700, Mark Diggory wrote:
On Aug 15, 2008, at 9:36 AM, John Preston wrote:
Hi. Can anyone say how I can re-use a bitstream sequence number. The
use case is the following.
I have
Hi Jose:
Looks like the doc is a little behind the code - you might have noticed
the thread where we are trying to rationalize the documentation process.
For now, the ItemImporter code is your best bet. But yes, the Bitstream
description can be added as you suggest, but note that the '\t' really
John:
I guess before making any firm recommendation, I'd need to know
what your requirements are. Do you, e.g., require transactional closure
over the content change that generates the first event and the additions
you want your consumer to make based on it? Can you describe what you
are trying
Hi John:
See below
Richard R
On Thu, 2008-06-19 at 11:13 -0500, John Preston wrote:
I guess before making any firm recommendation, I'd need to
know
what your requirements are. Do you, e.g., require
transactional closure
over the content change that
Thanks Mark - if you eventually also want a code integration site, we can
set something up at dspace-sandbox on GoogleCode fairly easily...
Richard R
Quoting Mark H. Wood [EMAIL PROTECTED]:
I've made a page on the wiki to collect ideas about A/V material:
Hi Feng-chien:
With regard to your first question, there is no support in 1.4.2 for
automatic replication or backup to a secondary store. This is certainly
a desirable feature, and it is 'on the radar' for future storage
development.
With regard to the second question, the 'ItemImport' batch
On Tue, 2008-03-18 at 13:14 +, Simon Brown wrote:
On 13 Mar 2008, at 21:34, Richard Rodgers wrote:
On Thu, 2008-03-13 at 16:23 +, Simon Brown wrote:
I'm still curious about the necessity of the cache, as our removing
it
had no noticeable impact on performance and in fact
Hi Simon:
While I don't doubt for a moment that there are undiscovered memory
leaks in DSpace, I'm not sure I follow the case you describe. By 'object
cache' I'm guessing you mean the cache that is held by the Context
object. This cache is private to the Context instance, and Contexts
as a rule
Hi Simon:
See remarks below...
Thanks,
Richard R
On Thu, 2008-03-13 at 16:23 +, Simon Brown wrote:
On 13 Mar 2008, at 15:51, Richard Rodgers wrote:
Hi Simon:
While I don't doubt for a moment that there are undiscovered memory
leaks in DSpace, I'm not sure I follow the case you
find any setting / file where that is set.
Do you know that?
cheers
maike
On 9-Jan-08, at 6:23 AM, Richard Rodgers wrote:
Hi Maike:
A few explanations to help unravel the enigma:
First, you should understand that there are two different licenses
involved
here, not a choice of one
Hi Maike:
A few explanations to help unravel the enigma:
First, you should understand that there are two different licenses involved
here, not a choice of one. The first - the deposit license - is (roughly)
a licence that the depositor grants to the repository. It is not optional,
and does not
Hi Mark:
Yes, sort of - but all I really wanted to see was 1 or 2 eggs I hadn't
laid myself, to make sure the abstraction was sufficiently flexible and
general. I think there is good awareness and appreciation of the need for
clean modular boundaries around services like storage, so you can be
believe
that several such attempts are underway (Sun's Honeycomb, e.g.). Anyone can
contribute with questions, criticism, implementation code, etc.
Thanks,
Richard Rodgers
Quoting Ravi S Sathish [EMAIL PROTECTED]:
Hi all,
I sent an email some days ago introducing myself and
my team @ Nirvana
On Thu, 2007-11-29 at 08:14 -0500, John S. Erickson wrote:
Richard Rodgers wrote:
(1) There is a lot of metadata in DSpace (and a lot more to come) that is
not related to user discovery (technical metadata, e.g) - this could live
in a triple store - but would not benefit from it. In fact
Hi Christophe:
See remarks below on Dwell...
Thanks,
Richard
On Fri, 2007-11-23 at 05:29 +0100, Christophe Dupriez wrote:
Hi MacKenzie, Mark and Jim!
Thanks for insisting on the idea of a client based interface!
DWELL:
I will explore Dwell further. I tried it with
Folks:
I'm currently administering the tech and dev lists and would gladly
reconfigure if the preponderance of opinion is in favor. I'm by no means
a mail admin, and was following the recommendations of the GNU mailman
docs, which I reproduce here:
reply_goes_to_list (general): Where are replies to
Hi Marcelo:
The sub-community functionality was designed as a single-parent
model. I'd need to study the code/DB schema to see what potential
problems there may be, but one question springs to mind immediately:
What behavior do you want when (one of) the parents is deleted?
Normally DSpace
Hi Tiago:
That limit was put in because the progress bar on the top starts to
distort the page beyond 6 (since it has a section for each step).
You can relax it by renumbering the steps in SubmitServlet (they are
constants in the source file), but be prepared to grapple with the
progress bar
the assetstore directory* that Lucene crawls. I just don't know
how that sort of thing is handled when using object-based storage.
On Thu, 2007-05-03 at 13:28 -0400, Richard Rodgers wrote:
Hi Cory:
Not sure about the limits of Lucene, but I think the larger point is
that the back-ends
Hi Cory:
Not sure about the limits of Lucene, but I think the larger point is
that the back-ends are expected only to hold the real content or assets.
Everything else (full-text indices and the like) is an *artifact* (it can be
recreated from the assets) that we don't need to manage in the same way.
the whole file, and then resend it, which would
obviously double the transfer time. But some compression/crypto schemes
don't work that way, so maybe we could be OK.
Thanks,
Richard
On Thu, 2007-04-19 at 21:23 +1200, Richard MAHONEY wrote:
Dear Richard,
On Thu, 2007-04-19 at 04:23, Richard Rodgers
Richard:
I'm putting up a prototype implementation of (inter alia) an S3 backend
on the DSpace wiki. (see 'PluggableStorage' page). Would love volunteers
to vet it (not ready for production).
Thanks,
Richard R.
On Thu, 2007-04-12 at 09:49 +1200, Richard MAHONEY wrote:
Dear Robert et al.,
Hi Mark:
You could do, but there is already a tool to accomplish this: check the
doc for CommunityFiliator.
Richard
On Tue, 2007-04-17 at 10:16 -0400, Mark H. Wood wrote:
Our initial community structure has been rethought, and now I need to
move some subcommunities to new locations in the