Hello Paul,

Thank you for your answer. I saw your presentation at OR; yes, you are
right, this kind of automation is the way to go.

However, for small projects multitenancy still makes sense. For example,
we are just helping our archaeologists preserve the data for the
artifacts they collected at a certain locality - only about 300 items.
There won't be more in this project (since they probably will not find
anything more there).

In such a case, scalability would be OK (the target audience is quite
small). Being left with just the assetstore after a serious breakdown (I
would say disaster :-)) could be trouble, but with backups of the
database and related things as well... Nevertheless, you are right, it
is a risk.

Regards,

Vlastik


On 8/14/19 7:41 AM, Paul Münch wrote:
> Hello Vlastik,
> 
> we run a handful of DSpace instances at our institution. For every
> service we have a single instance, independent of the number of items.
> We discussed your solution, too, but decided against it for the
> following reasons:
> 
> * reduced scalability and maintainability if more services or users
> are added
> * all data sit in one assetstore, unassigned to a service - in case of
> a serious breakdown it could be very hard to separate the data and
> move them consistently to another system (DSpace or something else)
> 
> For us the best way is to install and maintain one instance per
> service. If there are only a few items, your single VM doesn't need
> that much power either.
> 
> I'm not sure whether you talked to my colleague, but we presented our
> DSpace clustering at OR2019
> (https://www.conftool.net/or2019/index.php?page=browseSessions&form_session=349).
> Maybe the setup is interesting for you - besides the Puppet
> configuration. At our institution we run both setups: all components
> on a single VM, and each component on its own VM (clustering).
> 
> Kind regards,
> 
> Paul Münch
> 
> On 13.08.19 at 17:00, Vlastimil Krejčíř wrote:
>> Hi all,
>>
>> I would like to raise/discuss a (probably?) new topic: a
>> multitenant/multisite DSpace. What exactly do I mean?
>>
>> I want to host several projects in one instance of DSpace - for
>> example, an archive of mathematics and an archive for archaeologists.
>> Both are rather small (hundreds or low thousands of items), but with
>> different metadata and different structures. Another example is
>> providing archives for two different institutions. (Yes, I could
>> install two DSpace instances with the help of tools like Puppet and
>> Docker; however, especially for small projects it makes sense to have
>> just one instance of DSpace.)
>>
>> I would like to achieve absolute transparency to end users:
>>
>> 1. both projects have separate URLs
>> 2. different web design
>> 3. separate data for browse and search
>> 4. different metadata schemas (or just metadata fields)
>> 5. separate handles (prefixes)
>> 6. separate administration
>>
>>
>> I have thought about these points, and my guess is that I can
>> simulate the projects via communities (one project = one community):
>>
>> 1. possible using mod_rewrite (somehow; I haven't thought through the details)
>> 2. definitely possible via XMLUI (not sure about Angular in DSpace 7)
>> 3. possible using browse and search the community
>> 4. possible
>> 5. not possible (or need further development)
>> 6. probably possible
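>>
>> For point no. 1, I imagine roughly something like the following
>> Apache sketch (the hostname and the community handle below are made
>> up for illustration; this is just an idea, not a tested
>> configuration):
>>
>>     <VirtualHost *:80>
>>         # Hypothetical hostname for one project
>>         ServerName math-archive.example.org
>>         RewriteEngine On
>>         # Project root -> that project's community homepage in the
>>         # shared DSpace instance (handle is made up)
>>         RewriteRule ^/$ /xmlui/handle/123456789/10 [PT]
>>     </VirtualHost>
>>
>> A second VirtualHost with another ServerName would then map the other
>> project's URL onto its own community.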
>>
>> The question might be scalability, but I think it should be OK for
>> smaller projects (let's say up to ten projects, each of them with no
>> more than a few thousand items).
>>
>> I discussed this topic with some of you at OR 2019 in Hamburg and I
>> was redirected to this page:
>>
>> https://csufresno-dspace.calstate.edu/
>>
>> which is probably something like what I want to achieve (just click
>> through the communities - the URL and design will change).
>>
>> Is there anyone in the community who has already tried to solve
>> this? How do you technically solve, for example, point no. 1 on my
>> list? I would be really grateful if you could share your experience.
>>
>> Regards,
>>
>> Vlastik
>>
>>
>> -- 
>> All messages to this mailing list should adhere to the DuraSpace Code
>> of Conduct: https://duraspace.org/about/policies/code-of-conduct/
>> ---
>> You received this message because you are subscribed to the Google
>> Groups "DSpace Technical Support" group.
>> To unsubscribe from this group and stop receiving emails from it, send
>> an email to dspace-tech+unsubscr...@googlegroups.com
>> <mailto:dspace-tech+unsubscr...@googlegroups.com>.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/dspace-tech/081375bd-9f45-428f-9084-901b599178cb%40googlegroups.com
>> <https://groups.google.com/d/msgid/dspace-tech/081375bd-9f45-428f-9084-901b599178cb%40googlegroups.com?utm_medium=email&utm_source=footer>.
> 
> -- 
> Philipps-Universität Marburg | UB 
> Digitale Dienste | Deutschhausstraße 9 | D018
> Tel. +49 06421 28-24624  
> --
> 
