Re: [Koha] Error in SIP2's encoding when clients connect parallel in same time

2016-12-05 Thread Ahmet Melih Başbuğ
Hello.
I am continuing my tests of SIP2. In the latest test, the SIP server and its
processes had been running for 2.5 hours with no clients connected. When the
first client then connected to the SIP port, I saw that the characters were
corrupted.
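For what it's worth, this kind of corruption is often a charset mismatch: bytes sent as UTF-8 but decoded as Latin-1 (or vice versa) somewhere on the connection. The sketch below only illustrates the symptom; it is a guess at the mechanism, not a diagnosis of Koha's SIP server:

```python
# Illustration only: how a charset mismatch garbles text.
# A server sends UTF-8 bytes; a client (or a stale connection handler)
# decodes them as Latin-1, producing mojibake.
text = "café"                      # what the server means to send
wire = text.encode("utf-8")        # bytes on the SIP connection
garbled = wire.decode("latin-1")   # decoded with the wrong charset
print(garbled)                     # prints "cafÃ©"
```

If the problem only appears once clients connect in parallel, one possible thing to check is whether forked SIP processes share a connection handle or encoding state.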
Thanks
Ahmet Melih
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] (no subject)

2016-12-05 Thread Indranil Das Gupta
Hello Mehvish,

Thanks for clearing that up! :-)

Since you are using 16.05, you do not need to jump through those hoops
at all. In other words, the instructions in the PDF are not required.

Simply edit the "Other options" section of your MARC21 framework's 856
$u and set the value of "Plugin" to "upload.pl".

And fix your koha-conf.xml: the <upload_path> setting should read
/var/lib/koha/<instance>/uploads
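For reference, the relevant fragment of koha-conf.xml might look like the following; the instance name "library" is only an example, substitute your own:

```xml
<!-- Inside /etc/koha/sites/library/koha-conf.xml
     ("library" is a hypothetical instance name) -->
<config>
  <upload_path>/var/lib/koha/library/uploads</upload_path>
</config>
```

The directory must exist and be writable by the instance's system user, not just by "everyone" at the top level.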

hope this helps
-indranil

--
Indranil Das Gupta
L2C2 Technologies

Phone : +91-98300-20971
WWW  : http://www.l2c2.co.in
Blog: http://blog.l2c2.co.in
IRC : indradg on irc://irc.freenode.net
Twitter : indradg


On Tue, Dec 6, 2016 at 8:04 AM, Mehvish Farah  wrote:
> Hi Indranil,
> Thanks for your reply.
> Attached is the file that I followed.
>
> Version 16.5 and deb package
>
> I followed the file (attached) but I am stuck on step 6, where it says to
> upload docs while cataloguing. The book uploads, and then it says to check
> permissions.
>
> On Mon, Dec 5, 2016 at 1:50 PM, Indranil Das Gupta 
> wrote:
>>
>> Hi Mehvish
>>
>> Quick questions:
>>
>> a) what version of Koha are you using and how was it installed - (i)
>> deb package or tarball?
>>
>> b) what is the full path to this folder that you mention has all the
>> permissions for everyone?
>>
>> c) Have you linked your selected MARC21 framework's 856 $u field to
>> the upload.pl plugin?
>>
>> regards
>> -indranil
>> --
>> Indranil Das Gupta
>> L2C2 Technologies
>>
>> Phone : +91-98300-20971
>> WWW  : http://www.l2c2.co.in
>> Blog: http://blog.l2c2.co.in
>> IRC : indradg on irc://irc.freenode.net
>> Twitter : indradg
>>
>>
>> On Tue, Dec 6, 2016 at 2:37 AM, Mehvish Farah 
>> wrote:
>> > I am not able to upload ebooks into the designated folder. It says to
>> > check permissions, although the destination folder is open to everyone.
>> >
>> > --
>> > *Mehvish*
>> > ___
>> > Koha mailing list  http://koha-community.org
>> > Koha@lists.katipo.co.nz
>> > https://lists.katipo.co.nz/mailman/listinfo/koha
>
>
>
>
> --
> Mehvish
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] (no subject)

2016-12-05 Thread Indranil Das Gupta
Hi Mehvish

Quick questions:

a) what version of Koha are you using and how was it installed - (i)
deb package or tarball?

b) what is the full path to this folder that you mention has all the
permissions for everyone?

c) Have you linked your selected MARC21 framework's 856 $u field to
the upload.pl plugin?

regards
-indranil
--
Indranil Das Gupta
L2C2 Technologies

Phone : +91-98300-20971
WWW  : http://www.l2c2.co.in
Blog: http://blog.l2c2.co.in
IRC : indradg on irc://irc.freenode.net
Twitter : indradg


On Tue, Dec 6, 2016 at 2:37 AM, Mehvish Farah  wrote:
> I am not able to upload ebooks into the designated folder. It says to check
> permissions, although the destination folder is open to everyone.
>
> --
> *Mehvish*
> ___
> Koha mailing list  http://koha-community.org
> Koha@lists.katipo.co.nz
> https://lists.katipo.co.nz/mailman/listinfo/koha
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


[Koha] (no subject)

2016-12-05 Thread Mehvish Farah
I am not able to upload ebooks into the designated folder. It says to check
permissions, although the destination folder is open to everyone.
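One generic way to verify the effective permission is to test writability as the user Koha actually runs under, since an "open" destination folder does not help if a parent directory blocks traversal. A minimal sketch (the real uploads path mentioned in the comment is an assumption about a package install):

```python
import os
import tempfile

def can_write(path):
    """True if this process can create files inside `path`."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

# Stand-in directory for the demo; on a real server, run this as the
# Koha instance user against the configured upload path,
# e.g. /var/lib/koha/<instance>/uploads.
demo_dir = tempfile.mkdtemp()
print(can_write(demo_dir))   # prints True
```

Running the same check as your own login user proves nothing; it has to be the web server / Koha instance user.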

-- 
*Mehvish*
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


[Koha] Re-sending URLs in my previous message (RE: New opac detail view for records)

2016-12-05 Thread King, Fred
I knew that our e-mail system modified all the URLs in incoming messages for 
extra security, but I didn't realize it did the same to outgoing messages. My 
apologies. Try this:

www(dot)philobiblios(dot)info:81/authcatResults(dot)xsl
www(dot)philobiblios(dot)info:81/authcatDetail(dot)xsl
www(dot)philobiblios(dot)info:81/authcatUtils(dot)xsl

I set up a web server on port 81 so I can put up files, documentation, etc., 
for what I've done so other people can try similar projects. All I have right 
now are the XSL files; I hope to put up a "how to" in the next couple of months.

Fred 

-Original Message-
From: Koha [mailto:koha-boun...@lists.katipo.co.nz] On Behalf Of King, Fred
Sent: Monday, December 05, 2016 12:56 PM
To: 'Pedro Amorim'; Elaine Bradtke
Cc: koha@lists.katipo.co.nz
Subject: [EXTERNAL] Re: [Koha] New opac detail view for records

Sorry about the site being down. I rebooted the server and it’s working again. 
One of these days I’ll look into why that happens, but not today.



As far as I can tell, all you need to do is configure the XSLT files to have 
the right file names so they can find each other. Then go into Koha 
Administration – Global System Preferences – OPAC – Appearance and change the 
file locations for OPACXSLTDetailsDisplay and OPACXSLTResultsDisplay.



If you want to look at my modified xsl files, you should be able to see them at

http://www.philobiblios.info:81/authcatDetail.xsl

http://www.philobiblios.info:81/authcatResults.xsl

http://www.philobiblios.info:81/authcatUtils.xsl



(With apologies to all *real* XSLT coders. ☺ I know XSLT coding the way I know 
cataloging—I picked up things as I went along and I know it’s a bit messy.)



Fred King

Medical Librarian, MedStar Washington Hospital Center

fred.k...@medstar.net

202-877-6670

ORCID -0001-5266-0279



I have one of those metabolisms where I can eat whatever I want and my body 
converts it to energy and stores the excess as fat.

--Randall Munroe, xkcd



From: Pedro Amorim [mailto:pjamori...@gmail.com]

Sent: Monday, December 05, 2016 8:00 AM

To: Elaine Bradtke

Cc: King, Fred; koha@lists.katipo.co.nz

Subject: Re: [Koha] New opac detail view for records



Thanks all for your help,



Yes, I do have a test server and I'm doing all the changes in a Docker 
container, so if something goes wrong I just rollback to where it was initially.

Also, all the changes I make to core including XSLT files modifications go in 
the Dockerfile and hence are documented and need to be considered for any 
future upgrade. So all is well in that regard.



Unfortunately I won't be able to get back to this matter for a couple of days 
but I'll definitely update this thread when I get to it.

Fred, I'll start by following your advice, however I couldn't access the link 
you shared, it's showing a database error, at the time of writing.

Also "as long as Koha knows what they're called, and you can put them anywhere 
you want as long as Koha knows where they are", any special place (system 
preferences, any other xml config file) that need to mention these new files? 
Also, did you change the template files to show the new detail view option?



Thanks again,



Pedro Amorim



2016-11-28 16:48 GMT-01:00 Elaine Bradtke <e...@efdss.org>:

I agree with everything Fred said.  It's taken quite a bit of testing and

retesting to change something in the OPAC display. We managed to

accidentally  hide all the bibliographic data in the OPAC search results on

the first two tries.  Thank goodness for our test system.



On Mon, Nov 21, 2016 at 9:34 PM, King, Fred <fred.k...@medstar.net> wrote:



> Pedro,

>

> If you do end up modifying the XSLT files, here are a few tips:

>

> I created detail display and results display XSLT files for our local

> authors catalog (see 
> http://www.philobiblios.info

Re: [Koha] MySQL database server, dedicated vs virtual

2016-12-05 Thread Tajoli Zeno

Hi to all,

On 05/12/2016 at 18:18, Jonathan Druart wrote:

On a side note, we lack testers. Patches are in the queue to support new
versions of MySQL (out of the box, because you can still use MariaDB or
tweak the MySQL config to make it work).


In fact, if you want to try Koha on Ubuntu 16 LTS, the tweak to the MySQL
config (/etc/mysql.ini) is:


[mysqld]
sql_mode=IGNORE_SPACE,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

Source: 
http://askubuntu.com/questions/811831/whats-the-correct-way-to-revert-mysql-5-7-strict-mode-back-to-how-it-was-in-5-6


This is the same mode as in MySQL 5.6.
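To confirm the tweak took effect after restarting MySQL, you can inspect the active mode (standard MySQL, nothing Koha-specific):

```sql
-- STRICT_TRANS_TABLES should no longer appear in the result.
SELECT @@GLOBAL.sql_mode;
```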

Bye
Zeno Tajoli

--
Zeno Tajoli
/SVILUPPO PRODOTTI CINECA/ - Automazione Biblioteche
Email: z.taj...@cineca.it Fax: 051/6132198
*CINECA* Consorzio Interuniversitario - Sede operativa di Segrate (MI)
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] New opac detail view for records

2016-12-05 Thread Pedro Amorim
Hey Fred,

Thanks for your help and for sharing your resources; however, I think there's
been a misunderstanding.
I have already done that, and altered those template files to my needs in
order to define how and which fields are shown in the *normal* detail view.

I misinterpreted you and thought that what you did was add a new record
detail view to the original three that Koha brings out of the box.

-

Katrin, the fourth view I intend to add is called NP-405; I believe it's a
Portuguese bibliographic standard, and it resembles the ISBD style.

The detail view itself is beside the point: what I am looking for is the
technique for adding one. After that, we can use XSL (as with the normal
view) to create our own custom detail view.

Thanks again,

Pedro Amorim


2016-12-05 16:55 GMT-01:00 King, Fred :

> Sorry about the site being down. I rebooted the server and it’s working
> again. One of these days I’ll look into why that happens, but not today.
>
>
>
> As far as I can tell, all you need to do is configure the XSLT files to
> have the right file names so they can find each other. Then go into Koha
> Administration – Global System Preferences – OPAC – Appearance and change
> the file locations for OPACXSLTDetailsDisplay and OPACXSLTResultsDisplay.
>
>
>
> If you want to look at my modified xsl files, you should be able to see
> them at
>
> http://www.philobiblios.info:81/authcatDetail.xsl
>
> http://www.philobiblios.info:81/authcatResults.xsl
>
> http://www.philobiblios.info:81/authcatUtils.xsl
>
>
>
> (With apologies to all *real* XSLT coders. ☺ I know XSLT coding the way I
> know cataloging—I picked up things as I went along and I know it’s a bit
> messy.)
>
>
>
> Fred King
>
> Medical Librarian, MedStar Washington Hospital Center
>
> fred.k...@medstar.net
>
> 202-877-6670
>
> ORCID -0001-5266-0279
>
>
>
> I have one of those metabolisms where I can eat whatever I want and my
> body converts it to energy and stores the excess as fat.
>
> --Randall Munroe, xkcd
>
>
>
> *From:* Pedro Amorim [mailto:pjamori...@gmail.com]
> *Sent:* Monday, December 05, 2016 8:00 AM
> *To:* Elaine Bradtke
> *Cc:* King, Fred; koha@lists.katipo.co.nz
>
> *Subject:* Re: [Koha] New opac detail view for records
>
>
>
> Thanks all for your help,
>
>
>
> Yes, I do have a test server and I'm doing all the changes in a Docker
> container, so if something goes wrong I just rollback to where it was
> initially.
>
> Also, all the changes I make to core including XSLT files modifications go
> in the Dockerfile and hence are documented and need to be considered for
> any future upgrade. So all is well in that regard.
>
>
>
> Unfortunately I won't be able to get back to this matter for a couple of
> days but I'll definitely update this thread when I get to it.
>
> Fred, I'll start by following your advice, however I couldn't access the
> link you shared, it's showing a database error, at the time of writing.
>
> Also "as long as Koha knows what they're called, and you can put them
> anywhere you want as long as Koha knows where they are", any special
> place (system preferences, any other xml config file) that need to mention
> these new files? Also, did you change the template files to show the new
> detail view option?
>
>
>
> Thanks again,
>
>
>
> Pedro Amorim
>
>
>
> 2016-11-28 16:48 GMT-01:00 Elaine Bradtke :
>
> I agree with everything Fred said.  It's taken quite a bit of testing and
> retesting to change something in the OPAC display. We managed to
> accidentally  hide all the bibliographic data in the OPAC search results on
> the first two tries.  Thank goodness for our test system.
>
>
> On Mon, Nov 21, 2016 at 9:34 PM, King, Fred  wrote:
>
> > Pedro,
> >
> > If you do end up modifying the XSLT files, here are a few tips:
> >
> > I created detail display and results display XSLT files for our local
> > authors catalog (see http://www.philobiblios.info for the low-powered
> > test version). I discovered two wonderful things: you can name them
> > anything you want as long as Koha knows what they're called, and you can
> > put them anywhere you want as long as Koha knows where they are. (You
> also
> > have to sync those with the Utils.xsl file.) I put them in /var/www/html
> > and called them authcatUtils.xsl and authcatDetail.xsl.
> >
> > If you do modify them, I highly recommend that you document ALL your
> > changes with comments. Also, it's very easy to break them, so test each
> > change before you go on to the next. Go slowly!
> >
> > You do have a test server for this, right? :-)
> >
> > Fred King
> > Medical Librarian, MedStar Washington Hospital Center
> > fred.k...@medstar.net
> > 202-877-6670
> > ORCID -0001-5266-0279
> >
> > A learning e

Re: [Koha] New opac detail view for records

2016-12-05 Thread King, Fred
Sorry about the site being down. I rebooted the server and it’s working again. 
One of these days I’ll look into why that happens, but not today.

As far as I can tell, all you need to do is configure the XSLT files to have 
the right file names so they can find each other. Then go into Koha 
Administration – Global System Preferences – OPAC – Appearance and change the 
file locations for OPACXSLTDetailsDisplay and OPACXSLTResultsDisplay.

If you want to look at my modified xsl files, you should be able to see them at
http://www.philobiblios.info:81/authcatDetail.xsl
http://www.philobiblios.info:81/authcatResults.xsl
http://www.philobiblios.info:81/authcatUtils.xsl
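As a rough illustration of the pattern Fred describes (the file and class names here are examples, not his actual code), a custom OPAC detail stylesheet is an ordinary MARCXML-to-HTML transform that imports a shared utils file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of a custom OPAC detail stylesheet -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:marc="http://www.loc.gov/MARC21/slim">
  <xsl:import href="authcatUtils.xsl"/>
  <xsl:template match="/">
    <!-- Render the 245$a title; a real stylesheet handles many more fields -->
    <h1 class="title">
      <xsl:value-of
          select="//marc:datafield[@tag='245']/marc:subfield[@code='a']"/>
    </h1>
  </xsl:template>
</xsl:stylesheet>
```

Point OPACXSLTDetailsDisplay at the file's full path, as described above, and keep the import path in sync with wherever the utils file lives.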

(With apologies to all *real* XSLT coders. ☺ I know XSLT coding the way I know 
cataloging—I picked up things as I went along and I know it’s a bit messy.)

Fred King
Medical Librarian, MedStar Washington Hospital Center
fred.k...@medstar.net
202-877-6670
ORCID -0001-5266-0279

I have one of those metabolisms where I can eat whatever I want and my body 
converts it to energy and stores the excess as fat.
--Randall Munroe, xkcd

From: Pedro Amorim [mailto:pjamori...@gmail.com]
Sent: Monday, December 05, 2016 8:00 AM
To: Elaine Bradtke
Cc: King, Fred; koha@lists.katipo.co.nz
Subject: Re: [Koha] New opac detail view for records

Thanks all for your help,

Yes, I do have a test server, and I'm doing all the changes in a Docker
container, so if something goes wrong I just roll back to the initial state.
Also, all the changes I make to core, including XSLT file modifications, go
in the Dockerfile, and hence are documented and need to be considered for any
future upgrade. So all is well in that regard.

Unfortunately I won't be able to get back to this matter for a couple of days 
but I'll definitely update this thread when I get to it.
Fred, I'll start by following your advice; however, I couldn't access the
link you shared: it's showing a database error at the time of writing.
Also "as long as Koha knows what they're called, and you can put them anywhere 
you want as long as Koha knows where they are", any special place (system 
preferences, any other xml config file) that need to mention these new files? 
Also, did you change the template files to show the new detail view option?

Thanks again,

Pedro Amorim

2016-11-28 16:48 GMT-01:00 Elaine Bradtke <e...@efdss.org>:
I agree with everything Fred said.  It's taken quite a bit of testing and
retesting to change something in the OPAC display. We managed to
accidentally  hide all the bibliographic data in the OPAC search results on
the first two tries.  Thank goodness for our test system.

On Mon, Nov 21, 2016 at 9:34 PM, King, Fred <fred.k...@medstar.net> wrote:

> Pedro,
>
> If you do end up modifying the XSLT files, here are a few tips:
>
> I created detail display and results display XSLT files for our local
> authors catalog (see http://www.philobiblios.info for the low-powered
> test version). I discovered two wonderful things: you can name them
> anything you want as long as Koha knows what they're called, and you can
> put them anywhere you want as long as Koha knows where they are. (You also
> have to sync those with the Utils.xsl file.) I put them in /var/www/html
> and called them authcatUtils.xsl and authcatDetail.xsl.
>
> If you do modify them, I highly recommend that you document ALL your
> changes with comments. Also, it's very easy to break them, so test each
> change before you go on to the next. Go slowly!
>
> You do have a test server for this, right? :-)
>
> Fred King
> Medical Librarian, MedStar Washington Hospital Center
> fred.k...@medstar.net
> 202-877-6670
> ORCID -0001-5266-0279
>
> A learning experience is one of those things that says, 'You know that
> thing you just did? Don't do that.’
> --Douglas Adams
>
> -Original Message-
> From: Koha [mailto:koha-boun...@lists.katipo.co.nz] On Behalf Of Katrin
> Sent: Monday, November 21, 2016 4:09 PM
> To: koha@lists.katipo.co.nz
> Subject: Re: [Koha] New opac detail view for records
>
> Hi Pedro,
>
> the normal view can be changed by creating your own XSLT file and
> activating it using the OPACXSLTDetailsDisplay and XSLTDetailsDisplay
> system preferences.
>
> The ISBD view can be changed via the OPACISBD and ISBD system preferences.
>
> I would recommend not to change the templates or to create your own if
> you can avoid it, as it is likely to give you headaches when updating to
> a newer version later on. It's safer to use the preferences, CSS and
> jQuery.
>
> What is the fourth view

Re: [Koha] MySQL database server, dedicated vs virtual

2016-12-05 Thread Paul A

At 04:56 PM 12/5/2016 +, Jonathan Druart wrote:

Paul,
Hum sounds like I already answered you the same things a few months ago...


I'll reply off-list to Jonathan as it could possibly only create on-list 
noise...


P. 


___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] MySQL database server, dedicated vs virtual

2016-12-05 Thread Jonathan Druart
Paul,

Hum sounds like I already answered you the same things a few months ago...

Well, you are still comparing 3.8 with 3.18; the latest release is 16.11,
with Plack and memcached support.
What you missed (living in the past):
Bug 15342 - Performance 3.22 - Omnibus
Bug 7172 - Omnibus for Plack variable scoping problems
Bug 11998 - Syspref caching issues (and all other cache improvements)
And all related.

I will not list all the security issues we fixed during the last 4 years...

Out of respect for all the developers and testers working on Koha, I would
appreciate it if you could repeat your benchmarks with the stable and
maintained releases (i.e. with Plack and memcached).
If you find performance issues, I will be happy to find the bottlenecks and
improve them.

But please stop repeating endlessly that 3.18 is slower than 3.8. It does
not bring anything to the discussion.

On a side note, we lack testers. Patches are in the queue to support new
versions of MySQL (out of the box, because you can still use MariaDB or
tweak the MySQL config to make it work).


On Mon, 5 Dec 2016 at 16:33 Paul A  wrote:

> At 02:02 PM 12/4/2016 +, Marcel de Rooy wrote:
> >Please note:
> >Staying at Koha 3.8 is not recommended. The current release (16.11 ==
> >3.26) is already 9(!) release cycles behind..
> >And I would not advise others to do so.
>
> Marcel -- I don't know how you could read my comments as a "recommendation"
> or "advice" to stay with Koha 3.8.24. I mentioned, in response to RG's post
> "... was about 60-80% slower... ", that we had done some lengthy analysis
> of search speed (which I carried out with the assistance of the Koha
> community and openly made available) and made the disappointing decision to
> remain with that version, under our specific circumstances. And I concluded
> with "YMMV" which stands for "your mileage might vary."
>
> Perhaps I should have been clearer: Koha was our indisputable choice for an
> OPAC (3.4.x) back in 2011; we are not a lending library, so the undoubted
> enhancements for lending libraries are irrelevant to our needs; the
> corollary is that if such enhancements have negative effects on our
> requirements, we very definitely look at them (sandbox level) but will not
> use them in production (i.e. on the public web interface.)
>
> Our production servers are all Ubuntu LTS based: this is a two to five year
> cycle; we do not have the budget or necessary volunteer hours to review
> monthly or bi-monthly changes (except security concerns); our overriding
> concern is total, hands-off stability; I have no idea if we are the only
> Koha-based library that uses a similar LTS approach.
>
> Koha 3.8.24 was released 5/29/2014, corresponding to the Ubuntu 14 LTS
> cycle. We recently upgraded two of our production servers to Ubuntu 16 LTS
> for other databases, but unless I am mistaken Koha 16.11 does not install
> (or at least not easily in our experience, and we have tried) on that o/s.
> So we maintained an additional, dedicated 14 server (with three years
> remaining Ubuntu support to 2019) for the OPAC when we followed the Ubuntu
> cycle. Again, this was disappointing, and again this represents only our
> very specific needs.
>
> So, when I say "your mileage might vary" I mean exactly that -- we probably
> are atypical of many Koha users, but are extremely happy and proud of our
> Koha OPAC -- so, if it wasn't clear, let me state that our analysis of
> search speed is *not* any sort of recommendation.
>
> Best regards -- Paul
>
>
>
> >Marcel
> >
> >
> >
> >From: Koha  on behalf of Paul A
> >
> >Sent: Friday, December 2, 2016 21:53
> >To: koha
> >Subject: Re: [Koha] MySQL database server, dedicated vs virtual
> >
> >Roger,
> >
> >We also looked into the "search performance" last year. We are not a
> >lending library, so cataloguing and OPAC are our primary concerns. Please
> >see  for the detailed tech
> analysis
> >and conclusions.
> >
> >You mention below that "Koha does not benefit a lot from multiple CPU
> cores
> >since each CGI request is typically processed by one CPU except for the
> >Zebra searches and database queries which run as separate processes."
> >
> >I talked to Intel about CPU core cross-threading as it was a very obvious,
> >high-load, show-stopper with Zebra. The Zebra author never replied to my
> >queries. Intel were not optimistic, as kernel (and software) were not their
> >bailiwick. I do not know if Plack or Elastic have worked around this.
> >
> >These are hardware (perhaps software usage of core capability), not Koha,
> >restrictions. Tweaking memcached can have appreciable benefits. But the
> >bottom line remains that if a single CPU core hits 101-104%, search
> >functions descend into the "swimming upstream in treacle" world.
> >
> >We made the very conscious (but disappointing) decision to stay with Koha
> >3.8.24 based on our test results. I do spend quite some time testing

Re: [Koha] What is the point of the Undo Import into catalogue feature?

2016-12-05 Thread Paul A

At 01:28 PM 12/5/2016 +, Raymund Delahunty wrote:
I ran repeated tests for 48 hours or more. I think the indexes were only
partly updated: 7K out of 13K "remained". I have two files to deal with
tomorrow (3K and 30K small MARC records, bibs and items). I am tempted to
try the Undo again to see if it works. I am asking our support company,
PTFS Europe, about any issues there might be with the Zebra indexer. If it
goes wrong again, I'll ask for another index rebuild and give up on the
Undo feature.


I would agree with what François wrote re: Zebra. Just adding that the 
normal Koha "incremental" cron job appears to be bullet-proof for 
additions, and is normally totally reliable for "admin/staff" adds, 
deletes, edits, etc. We run it every minute and it's a matter of milliseconds.


However, we have had times where a complete (bibs *and* auths) re-index was 
required, but by the time the cataloguers told me "what they had done" I 
could not reproduce it. So I now have a weekly cron job for a complete 
re-index. Although it looks "long" on an SSH screen, the actual down-time 
of the Zebra server is just a few seconds, so our cataloguers have been 
told to call me if in doubt, and I do it manually. I'm sure PTFS can sort 
this out for you.


Paul



Ray Delahunty.


-Original Message-
From: Koha [mailto:koha-boun...@lists.katipo.co.nz] On Behalf Of Francois 
Charbonnier

Sent: 05 December 2016 13:14
To: koha@lists.katipo.co.nz
Subject: Re: [Koha] What is the point of the Undo Import into catalogue 
feature?


Hi,

Have you waited long enough for Zebra to re-index your catalogue? To me,
what you are describing is not a fault. Once you revert an imported batch,
it takes time for Zebra to remove the records from the indexes,
especially if you work with large files.

If you waited long enough and the indexes never got updated, I would say
it's faulty, yes. But if you reverted the import and searched for records
right away, I would say it's just that Zebra hasn't yet been able to
reindex everything...



François Charbonnier,


___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


[Koha] What's on in koha-devel #8

2016-12-05 Thread Jonathan Druart
Hello librarians and developers,

Koha 16.11 and its Debian packages have now been released [1].

As Mirko noted Debian Wheezy is no longer supported and Debian packages
won't be provided for 3.22 [2].

I should also add that some important bugs have been reported recently,
and it may be better to wait for the 16.11.1 release before
upgrading [3, 4, 5].
Note that if your database password contains Unicode characters, you should
also wait for bug 17720 (CSRF token is not generated correctly).

= Refactoring =
== Move to the Koha namespace ==
We keep moving the C4::Members module code into the Koha namespace, and a
few subroutines of C4::Circulation are going to be moved as well [6].

== Remove the biblioitems.marcxml field ==
I have been asking for help on bug 17196 (Move marcxml out of the
biblioitems table) for months now and this will certainly be my last
attempt to get attention on it. As we are at the beginning of the release
cycle, we should focus on getting it pushed soon. That way we will have time
to catch and fix bugs if we find any.

== Rewrite of the upload feature ==
A month ago Marcel started to refactor the upload section [7].
This is the groundwork to start improvements in this area. Once this is
pushed we will be able to add new features, so please test!

= Template Toolkit syntax for notices =
Kyle started a discussion some months ago [8] to switch from our home-made
syntax for notices to the Template Toolkit syntax.
That would bring us a lot of flexibility in writing notices.
But to do so, we need a plan! We already started to support this syntax
(inside 16.11) but then we have to decide what to do next.
Currently the plan would be to replace the default syntax (bug 15278), then
reveal in the editor that you can use this new syntax (bug 15277) and
finally add documentation to help people write/port their own notices
(bug 15276).
We need to get opinions on how we could move forward. Kyle is going
to start a new topic on the Koha mailing list for that. Stay tuned!
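For the curious, notices in Template Toolkit syntax look roughly like the fragment below; the exact variable names exposed to notices are an assumption here, so treat them as illustrative only:

```text
[%# Hypothetical due-date notice fragment in Template Toolkit syntax %]
Dear [% borrower.firstname %] [% borrower.surname %],

The item "[% item.biblio.title %]" is due back on [% checkout.date_due %].
```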

= Standardize our EXPORT =
On a more technical note, I need developers' attention on bug 17600
(Standardize the EXPORT). At the end of the last release, very bad bugs
appeared at the last minute because of our circular dependencies and the
way we export subroutines from modules. Everything is (quickly) explained
on that bug, and a patchset has been submitted for discussion. If you know
how we could fix this cleanly, please jump into the discussion.

The next general IRC meeting is on December 7th, 14 UTC.
https://wiki.koha-community.org/wiki/General_IRC_meeting_7_December_2016

The next dev IRC meeting is on December 14th, 13 UTC.
https://wiki.koha-community.org/wiki/Development_IRC_meeting_14_December_2016

Hope to see you there!

Cheers,
Jonathan

[1] https://koha-community.org/koha-16-11-released-2
[2]
http://lists.koha-community.org/pipermail/koha-devel/2016-November/043330.html
[3] Bug 17676 - Default COLLATE for marc_subfield_structure is not set
[4] Bug 17344 - Can't set guarantor in quick add brief form
[5] Bug 17709 - Article request broken
[6] Bug 17677 - Move C4::Circulation code to the Koha namespace
[7] Bug 17501 - Koha Objects for uploaded files
[8]
http://lists.koha-community.org/pipermail/koha-devel/2016-February/042316.html
___
Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] MySQL database server, dedicated vs virtual

2016-12-05 Thread Paul A

At 02:02 PM 12/4/2016 +, Marcel de Rooy wrote:

Please note:
Staying at Koha 3.8 is not recommended. The current release (16.11 == 
3.26) is already 9(!) release cycles behind..

And I would not advise others to do so.


Marcel -- I don't know how you could read my comments as a "recommendation" 
or "advice" to stay with Koha 3.8.24. I mentioned, in response to RG's post 
"... was about 60-80% slower... ", that we had done some lengthy analysis 
of search speed (which I carried out with the assistance of the Koha
community and openly made available) and made the disappointing decision to 
remain with that version, under our specific circumstances. And I concluded 
with "YMMV" which stands for "your mileage might vary."


Perhaps I should have been clearer: Koha was our indisputable choice for an 
OPAC (3.4.x) back in 2011; we are not a lending library, so the undoubted 
enhancements for lending libraries are irrelevant to our needs; the 
corollary is that if such enhancements have negative effects on our 
requirements, we very definitely look at them (sandbox level) but will not 
use them in production (i.e. on the public web interface.)


Our production servers are all Ubuntu LTS based: this is a two to five year 
cycle; we do not have the budget or necessary volunteer hours to review 
monthly or bi-monthly changes (except security concerns); our overriding 
concern is total, hands-off stability; I have no idea if we are the only 
Koha-based library that uses a similar LTS approach.


Koha 3.8.24 was released 5/29/2014, corresponding to the Ubuntu 14 LTS 
cycle. We recently upgraded two of our production servers to Ubuntu 16 LTS 
for other databases, but unless I am mistaken Koha 16.11 does not install 
(or at least not easily in our experience, and we have tried) on that o/s. 
So we maintained an additional, dedicated 14 server (with three years 
remaining Ubuntu support to 2019) for the OPAC when we followed the Ubuntu 
cycle. Again, this was disappointing, and again this represents only our 
very specific needs.


So, when I say "your mileage might vary" I mean exactly that -- we probably 
are atypical of many Koha users, but are extremely happy and proud of our 
Koha OPAC -- so, if it wasn't clear, let me state that our analysis of 
search speed is *not* any sort of recommendation.


Best regards -- Paul




Marcel



From: Koha  on behalf of Paul A 


Sent: Friday, 2 December 2016 21:53
To: koha
Subject: Re: [Koha] MySQL database server, dedicated vs virtual

Roger,

We also looked into the "search performance" last year. We are not a
lending library, so cataloguing and OPAC are our primary concerns. Please
see  for the detailed tech analysis
and conclusions.

You mention below that "Koha does not benefit a lot from multiple CPU cores
since each CGI request is typically processed by one CPU except for the
Zebra searches and database queries which run as separate processes."

I talked to Intel about CPU core cross-threading as it was a very obvious,
high-load, show-stopper with Zebra. The Zebra author never replied to my
queries. Intel were not optimistic, as kernel (and software) were not their
bailiwick. I do not know if Plack or Elasticsearch have worked around this.

These are hardware (perhaps software usage of core capability), not Koha,
restrictions. Tweaking memcached can have appreciable benefits. But the
bottom line remains that if a single CPU core hits 101-104%, search
functions descend into the "swimming upstream in treacle" world.
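Since memcached tuning came up: a quick way to see whether the cache is actually being hit is to ask memcached for its counters. This is only a sketch; it assumes memcached is listening on its default localhost:11211 and that netcat (`nc`) is installed, so adjust host and port to your own setup.

```shell
# Hedged sketch: query memcached's counters to see whether Koha's cache
# is being exercised. Assumes memcached on localhost:11211 (the default)
# and netcat installed; both are assumptions, not facts from the thread.
if command -v nc >/dev/null 2>&1; then
  printf 'stats\nquit\n' | nc -w 2 localhost 11211 \
    | grep -E 'cmd_get|get_hits|get_misses' || true
fi
MSG="memcached stats check finished"
echo "$MSG"
```

A high ratio of get_misses to get_hits may suggest the memcached namespace configured in koha-conf.xml does not match what the application is actually using.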

We made the very conscious (but disappointing) decision to stay with Koha
3.8.24 based on our test results. I do spend quite some time testing later
versions at "sandbox" level, but have not been able to reproduce "older"
search speed.

YMMV -- Paul
Production: Koha 3.8.24 on 14.04.2 LTS (GNU/Linux 3.13.0-43-generic
x86_64), 8-core I7 processors, 64 Gigs RAM, all SSD drives.




At 06:47 PM 12/2/2016 +0100, Roger Grossmann wrote:

>Hi Tobias,
>
>a few month ago we did comparisons measuring the Koha-OPAC-Search
>performance with plack and memcached enabled using version 3.22.
>Our tests were not scientifically organised. We wanted to gather
>experience running Koha in different environments. We used the same MySQL
>database and Koha version in all environments. We did two types of OPAC
>searches on a limited collection of about 50.000 titles: an 'a*'-search of
>the complete word index and some special searches with fixed title search
>terms.
>We tested the following environments:
>
>1) Full Koha installation on a cloud provider hosted VM (the performance
>of VMs of the specific hoster were high ranked in published comparisons):
>Debian 8 on a cloud hosted system with virtual 2CPUs, 8 GB RAM, 256 SSD
>2) Full Koha installation on a physical server: Debian 8 on a well
>equipped physical machine with 64 GB RAM, 4 processors, 250 GB SSD
>3) Full Koha installation on a kvm-VM on a physical 

[Koha] What is the point of the Undo Import into catalogue feature?

2016-12-05 Thread Raymund Delahunty
I ran repeated tests for up to 48 hours or more. I think the indexes were 
partly updated: 7K out of 13K "remained". I have two files to deal with 
tomorrow (3K, and 30K small MARC records- bibs and items). I am tempted to try 
the Undo again to see if it works. I am asking our support company PTFS Europe 
about any issues there might be with the zebra indexer. If it goes wrong again 
I'll ask for another index rebuild, and give up on the Undo feature.
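For anyone following this thread, one way to confirm what state a staged batch is actually in before or after a revert is to look at the import_batches table directly. This is a sketch only: it assumes a Debian-package install where `koha-mysql` exists, "library" is a stand-in for your instance name, and the column names should be double-checked against the schema of your Koha version.

```shell
# Hedged sketch: list recent staged-import batches and their status.
# "library" is a hypothetical instance name; koha-mysql comes with the
# Debian packages. Verify the column names against your own schema.
if command -v koha-mysql >/dev/null 2>&1; then
  sudo koha-mysql library <<'SQL'
SELECT import_batch_id, file_name, num_records, import_status
FROM import_batches
ORDER BY upload_timestamp DESC
LIMIT 10;
SQL
fi
MSG="import batch check finished"
echo "$MSG"
```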

Ray Delahunty.


-Original Message-
From: Koha [mailto:koha-boun...@lists.katipo.co.nz] On Behalf Of Francois 
Charbonnier
Sent: 05 December 2016 13:14
To: koha@lists.katipo.co.nz
Subject: Re: [Koha] What is the point of the Undo Import into catalogue feature?

Hi,

Have you waited long enough for Zebra to re-index your catalogue? To me, what 
you are describing is not faulty. Once you revert an imported batch, it takes 
time for Zebra to remove the records from the indexes, especially if you work 
with large files.

If you waited long enough and the indexes never got updated, I would say it's 
faulty, yes. But if you reverted the import and searched for records right away, 
I would say it's just Zebra that hasn't been able to reindex everything...


François Charbonnier,
Bibl. prof. / Chef de produits

Tél.  : (888) 604-2627
francois.charbonn...@inlibro.com 

inLibro | Spécialistes en technologies documentaires | www.inLibro.com 
 On 2016-12-05 at 03:31, Raymund Delahunty wrote:
> That is exactly the way I have used the feature many times in the past- well, 
> once to correct an incorrectly modified batch a few moments after the data 
> was loaded, when I spotted a second necessary correction I had missed (much 
> to my annoyance), and maybe 2 dozen times to rip out previously loaded (often 
> large) batches of records weeks after they had been loaded, once the data was 
> no longer needed.
>
> The problem isn’t that I don’t understand how to use the feature (indeed I 
> love it) but my recent experience suggests it doesn’t work correctly.
>
> Maybe 2 months ago I loaded 11,300 records for a patron-driven-acquisitions 
> program. They were records for streaming media supplied by Kanopy. We bought 
> 85 of the titles. I was asked to remove the records for the titles we had not 
> purchased. I expected it to be a simple task- press Undo Import, and pull all 
> the records out, and add the 85 records purchased back in (I had been 
> supplied with a file of our purchases).
>
> After the job completed, I searched on Kanopy in both staff side and OPAC and 
> found c.250 results, much to my surprise. (I expected to find zero records.) 
> When I clicked on the MARC 710 I was taken to a list of c.7,200 records via 
> (in effect) the authority record for Kanopy (Firm). I could also navigate to 
> that list on OPAC.
>
> However these tendrils were “faulty”. Clicking on the title (for example) in 
> OPAC, took the user to a server error 404- Page not found. (But if they 
> clicked on the 856 they were taken to the Kanopy platform).  In staff-side, 
> clicking on the record resulted in an error “Record not found” (well, 
> something similar). Attempting to export the MARC file by biblionumber 
> resulted in an empty export file. (I have supplied my screenshots and notes 
> directly to Fred King, who was asking what rubbish was left after my Undo).
>
> In short, my Undo had NOT worked properly. The database had to be re-indexed 
> to get rid of the tendrils. I have another “Undo” scheduled (two files, one 
> of 2,000 records, one of 30,000 records). I am tempted to have one more try, 
> before raising this issue in Bugzilla. I suspect the Undo will not work, and 
> that yet again we will have to have a re-index. And that in future we will 
> have to rely on Batch Record Deletion, rather than the delightful Undo 
> feature.
>
> Ray Delahunty
> University of the Arts London
>
>
> From: Joy Nelson [mailto:j...@bywatersolutions.com]
> Sent: 02 December 2016 14:53
> To: Raymund Delahunty 
> Cc: Koha 
> Subject: Re: [Koha] What is the point of the Undo Import into catalogue 
> feature?
>
> Ray-
> The undo import feature has saved me more than once as I realize immediately 
> after import that I have incorrectly modified the batch of records.  
> Immediately reverting the batch is useful.
> The other main use for the undo import feature is in ebooks as Fred King 
> talks about in the other thread.
> Thanks
> joy
>
> On Thu, Dec 1, 2016 at 10:26 PM, Raymund Delahunty 
> mailto:r.delahu...@arts.ac.uk>> wrote:
> We regularly import large files of MARC records into our Koha (16.05) 
> database which have to be deleted at a later date… sometimes months later, 
> and sometimes maybe 30,000 records. I have been using the Undo import 
> (sort-of “unstage”) as I found this functionality astoundingly useful. It 
> automated a task, reducing a tedious job to a couple of keystrokes.
>
> However I was dismayed to fin

Re: [Koha] What is the point of the Undo Import into catalogue feature?

2016-12-05 Thread Francois Charbonnier

Hi,

Have you waited long enough for Zebra to re-index your catalogue? To 
me, what you are describing is not faulty. Once you revert an imported 
batch, it takes time for Zebra to remove the records from the indexes, 
especially if you work with large files.


If you waited long enough and the indexes never got updated, I would say 
it's faulty, yes. But if you reverted the import and searched for records 
right away, I would say it's just Zebra that hasn't been able to reindex 
everything...



François Charbonnier,
Bibl. prof. / Chef de produits

Tél.  : (888) 604-2627
francois.charbonn...@inlibro.com 

inLibro | Spécialistes en technologies documentaires | www.inLibro.com 


On 2016-12-05 at 03:31, Raymund Delahunty wrote:

That is exactly the way I have used the feature many times in the past- well, 
once to correct an incorrectly modified batch a few moments after the data was 
loaded, when I spotted a second necessary correction I had missed (much to my 
annoyance), and maybe 2 dozen times to rip out previously loaded (often large) 
batches of records weeks after they had been loaded, once the data was no 
longer needed.

The problem isn’t that I don’t understand how to use the feature (indeed I love 
it) but my recent experience suggests it doesn’t work correctly.

Maybe 2 months ago I loaded 11,300 records for a patron-driven-acquisitions 
program. They were records for streaming media supplied by Kanopy. We bought 85 
of the titles. I was asked to remove the records for the titles we had not 
purchased. I expected it to be a simple task- press Undo Import, and pull all 
the records out, and add the 85 records purchased back in (I had been supplied 
with a file of our purchases).

After the job completed, I searched on Kanopy in both staff side and OPAC and 
found c.250 results, much to my surprise. (I expected to find zero records.) 
When I clicked on the MARC 710 I was taken to a list of c.7,200 records via (in 
effect) the authority record for Kanopy (Firm). I could also navigate to that 
list on OPAC.

However these tendrils were “faulty”. Clicking on the title (for example) in 
OPAC, took the user to a server error 404- Page not found. (But if they clicked 
on the 856 they were taken to the Kanopy platform).  In staff-side, clicking on 
the record resulted in an error “Record not found” (well, something similar). 
Attempting to export the MARC file by biblionumber resulted in an empty export 
file. (I have supplied my screenshots and notes directly to Fred King, who was 
asking what rubbish was left after my Undo).

In short, my Undo had NOT worked properly. The database had to be re-indexed to 
get rid of the tendrils. I have another “Undo” scheduled (two files, one of 
2,000 records, one of 30,000 records). I am tempted to have one more try, 
before raising this issue in Bugzilla. I suspect the Undo will not work, and 
that yet again we will have to have a re-index. And that in future we will have 
to rely on Batch Record Deletion, rather than the delightful Undo feature.

Ray Delahunty
University of the Arts London


From: Joy Nelson [mailto:j...@bywatersolutions.com]
Sent: 02 December 2016 14:53
To: Raymund Delahunty 
Cc: Koha 
Subject: Re: [Koha] What is the point of the Undo Import into catalogue feature?

Ray-
The undo import feature has saved me more than once as I realize immediately 
after import that I have incorrectly modified the batch of records.  
Immediately reverting the batch is useful.
The other main use for the undo import feature is in ebooks as Fred King talks 
about in the other thread.
Thanks
joy

On Thu, Dec 1, 2016 at 10:26 PM, Raymund Delahunty 
mailto:r.delahu...@arts.ac.uk>> wrote:
We regularly import large files of MARC records into our Koha (16.05) database 
which have to be deleted at a later date… sometimes months later, and sometimes 
maybe 30,000 records. I have been using the Undo import (sort-of “unstage”) as 
I found this functionality astoundingly useful. It automated a task, reducing a 
tedious job to a couple of keystrokes.

However I was dismayed to find that after a recent “Undo” of 13,000 records our 
database was left with over 7,000 “phantom records”: they didn’t exist, but the 
indexing had failed to remove all traces of them.

We were advised to use the batch record deletion tool, as the Undo feature 
wasn’t designed to be used in the way I was using it. “… it is meant to unstage 
records nearer to the point in time of being added”. (And what’s the point of 
that?) We had to have our database re-indexed to resolve the problem. Is there 
any point in the Undo feature if the indexer can’t cope? I hate to think what 
other dross I had left behind in earlier “Undos”!

Ray Delahunty
University of the Arts London

Re: [Koha] New opac detail view for records

2016-12-05 Thread Pedro Amorim
Thanks all for your help,

Yes, I do have a test server and I'm doing all the changes in a Docker
container, so if something goes wrong I just rollback to where it was
initially.
Also, all the changes I make to core including XSLT files modifications go
in the Dockerfile and hence are documented and need to be considered for
any future upgrade. So all is well in that regard.

Unfortunately I won't be able to get back to this matter for a couple of
days but I'll definitely update this thread when I get to it.
Fred, I'll start by following your advice; however, I couldn't access the
link you shared, as it's showing a database error at the time of writing.
Also, regarding "as long as Koha knows what they're called, and you can put
them anywhere you want as long as Koha knows where they are": is there any
special place (system preferences, or another XML config file) that needs to
reference these new files? And did you change the template files to show the
new detail view option?

Thanks again,

Pedro Amorim

2016-11-28 16:48 GMT-01:00 Elaine Bradtke :

> I agree with everything Fred said.  It's taken quite a bit of testing and
> retesting to change something in the OPAC display. We managed to
> accidentally hide all the bibliographic data in the OPAC search results on
> the first two tries.  Thank goodness for our test system.
>
> On Mon, Nov 21, 2016 at 9:34 PM, King, Fred  wrote:
>
> > Pedro,
> >
> > If you do end up modifying the XSLT files, here are a few tips:
> >
> > I created detail display and results display XSLT files for our local
> > authors catalog (see http://www.philobiblios.info for the low-powered
> > test version). I discovered two wonderful things: you can name them
> > anything you want as long as Koha knows what they're called, and you can
> > put them anywhere you want as long as Koha knows where they are. (You
> also
> > have to sync those with the Utils.xsl file.) I put them in /var/www/html
> > and called them authcatUtils.xsl and authcatDetail.xsl.
> >
> > If you do modify them, I highly recommend that you document ALL your
> > changes with comments. Also, it's very easy to break them, so test each
> > change before you go on to the next. Go slowly!
> >
> > You do have a test server for this, right? :-)
> >
> > Fred King
> > Medical Librarian, MedStar Washington Hospital Center
> > fred.k...@medstar.net
> > 202-877-6670
> > ORCID -0001-5266-0279
> >
> > A learning experience is one of those things that says, 'You know that
> > thing you just did? Don't do that.’
> > --Douglas Adams
> >
> > -Original Message-
> > From: Koha [mailto:koha-boun...@lists.katipo.co.nz] On Behalf Of Katrin
> > Sent: Monday, November 21, 2016 4:09 PM
> > To: koha@lists.katipo.co.nz
> > Subject: Re: [Koha] New opac detail view for records
> >
> > Hi Pedro,
> >
> > the normal view can be changed by creating your own XSLT file and
> > activating it using the OPACXSLTDetailsDisplay and XSLTDetailsDisplay
> > system preferences.
> >
> > The ISBD view can be changed via the OPACISBD and ISBD system
> preferences.
> >
> > I would recommend not to change the templates or to create your own if
> > you can avoid it, as it is likely to give you headaches when updating to
> > a newer version later on. It's safer to use the preferences, CSS and
> > jQuery.
> >
> > What is the fourth view you want to implement?
> >
> > Hope this helps,
> >
> > Katrin
> >
> > On 10.11.2016 19:32, Pedro Amorim wrote:
> > > Hello all,
> > >
> > > I'm just about to start diving into the core and figure out how the
> three
> > > different opac detail views are implemented (normal, marc and ISBD) so
> I
> > > can implement my own custom detail view and add a fourth one.
> > > However, and before I start doing so, I was wondering if anyone has
> > > done/attempted this before or at least if someone has some knowledge on
> > > what template files should I start and how to go about it.
> > >
> > > Thanks as always,
> > >
> > > Pedro Amorim
> >
> > --
> > MedStar Health is a not-for-profit, integrated healthcare delivery
> system,
> > the largest in Maryland and the Washington, D.C., region. Nationally
> > recognized for clinical quality in heart, orthopaedics, cancer and GI.
> >
> > IMPORTANT: This e-mail (including any attachments) may contain
> information
> > that is private, confidential, or protected by attorney-client or other
> > privilege. If you received this e-mail in error, please delete it from
> your
> > system without copying it and notify sender by reply e-mail, so that our
> > records can be corrected... Thank you.
> >
> > Help conserve valuable resources - only print this email if necessary.
> >
> >
> >
>
>
>
> --
> Elaine Bradtke
> Data Wrangler
> VWML
> English Folk Dance and Song Society | http

[Koha] What is the point of the Undo Import into catalogue feature?

2016-12-05 Thread Raymund Delahunty
That is exactly the way I have used the feature many times in the past- well, 
once to correct an incorrectly modified batch a few moments after the data was 
loaded, when I spotted a second necessary correction I had missed (much to my 
annoyance), and maybe 2 dozen times to rip out previously loaded (often large) 
batches of records weeks after they had been loaded, once the data was no 
longer needed.

The problem isn’t that I don’t understand how to use the feature (indeed I love 
it) but my recent experience suggests it doesn’t work correctly.

Maybe 2 months ago I loaded 11,300 records for a patron-driven-acquisitions 
program. They were records for streaming media supplied by Kanopy. We bought 85 
of the titles. I was asked to remove the records for the titles we had not 
purchased. I expected it to be a simple task- press Undo Import, and pull all 
the records out, and add the 85 records purchased back in (I had been supplied 
with a file of our purchases).

After the job completed, I searched on Kanopy in both staff side and OPAC and 
found c.250 results, much to my surprise. (I expected to find zero records.) 
When I clicked on the MARC 710 I was taken to a list of c.7,200 records via (in 
effect) the authority record for Kanopy (Firm). I could also navigate to that 
list on OPAC.

However these tendrils were “faulty”. Clicking on the title (for example) in 
OPAC, took the user to a server error 404- Page not found. (But if they clicked 
on the 856 they were taken to the Kanopy platform).  In staff-side, clicking on 
the record resulted in an error “Record not found” (well, something similar). 
Attempting to export the MARC file by biblionumber resulted in an empty export 
file. (I have supplied my screenshots and notes directly to Fred King, who was 
asking what rubbish was left after my Undo).

In short, my Undo had NOT worked properly. The database had to be re-indexed to 
get rid of the tendrils. I have another “Undo” scheduled (two files, one of 
2,000 records, one of 30,000 records). I am tempted to have one more try, 
before raising this issue in Bugzilla. I suspect the Undo will not work, and 
that yet again we will have to have a re-index. And that in future we will have 
to rely on Batch Record Deletion, rather than the delightful Undo feature.

Ray Delahunty
University of the Arts London


From: Joy Nelson [mailto:j...@bywatersolutions.com]
Sent: 02 December 2016 14:53
To: Raymund Delahunty 
Cc: Koha 
Subject: Re: [Koha] What is the point of the Undo Import into catalogue feature?

Ray-
The undo import feature has saved me more than once as I realize immediately 
after import that I have incorrectly modified the batch of records.  
Immediately reverting the batch is useful.
The other main use for the undo import feature is in ebooks as Fred King talks 
about in the other thread.
Thanks
joy

On Thu, Dec 1, 2016 at 10:26 PM, Raymund Delahunty 
mailto:r.delahu...@arts.ac.uk>> wrote:
We regularly import large files of MARC records into our Koha (16.05) database 
which have to be deleted at a later date… sometimes months later, and sometimes 
maybe 30,000 records. I have been using the Undo import (sort-of “unstage”) as 
I found this functionality astoundingly useful. It automated a task, reducing a 
tedious job to a couple of keystrokes.

However I was dismayed to find that after a recent “Undo” of 13,000 records our 
database was left with over 7,000 “phantom records”: they didn’t exist, but the 
indexing had failed to remove all traces of them.

We were advised to use the batch record deletion tool, as the Undo feature 
wasn’t designed to be used in the way I was using it. “… it is meant to unstage 
records nearer to the point in time of being added”. (And what’s the point of 
that?) We had to have our database re-indexed to resolve the problem. Is there 
any point in the Undo feature if the indexer can’t cope? I hate to think what 
other dross I had left behind in earlier “Undos”!

Ray Delahunty
University of the Arts London
This email and any attachments are intended solely for the addressee and may 
contain confidential information. If you are not the intended recipient of this 
email and/or its attachments you must not take any action based upon them and 
you must not copy or show them to anyone. Please send the email back to us and 
immediately and permanently delete it and its attachments. Where this email is 
unrelated to the business of University of the Arts London or of any of its 
group companies the opinions expressed in it are the opinions of the sender and 
do not necessarily constitute those of University of the Arts London (or the 
relevant group company). Where the sender's signature indicates that the email 
is sent on behalf of London Artscom Limited the following also applies: London 
Artscom Limited is a company registered in England and Wales under company 
number 02361261. Registered Office: University of the Arts London, 272 High 
Holborn, London WC1V 7EY

Re: [Koha] Wide character error

2016-12-05 Thread Jonathan Druart
This is a bug; I have opened a new bug report and will try to attach a
patch soon.
See bug 17096.
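For readers hitting the same trace: the underlying failure class is passing text containing non-ASCII ("wide") characters into a digest routine that expects bytes. In Perl the usual fix is to encode the string (for example with Encode::encode_utf8) before it reaches Digest::HMAC. The shell sketch below is only an illustration of the general point, not the Koha patch: once the text is definite UTF-8 bytes, the HMAC is well-defined and reproducible. It assumes openssl is installed.

```shell
# Hedged illustration (not the Koha patch): HMAC operates on bytes, so
# text with non-ASCII characters must reach the digest routine as a
# definite byte encoding (UTF-8 here). openssl's -hmac option stands in
# for Digest::HMAC; the key "session-secret" is made up.
DIGEST=$(printf '%s' 'François' \
  | openssl dgst -sha256 -hmac 'session-secret' \
  | awk '{print $NF}')
echo "$DIGEST"
```

The same input bytes always produce the same 64-hex-character SHA-256 HMAC; the "Wide character" warning appears precisely when the input is still a character string rather than bytes.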

On Sun, 4 Dec 2016 at 11:05 Abdulsalam Yousef  wrote:

> Hello all,
>
> We have an error appears during navigation through ( patron pages) in koha
> 16.11
> for example when trying to enter a patron details page from koha staff
> client
> like (
>
> http://domain_name:8080/cgi-bin/koha/members/moremember.pl?borrowernumber=2953
> )
> page, It gives us this error (
>
> Wide character in subroutine entry
> at /usr/share/perl5/Digest/HMAC.pm line 63.)
>
> Also, The same error appears when trying to open OPAC
> (http://domain_name/cgi-bin/koha/opac-memberentry.pl) page.
>
> Thanks in advance
>
> Regards.
>