Re: [OPEN-ILS-DEV] Call for nominations for release manager — Evergreen 3.0

2017-03-21 Thread Dan Wells
Dear Evergreen Community,

I am writing to offer myself as release manager for the upcoming 3.0 release.

Our library is an independent academic installation and, having recently 
celebrated 7 years on Evergreen, was a fairly early adopter.  I served as release 
manager for 2.5 and 2.6, release cycles distinguished by virtual "medals" 
(which, if I am elected, will certainly be making a comeback), and stirring 
lyrical ballads about "2.next" (which certainly will not be).  I was also the 
buildmaster and am the current release maintainer for 2.11.

If elected, my primary focus will be full support for our express goal of a 
fully-featured web client for 3.0.  A particular emphasis will be a 
continuation of Kathy's efforts to push for greater consistency of the user 
experience while still being ever mindful of the particularities of any given 
feature.

My secondary point of emphasis will be on non-functional yet valuable changes 
to the codebase, and this falls into two main areas.  First, I would like to 
spearhead an effort to trim away dead (or dying) and deprecated code.  I 
greatly appreciate recent efforts in this area, but there is more to be done, 
and not just the XUL client.  While some things can be removed outright, 
another useful strategy will be simply increasing log messages (e.g. "XYZ is 
deprecated, consider using ABC instead."), or, simpler still, at least adding 
code comments when a method (or entire file) has been superseded.  This last 
idea leads directly to the second piece of this emphasis, and that is a general 
increase in code comments.  In particular, I will seek volunteers to work 
through the service code and add, at a minimum, a header explaining broadly 
what the contained code is for.
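
(To make the log-message idea concrete, here is a rough sketch only, written as 
a generic Python illustration rather than in the Perl most of our service code 
actually uses, and with made-up function names:)

import warnings

def new_lookup(record_id):
    ...  # the current, supported implementation

def old_lookup(record_id):
    """Deprecated: superseded by new_lookup()."""
    warnings.warn("old_lookup is deprecated, consider using new_lookup instead.",
                  DeprecationWarning, stacklevel=2)
    return new_lookup(record_id)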

These goals are being informed and inspired by ongoing work here at Calvin to 
document and holistically understand Evergreen in preparation for our 
conference presentation and beyond.  External and internal documentation are 
two separate goals which ultimately will work better when working together.

I know from experience that an RM has limited ability to spur actual new 
developments.  We each must work to scratch our own itches and those of our 
institutions, so I will briefly mention here a few areas where I intend to 
focus my own development efforts in the next cycle.  First, for too long I've 
let a few of our local billing improvements languish, so I will double my 
efforts to produce branches for community review.  Second, as one of the 
original contributors to the serials module, I intend to offer whatever help I 
can in smoothing the transition of those interfaces into the web staff client.

Thank you for your consideration.

Sincerely,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133


Re: [OPEN-ILS-DEV] PostgreSQL version support

2016-03-03 Thread Dan Wells
+1, support this 100%.  Thank you, Galen.

Dan

-Original Message-
From: Open-ils-dev [mailto:open-ils-dev-boun...@list.georgialibraries.org] On 
Behalf Of Galen Charlton
Sent: Wednesday, March 02, 2016 10:16 PM
To: Evergreen Development Discussion List 

Subject: [OPEN-ILS-DEV] PostgreSQL version support

Hi,

I mentioned this in the email I sent just now, but this may be worth a separate 
thread.

I'd like to call folks' attention to the discussion at the end of the following 
bug:

https://bugs.launchpad.net/evergreen/+bug/1505286

I had inadvertently introduced an idiom that doesn't work in Pg 9.1.
While it's easy to resolve that and make the stored procedures compatible with 
9.1, the situation sparked a discussion of whether to deprecate support for 9.1.

I encourage folks to read the discussion in the bug, but my position is that we 
should:

* announce a deprecation of 9.1 with the release of Evergreen 2.10
* officially de-support 9.1 with the release of Evergreen 2.11 in the fall.

Thoughts?

Galen
--
Galen Charlton
Infrastructure and Added Services Manager
Equinox Software, Inc. / The Open Source Experts
email:  g...@esilibrary.com
direct: +1 770-709-5581
cell:   +1 404-984-4366
skype:  gmcharlt
web:http://www.esilibrary.com/
Supporting Koha and Evergreen: http://koha-community.org & 
http://evergreen-ils.org


Re: [OPEN-ILS-DEV] Library card number via OSRF gateway?

2016-01-21 Thread Dan Wells
Hello Ken,

Unfortunately, I think your average patron won’t have access through PCRUD even 
to their own data.  You get empty payloads in PCRUD if you make a request you 
don’t have permission to view.

You can use the user API instead, as it has a method to retrieve the fleshed 
user with an exception to allow any user to view their own data.  Something 
like:

https://ulysses.calvin.edu/osrf-gateway-v1?service=open-ils.actor&method=open-ils.actor.user.fleshed.retrieve&param=%22AUTHTOKEN%22&param=USER_ID
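
For anyone scripting against the gateway, here is a minimal sketch of the same 
call in Python.  The hostname, the use of the requests library, and the error 
handling are my own assumptions for illustration; the payload/status envelope 
matches the gateway responses quoted below.

import json
import requests

GATEWAY = "https://ulysses.calvin.edu/osrf-gateway-v1"  # assumed host

def fleshed_user(authtoken, user_id):
    # Repeated "param" keys; each OpenSRF parameter is JSON-encoded.
    params = [
        ("service", "open-ils.actor"),
        ("method", "open-ils.actor.user.fleshed.retrieve"),
        ("param", json.dumps(authtoken)),
        ("param", json.dumps(user_id)),
    ]
    resp = requests.get(GATEWAY, params=params)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != 200 or not body.get("payload"):
        raise RuntimeError("gateway call failed: %r" % body)
    return body["payload"][0]  # the fleshed user object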

Sincerely,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: Open-ils-dev [mailto:open-ils-dev-boun...@list.georgialibraries.org] On 
Behalf Of Ken Cox
Sent: Wednesday, January 20, 2016 8:22 PM
To: Evergreen Development Discussion List 

Subject: Re: [OPEN-ILS-DEV] Library card number via OSRF gateway?

Thank you for the quick responses.  But I am missing something important about 
calling these methods through the gateway.

I seem to have both the user ID and the card ID in the response from 
open-ils.auth.session.retrieve.  But when I try calling 
open-ils.pcrud.search.ac with either 
{"id":cardID} or {"usr":userID} I get an empty payload in response.

Can you spot what I am doing wrong?

Here are the redacted URLs I tried:

https://catalog.cwmars.org/osrf-gateway-v1?service=open-ils.auth&method=open-ils.auth.session.retrieve&param=%22authtoken%22
  ->  "usrname": "coxken", ... "id": 409071, ... "card": 1344653

https://catalog.cwmars.org/osrf-gateway-v1?service=open-ils.pcrud&method=open-ils.pcrud.search.ac&param=%22authtoken%22&param=%7B%22id%22:1344653%7D
  -> {"payload":[],"status":200}

https://catalog.cwmars.org/osrf-gateway-v1?service=open-ils.pcrud&method=open-ils.pcrud.search.ac&param=%22authtoken%22&param=%7B%22usr%22:409071%7D
  -> {"payload":[],"status":200}


On Wed, Jan 20, 2016 at 10:44 AM, Bill Erickson wrote:

On Wed, Jan 20, 2016 at 10:23 AM, Bill Erickson wrote:

On Wed, Jan 20, 2016 at 10:18 AM, Thomas Berezansky wrote:
As a note, I believe that PCRUD search will also return all non-primary and 
non-active cards.

Indeed that's true when fleshing the "cards" (virtual) field on the user.  When 
fleshing "card", you get the primary/active card.

Oh, right...  If you search for cards by user ID, then it will return all 
cards, active or otherwise for the user.  (I was thinking about fleshing cards 
on the user).  Thanks for pointing that out, Thomas.  Apologies for the 
misleading response.

-b





--
-Ken


Re: [OPEN-ILS-DEV] Questions about security bugs/process

2015-03-13 Thread Dan Wells
 I think I can highlight another area where we are assuming different things.
We probably have different thoughts on what kinds of activity can lead to
a solution. It's not just coding, testing, or helping to publicize the security
release. It might be putting pressure on a support provider or on a developer
community to get the problem fixed. It could be pressuring the EOB to finally
gain some traction in setting up an emergency fund that could be used for
security bugs.

If someone is willing to do any or all of that, then I’m assuming they are not 
joining “simply as a means of gaining early access to information about 
security vulnerabilities in Evergreen.”

So… they’re in!  ☺

Dan



Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: Open-ils-dev [mailto:open-ils-dev-boun...@list.georgialibraries.org] On 
Behalf Of Kathy Lussier
Sent: Friday, March 13, 2015 5:03 PM
To: open-ils-dev@list.georgialibraries.org
Subject: Re: [OPEN-ILS-DEV] Questions about security bugs/process

Thank you for your thoughts Dan.

To be clear, I don't think people are being excluded as a punishment, and I 
understand perfectly that the intent is to decrease overall risk. I also want 
you to know I'm not trying to assign blame on the time it takes to fix security 
bugs. We are a small community with a lot of work on our hands.

I'm also not advocating for broad alerts to the entire community regarding 
security problems. I guess what I'm really saying is that, if a trusted person 
from an Evergreen site wants to apply for access to the LP security bugs simply 
to gain early access to information about security vulnerabilities in 
Evergreen, they should be allowed to do so. I'm guessing there would continue 
to be some have-nots out there because they never took the trouble to apply 
for access. But the option would be available to them if they wanted to use it.


Second, by calling them “haves” and “have-nots”, this response seems to 
indicate that there is some intrinsic advantage to knowing about a security 
bug.  I would argue that knowing about a bug and not being able to do anything 
about it is actually a burden.

Oh, sure. Knowledge quite often is a burden for people, but I would argue that 
information that places a burden on people is quite often the information that 
is the most important for them to have, even if it causes some discomfort.

I think I can highlight another area where we are assuming different things. We 
probably have different thoughts on what kinds of activity can lead to a 
solution. It's not just coding, testing, or helping to publicize the security 
release. It might be putting pressure on a support provider or on a developer 
community to get the problem fixed. It could be pressuring the EOB to finally 
gain some traction in setting up an emergency fund that could be used for 
security bugs. It could be the thing that galvanizes someone to get more 
involved in the community because they don't want to feel that burden anymore.

I want more people to feel that burden. It shouldn't just be placed on the 
shoulders of our small developer community.

Have a nice weekend everyone!

Kathy
On 03/13/2015 02:32 PM, Dan Wells wrote:
Hello Kathy,

I appreciate the concern expressed here, and struggled a lot with how to 
respond to the overall security problems.  Perhaps it will help to highlight a 
few places where we are assuming different things.

First, this response seems to assume that the average library will be able to 
mitigate security issues if they know about them.  In some limited cases this 
might be true (e.g. don’t use credit card payments if the settings can be 
compromised), but the majority of bugs will not be simple to work around.  It 
might make sense in some instances for the security team to put out a broad but 
nonspecific alert which says something like “a vulnerability exists in XYZ, 
please disable the feature if possible until a fix is developed”, but even that 
must be weighed against inviting increased scrutiny and malicious intent.

Second, by calling them “haves” and “have-nots”, this response seems to 
indicate that there is some intrinsic advantage to knowing about a security 
bug.  I would argue that knowing about a bug and not being able to do anything 
about it is actually a burden.

The goal of the security policy is to attempt to lower the overall risk to 
every interested party.  It tries to include anyone the team is confident can 
and will help, and exclude everyone else, not as a punishment, but as the most 
viable and available means to actually *decrease* their overall risk.

All that said, I do think we can try harder to err on the side of being more 
open, and the team is currently reevaluating some of the criteria for what kind 
of bug truly benefits from limited exposure and what does not.

Sincerely,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: Open-ils-dev

[OPEN-ILS-DEV] Fine Generation Code Question

2014-11-21 Thread Dan Wells
Hello all,

I've been mucking about in the fine code and getting a bit frustrated, so I'm 
looking for input before I move ahead.

As it stands, fine generation is part of the storage API.  As such, when it is 
called during checkin, it runs in its own transaction (independent of the 
circ/copy transaction).  The issue is that, increasingly, various pieces of the 
circ code also handle fines, particularly the lost-item-related code, and 
any code related to voiding/zeroing balances.

I think we have two (not necessarily exclusive) paths to move forward:


1)  Move the fine generation code into open-ils.circ and rewrite the 
necessary bits to use cstore.  This would at least bring in the possibility of 
having all the checkin-time fine handling done in the same transaction, and 
thus give us a lot of flexibility in how we structure things.

2)  Refactor the code so that more (or all) fine handling (not just 
generation, but lost bill processing, voiding, etc.) happens *after* the 
checkin transaction rather than during.

I'm leaning toward #1 under the hope that it would mean fewer changes overall, 
but there are certainly merits to #2.  Does anyone see anything in particular 
to suggest we should instead pursue #2 exclusively?

Thanks for any input.

Sincerely,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Fine Generation Code Question

2014-11-21 Thread Dan Wells
Hello Liam,

Thanks for the input.  One thing I do want to clarify is that fine generation 
even in case #1 would still be functionally independent of the checkin code.  
(It has to be, since fine generation for most installations is set to happen 
continually, or at least daily.)  Plan #1 would simply open up the possibility 
of doing the checkin/fine processes in a single transaction.

The main problem with plan #2 is not the generation component of fine handling, 
but all of the other possible alterations and adjustments to fines which can 
happen depending on the checkin conditions.  I do agree that #2 is better from 
a design perspective, but it is quite a bit more complex than your outline 
suggests, and it would (in my opinion) be much safer to approach that design 
iteratively, which is what I hope doing #1 first would allow us to do.

Dan


[OPEN-ILS-DEV] Suggestion for New Upgrade Script Policy

2014-05-30 Thread Dan Wells
Hello all,

I know that eventually we want to move toward a more fluid DB upgrade model, 
but leaving that plan aside for the moment, I'd like to suggest a new policy 
for maintenance releases.

My sense is that, as Evergreen matures, people are staying on the previous 
version for longer periods, so it is becoming increasingly likely that the 
one-time upgrade moment (e.g. 2.5.3-2.6.0) won't be what most people need.  I 
propose that each maintenance release include two scripts instead of just the 
one we do now.  For maintenance releases, we currently generate only an upgrade 
script from minor to minor within the same major release (e.g. 2.5.4 - 2.5.5 
and 2.6.0 - 2.6.1).  For completeness, though, we really also need a minor to 
minor upgrade *across* the major releases (e.g. 2.5.5 - 2.6.1).

Doing this would mean more work for the maintainers, but I think it would be 
worth it, since newest to newest is the most sensible upgrade path at any given 
time.  If we start doing this, I would also suggest that maintenance releases 
be staggered by one week to minimize potential churn (i.e. 2.5.5 should be 
fully settled before we try to make a 2.5.5-2.6.1 script).  The alternative 
would be to make, in this case, a 2.5.4-2.6.1 upgrade, but that would mean the 
cross-version script would always lag behind.

So, this is really two proposals to vote/comment on:

1) Should we add a new maintenance requirement of creating minor-to-minor 
scripts across major versions with every release?
2) Should we stagger maintenance releases by one week, ordered by oldest to 
newest maintained release?  (e.g. 2.5.x first, 2.6.x a week later)

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

2014-05-22 Thread Dan Wells
In the default configuration, Evergreen logs Apache errors via syslog, not the 
Apache error log.  You can either figure out where your syslog is logging to 
(sometimes defaults to /var/log/syslog), or you can adjust your Apache config to 
use the error log you are used to.  Once you find the error message, feel free 
to post it if it doesn't make sense to you.

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Thursday, May 22, 2014 11:17 AM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

You have been amazing, thank you! The modified script ran with no issues after 
issuing the INSERT INTO command suggested. I only needed to modify one 
additional script (2.5.0-2.5.1-upgrade-db.sql) in order to complete the DB 
upgrade process. 

It looks like I am almost there with a complete upgrade. After running all 
reingest scripts, finishing the last few upgrade steps, and starting all the 
services, I can successfully connect via the staff client (either directly to 
the test server IP or the subdomain.domain.org format) and search patrons, 
books, etc. with no apparent issues. From the web, though, I can view the 
initial web landing page for our individual branches, log in to my account and 
look through account settings. As soon as I try to perform an actual search, I 
get a 500 Internal Server Error (screen shot attached).

So far I haven't been able to find anything in any of the logs that would 
indicate the cause and the error log for Apache is empty...

Thanks again, the assistance is greatly appreciated.

Jesse McCarty
City of Burlington
IT Technical Assistant

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan Wells
Sent: Wednesday, May 21, 2014 10:50 AM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Huh.  It's hard for me to guess how you might be missing that, but that's 
something which can be sorted out later if needed.  In the meantime, you'll 
want to recreate it with something like this:

INSERT INTO config.metabib_field ( id, field_class, name, label, format, xpath) 
VALUES
(30, 'identifier', 'lccn', 'LC Control Number', 'marcxml', 
$$//marc:datafield[@tag='010']/marc:subfield[@code='a' or @code='z']$$);

We won't worry about actually populating the metabib.identifier_field_entry 
table, since you'll probably end up reingesting for 2.5 anyway.  Otherwise, a 
quick and dirty population would be something like:

INSERT INTO metabib.identifier_field_entry(source, field, value)
SELECT record, 30, value FROM metabib.full_rec WHERE tag = '010' AND 
subfield IN ('a', 'z');

Let us know how it goes.

Thanks,
Dan

Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Wednesday, May 21, 2014 1:10 PM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Thanks again for the great information. This particular upgrade has been quite 
the learning experience (Thank God for Virtual Test servers and snapshots!) and 
much more involved than the 2.4.0-2.4.4 upgrade I performed and the 
2.4.4-2.4.7 test upgrade.

To run the commands listed, I opened a PostgreSQL terminal (typing psql from 
the command line) and then ran the command: SELECT id FROM config.metabib_field 
WHERE field_class = 'identifier' AND name = 'lccn';

This resulted in the following output:

id

(0 rows)

Thanks again, the help is very much appreciated.

Jesse McCarty
City of Burlington
IT Technical Assistant

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan Wells
Sent: Wednesday, May 21, 2014 9:27 AM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Hello Jesse,

There isn't really such a thing as a 'fresh install' when talking about the DB 
side.  But yes, DB upgrades that fall outside the one supported path are 
essentially expected to be handled in a custom, case-by-case fashion.  Another 
factor to consider is that the farther you upgrade from the 'expected' upgrade 
path, the more likely you will have additional conflicts in even the smaller 
point-release scripts.  For instance, if you are at 2.4.7, you would likely 
have conflicts with not only the 2.4.3-2.5.0 script, but also the 2.5.0-2.5.1 
or the 2.5.1-2.5.2 scripts, etc.  It's all going to depend

Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

2014-05-21 Thread Dan Wells
Hello Jesse,

There isn't really such a thing as a 'fresh install' when talking about the DB 
side.  But yes, DB upgrades that fall outside the one supported path are 
essentially expected to be handled in a custom, case-by-case fashion.  Another 
factor to consider is that the farther you upgrade from the 'expected' upgrade 
path, the more likely you will have additional conflicts in even the smaller 
point-release scripts.  For instance, if you are at 2.4.7, you would likely 
have conflicts with not only the 2.4.3-2.5.0 script, but also the 2.5.0-2.5.1 
or the 2.5.1-2.5.2 scripts, etc.  It's all going to depend on timing of the 
various fixes, and which version of each line they made it into.

As I said before, this is a known issue, and the most likely solution will be 
getting rid of the packaged scripts entirely, and having each upgrade be 
smart enough to check for and apply the individual upgrades it requires.  Many 
of the pieces to support this workflow are already in place, and it's mostly a 
matter of finding time to work out the details.
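
(As a purely hypothetical sketch of that idea, not existing code: the directory 
layout, connection details, and psql invocation below are assumptions, and 
config.upgrade_log is the table I believe the block-check function already 
consults.  The point is just to apply whatever numbered scripts are missing.)

import glob
import os
import re
import subprocess

import psycopg2  # assumed driver, for illustration only

UPGRADE_DIR = "Open-ILS/src/sql/Pg/upgrade"  # assumed location of numbered scripts

conn = psycopg2.connect("dbname=evergreen user=evergreen")
with conn, conn.cursor() as cur:
    cur.execute("SELECT version FROM config.upgrade_log")
    applied = {row[0] for row in cur}

for path in sorted(glob.glob(os.path.join(UPGRADE_DIR, "*.sql"))):
    m = re.match(r"(\d{4})\.", os.path.basename(path))
    if m and m.group(1) not in applied:
        # Each script records its own version in config.upgrade_log when it runs.
        subprocess.check_call(["psql", "-U", "evergreen", "-f", path])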


Now, for your specific upgrade issue, your install does not have the 'lccn' 
metabib_field entry in the expected place (id 30).  I'm not sure why this is, 
but the history of the table is a little messy due to us not originally 
reserving the lower IDs for system use.  This was corrected at some point in a 
'best effort' fashion, but there are many cases where these entries could be 
out of place.  Since it appears you have nothing with id = 30, your best bet is 
to find your 'lccn' entry (assuming it exists):

SELECT id FROM config.metabib_field WHERE field_class = 'identifier' AND name = 
'lccn';

then update it back to 30:

BEGIN;
UPDATE config.metabib_field SET id = 30 WHERE id = [id you found, no brackets];
UPDATE metabib.identifier_field_entry SET field = 30 WHERE field = [id you 
found, no brackets];
COMMIT;

I think any other key relationships will cascade, so if the above works, then 
your 2.4.4-2.5.0 script should now work as well.

Good luck,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Wednesday, May 21, 2014 11:09 AM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Thanks for the additional information Dan. Is this going to carry over when 
trying to upgrade to the 2.6 branch (or 2.7 etc...) as well? Seems like 
everyone on 2.4.4+ would be stuck needing to do a fresh install to upgrade then?

I have modified the 2.4.3-2.5.0-upgrade-db.sql script several times trying to 
flush out all the errors and get a clean run to proceed with the next step, but 
continue to have issues come up and am at a loss of where to go next (I do have 
VM snapshots to revert to and try again). So far where I am now is detailed 
below with attached log files and modified upgrade-db.sql file for reference.

After running into the initial error I commented out lines #592 & 593 and 
re-ran the script using the command psql -U evergreen -f 
2.4.3-2.5.0-upgrade-db.sql > /tmp/log.apply243-250_2 2>&1 (command provided 
by Galen in a previous response). The resulting file (log.apply243-250_2.rtf) 
is attached with an added .rtf file extension. This produced more errors in the 
file, so next I commented out lines #12126 through #12133 and re-ran the script, 
which produced the results shown in log.apply243-250_3 (also attached as an .rtf file).

After more errors, I commented out lines #12148 through #12167 and re-ran the 
script, which produced the results detailed in log.apply243-250_4.rtf.

Again, more errors. This time commenting out lines #15791 through #15869 and 
then ran the script again producing log.apply243-250_5.rtf

After running and seeing still more errors, I commented out lines #15872 
through #15961 to produce log.apply243-250_6.rtf (Screenshot of the error 
portion also attached).

I am stuck here; the earlier error messages pointed to parts of the script I 
could comment out, whereas the latest error has to do with table 
modifications. I have also attached the 2.4.3-2.5.0-upgrade-db.sql file 
(renamed to 2.4.4-2.5.0-upgrade-db.sql) that I have modified during this process.

Thanks for all the help.

Jesse McCarty
City of Burlington
IT Technical Assistant


-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan Wells
Sent: Tuesday, May 20, 2014 10:56 AM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Hello Jesse,

You want to comment everything from the failing "SELECT 
evergreen.upgrade_deps_block_check..." up to (but not including) the next 
"SELECT evergreen.upgrade_deps_block_check...".  You may need to do this several 
times before you

Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

2014-05-21 Thread Dan Wells
Huh.  It's hard for me to guess how you might be missing that, but that's 
something which can be sorted out later if needed.  In the meantime, you'll 
want to recreate it with something like this:

INSERT INTO config.metabib_field ( id, field_class, name, label, format, xpath) 
VALUES
(30, 'identifier', 'lccn', 'LC Control Number', 'marcxml', 
$$//marc:datafield[@tag='010']/marc:subfield[@code='a' or @code='z']$$);

We won't worry about actually populating the metabib.identifier_field_entry 
table, since you'll probably end up reingesting for 2.5 anyway.  Otherwise, a 
quick and dirty population would be something like:

INSERT INTO metabib.identifier_field_entry(source, field, value)
SELECT record, 30, value FROM metabib.full_rec WHERE tag = '010' AND 
subfield IN ('a', 'z');

Let us know how it goes.

Thanks,
Dan

Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Wednesday, May 21, 2014 1:10 PM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Thanks again for the great information. This particular upgrade has been quite 
the learning experience (Thank God for Virtual Test servers and snapshots!) and 
much more involved than the 2.4.0-2.4.4 upgrade I performed and the 
2.4.4-2.4.7 test upgrade.

To run the commands listed, I opened a PostgreSQL terminal (typing psql from 
the command line) and then ran the command: SELECT id FROM config.metabib_field 
WHERE field_class = 'identifier' AND name = 'lccn';

This resulted in the following output:

id

(0 rows)

Thanks again, the help is very much appreciated.

Jesse McCarty
City of Burlington
IT Technical Assistant

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan Wells
Sent: Wednesday, May 21, 2014 9:27 AM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Hello Jesse,

There isn't really such a thing as a 'fresh install' when talking about the DB 
side.  But yes, DB upgrades that fall outside the one supported path are 
essentially expected to be handled in a custom, case-by-case fashion.  Another 
factor to consider is that the farther you upgrade from the 'expected' upgrade 
path, the more likely you will have additional conflicts in even the smaller 
point-release scripts.  For instance, if you are at 2.4.7, you would likely 
have conflicts with not only the 2.4.3-2.5.0 script, but also the 2.5.0-2.5.1 
or the 2.5.1-2.5.2 scripts, etc.  It's all going to depend on timing of the 
various fixes, and which version of each line they made it into.

As I said before, this is a known issue, and the most likely solution will be 
getting rid of the packaged scripts entirely, and having each upgrade be 
smart enough to check for and apply the individual upgrades it requires.  Many 
of the pieces to support this workflow are already in place, and it's mostly a 
matter of finding time to work out the details.


Now, for your specific upgrade issue, your install does not have the 'lccn' 
metabib_field entry in the expected place (id 30).  I'm not sure why this is, 
but the history of the table is a little messy due to us not originally 
reserving the lower IDs for system use.  This was corrected at some point in a 
'best effort' fashion, but there are many cases where these entries could be 
out of place.  Since it appears you have nothing with id = 30, your best bet is 
to find your 'lccn' entry (assuming it exists):

SELECT id FROM config.metabib_field WHERE field_class = 'identifier' AND name = 
'lccn';

then update it back to 30:

BEGIN;
UPDATE config.metabib_field SET id = 30 WHERE id = [id you found, no brackets];
UPDATE metabib.identifier_field_entry SET field = 30 WHERE field = [id you 
found, no brackets];
COMMIT;

I think any other key relationships will cascade, so if the above works, then 
your 2.4.4-2.5.0 script should now work as well.

Good luck,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Wednesday, May 21, 2014 11:09 AM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Thanks for the additional information Dan. Is this going to carry over when 
trying to upgrade to the 2.6 branch (or 2.7 etc...) as well? Seems like 
everyone on 2.4.4+ would be stuck needing to do a fresh install to upgrade then?

I have modified the 2.4.3-2.5.0-upgrade-db.sql script several times trying to 
flush out all the errors and get a clean run to proceed

Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

2014-05-20 Thread Dan Wells
Hello Jesse,

You want to comment everything from the failing "SELECT 
evergreen.upgrade_deps_block_check..." up to (but not including) the next 
"SELECT evergreen.upgrade_deps_block_check...".  You may need to do this several 
times before you find them all.

Those checks exist precisely to cause the upgrade to fail if a piece of the 
script has already been applied.  Commenting out only these check lines *might* 
work (depending on the content of that section), but is certainly not 
recommended.

Also, the root of this problem is that there is no supported way to upgrade 
from 2.4.4+ to 2.5.0.  This is a known issue with no simple fix, other than to 
say the 2.x maintainer needs to make a new upgrade script for every 2.(x-1) 
release.  I am not outright opposed to that, but up to this point nobody has 
argued that the benefits would justify the cost.

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jesse 
McCarty
Sent: Tuesday, May 20, 2014 11:19 AM
To: 'Evergreen Development Discussion List'
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Thanks Martha,

How much is there to comment out? Is it just the single line(s) that reads:

SELECT evergreen.upgrade_deps_block_check('0841', :eg_version);
SELECT evergreen.upgrade_deps_block_check('0842', :eg_version);

Thanks again,

Jesse McCarty
City of Burlington
IT Technical Assistant

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Martha 
Driscoll
Sent: Monday, May 19, 2014 2:30 PM
To: open-ils-dev@list.georgialibraries.org
Subject: Re: [OPEN-ILS-DEV] Errors Upgrading Evergreen 2.4.4 to 2.5.4

Jesse,
I found that upgrade scripts 0841 and 0842, which are contained in the 
2.4.3-2.5.0 upgrade script, were already applied in the upgrade to 2.4.4. 
I commented out those two parts of the upgrade script.

Martha Driscoll
Systems Manager
North of Boston Library Exchange
Danvers, Massachusetts
www.noblenet.org

On 5/19/2014 1:31 PM, Jesse McCarty wrote:
 Hi,


 I am going through the process of upgrading our 2.4.4 installation of 
 Evergreen to 2.5.4. Currently running through the install on a test 
 server (which is an exact copy of our production server as it sat 
 several weeks ago, albeit with a different IP Address).

 After upgrading OpenSRF to 2.2.2 and the Evergreen code, I started 
 getting errors when running the update DB scripts. I started by 
 running the 2.4.3-2.5.0-upgrade-db.sql (There was no
 2.4.4-2.5.0-upgrade-db.sql) with more errors than could fit on a 
 screen shot, then ran into additional errors on different update scripts:

 For 2.4.3-2.5.0-upgrade-db.sql errors see attached 243-250Error.JPG 
 (this also shows the reingest records script information, which I ran 
 after running all the DB upgrade scripts).

 For 2.5.0-2.5.1-upgrade-db.sql errors see attached 250-251Error.JPG

 No errors were reported when running the 2.5.1-2.5.2-upgrade-db.sql script.

 For 2.5.2-2.5.3-upgrade-db.sql errors see attached 252-253Error.JPG

 For 2.5.3-2.5.4-upgrade-db.sql errors see attached 253-254Error.JPG


 After starting Evergreen, I can connect via the web browser and log in 
 to my account with no issues, but searching the catalog produces an 
 internal server error. I can also connect to the staff client and 
 register the workstation and test/add the SSL exemption with no issues.
 Once the workstation is registered I get a network error (I can ping 
 the server from the workstation connecting), shown in the attached 
 StaffClientError.JPG file. If I close that error the Staff Client 
 loads and then I can browse patrons and interact with the server OK.

 I have a snapshot of the VM I can revert to that was taken prior to 
 any update scripts running against the DB to work through the process 
 again for further testing if needed.

 Thanks,


 Jesse McCarty

 City of Burlington

 IT Technical Assistant



Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser client update 2014-05-01 / feedback requested on catalog integration

2014-05-01 Thread Dan Wells
Hello Bill,

Thanks for the update.

I know others feel differently, but I don’t think we should rule out using 
iframes for catalog integration.  Iframes are actually pretty close in many 
respects to the way the catalog “integrates” into the current (XUL) staff 
client.  The usual problems with iframes are bookmarking (not really a concern 
the way we would use it), SEO (not a concern here at all), and cross-site 
scripting issues (we should have the same origin, which solves the biggest 
headaches).  If we want to keep things close to the current design, especially 
for phase one, iframes could be our shortest path there.

That said, I believe we could rework some bits of the TPAC (and it might not 
take much at all) to essentially use the same templates to dynamically fetch a 
frameless TPAC “core” for displaying inline in the staff client.

I do also like the idea of having (more or less) a grid-based OPAC (call it the 
gridPAC, of course) for use within staff client interfaces.

Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: open-ils-general-boun...@list.georgialibraries.org 
[mailto:open-ils-general-boun...@list.georgialibraries.org] On Behalf Of Bill 
Erickson
Sent: Thursday, May 01, 2014 10:40 AM
To: Evergreen Discussion Group; Evergreen Development Discussion List
Subject: [OPEN-ILS-GENERAL] browser client update 2014-05-01 / feedback 
requested on catalog integration

Hi All,

Here's another mixed bag of updates:

http://evergreen-ils.org/dokuwiki/doku.php?id=dev:browser_staff:dev_notes

Feedback on catalog integration appreciated.

Thanks!

-b

--
Bill Erickson
| Senior Software Developer
| phone: 877-OPEN-ILS (673-6457)
| email: ber...@esilibrary.com
| web: http://esilibrary.com
| Equinox Software, Inc. / The Open Source Experts



Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser client update 2014-05-01 / feedback requested on catalog integration

2014-05-01 Thread Dan Wells
Struggling with some connectivity issues and lost part of my message.  This:

That said, I believe we could rework some bits of the TPAC (and it might not 
take much at all) to essentially use
the same templates to dynamically fetch a frameless TPAC “core” for displaying 
inline in the staff client.

Should end with:

Of course, remapping the action elements (buttons, forms, links) within the 
included TPAC core would be challenging.

Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133


Re: [OPEN-ILS-DEV] Ejabberd error

2014-04-10 Thread Dan Wells
Hello Glenn,

Have you checked that ejabberd is actually running?  Some subsystems do not (by 
default) restart when the system restarts.

On many distros, you can check the status of ejabberd using 'ejabberdctl 
status' (run as root or use 'sudo' as appropriate).

What Linux distro are you using?

Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of BUNTON, 
GLENN
Sent: Thursday, April 10, 2014 5:31 PM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Ejabberd error

Just checking again. I've had some helpful suggestions from Brent regarding the 
issue mentioned below but still no success.

Before we have to blow the whole thing up and start over again I thought I'd 
check and see if anyone else has any ideas what the cause of this problem might 
be.


From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of BUNTON, 
GLENN
Sent: Wednesday, April 09, 2014 2:50 PM
To: Evergreen Development Discussion List
Subject: [OPEN-ILS-DEV] Ejabberd error

We have Evergreen up and running and the world was good. Then something 
happened that locked things up. Our only recourse was to reboot the system. Now 
when we try to start Evergreen it fails. We then checked ejabberd and find this 
error when we try to start it:

opensrf@sandbox:~$ osrf_ctl.sh -l -a start_all
/openils/bin/osrf_ctl.sh is deprecated.  Use osrf_control instead
Use of uninitialized value $@ in concatenation (.) or string at 
/usr/local/share/perl/5.14.2/OpenSRF/Transport/SlimJabber/Client.pm line 162.
Exception: OpenSRF::EX::Jabber 2014-04-09T13:16:05 
OpenSRF::Transport::SlimJabber::Client 
/usr/local/share/perl/5.14.2/OpenSRF/Transport/SlimJabber/Client.pm:162 Jabber 
Exception: Could not authenticate with Jabber server:

Other than running some system updates we did nothing else. Any ideas or 
suggestions?


Glenn Bunton 
(803) 777-2903
Director of Library Technology Services   
bunto...@mailbox.sc.edu
Thomas Cooper Library
University of South Carolina
Columbia, South Carolina  29208




Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser staff feedback request / integration

2014-04-07 Thread Dan Wells
I'm fine with the decision and consensus, but want to add one thing.  I've met 
a fair number of users who have a difficult time managing multiple windows in 
an ongoing way (call them the "closers").  We obviously don't have any such 
folks responding to this thread, but I think we should be open to such feedback 
(should it come) and possibly reconsider this decision if necessary.

Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Bill 
Erickson
Sent: Monday, April 07, 2014 1:20 PM
To: Evergreen Discussion Group
Cc: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser staff feedback request / 
integration


Agreed on fleshing out modules on a workflow-by-workflow basis as much as 
possible.  This is one area where user testing early in the process can really 
pay off.

So, I think it's safe to say we have a consensus on avoiding the XUL/mixed 
integration path entirely.  From a development perspective, this is certainly a 
relief.

-b

--
Bill Erickson
| Senior Software Developer
| phone: 877-OPEN-ILS (673-6457)
| email: ber...@esilibrary.com
| web: http://esilibrary.com
| Equinox Software, Inc. / The Open Source Experts



Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser staff feedback request / integration

2014-04-07 Thread Dan Wells
Galen, I was deliberately vague in my earlier email, and hope you will 
understand if I continue to be :)

I think it is fine to proceed with a separation mindset and see how it turns 
out.

Thanks,
Dan

Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Galen 
Charlton
Sent: Monday, April 07, 2014 5:52 PM
To: Evergreen Development Discussion List
Cc: Evergreen Discussion Group
Subject: Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] browser staff feedback request / 
integration


To clarify, are you talking about computer users in general, or folks you've 
met who are already using the Evergreen staff client but who you believe would 
not be able to adapt to switching contexts at all?

Regards,

Galen


Re: [OPEN-ILS-DEV] installation problems

2014-04-01 Thread Dan Wells
Hello Glenn,

No, Business::ISBN would not be related to the error you are getting.  I'd 
first try the following command:

osrf_ctl.sh -l -a restart_c

then try running autogen.sh -u once more.  If that doesn't work, try and see if 
the 'restart_c' command is giving any errors in the logs (normally found in 
/openils/var/log/).  Please let us know if you need more specific instructions 
evaluating the logs.

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of BUNTON, 
GLENN
Sent: Tuesday, April 01, 2014 2:35 PM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] installation problems

Thanks Jason. The path comments helped. We seem to be sooo close.

We're now at the autogen command and get the following. Anyone have any ideas? 
We did seem to have a failure ahead of this regarding installation of the 
Business::ISBN perl module. Does that relate to what we see below? Thanks.


opensrf@evergreen-sc:~$ autogen.sh -u
Updating Evergreen organization tree and IDL

Updating fieldmapper
Empty filename at /usr/local/share/perl/5.14.2/OpenILS/Utils/Fieldmapper.pm 
line 199.
 -> /openils/var/web/opac/common/js//fmall.js
Updating web_fieldmapper
Empty filename at /usr/local/share/perl/5.14.2/OpenILS/Utils/Fieldmapper.pm 
line 199.
 -> /openils/var/web/opac/common/js//fmcore.js
Updating OrgTree
Empty filename at /usr/local/share/perl/5.14.2/OpenILS/Utils/Fieldmapper.pm 
line 199.
Empty filename at /usr/local/share/perl/5.14.2/OpenILS/Utils/Fieldmapper.pm 
line 199.
Exception: OpenSRF::EX::Session 2014-04-01T14:05:51 OpenSRF::Transport 
/usr/local/share/perl/5.14.2/OpenSRF/Transport.pm:83 Session Error: 
router@private.localhost/open-ils.cstore IS NOT CONNECTED TO THE NETWORK!!!
opensrf@evergreen-sc:~$



-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Jason 
Stephenson
Sent: Monday, March 31, 2014 3:36 PM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] installation problems

On 03/31/2014 02:52 PM, BUNTON, GLENN wrote:
 Hope someone can assist.
 
  
 
 We have followed the instructions for installing OSRF and Evergreen 
 and have run into a couple of problems that have led to a couple of questions:
 
  
 
 1. Does Evergreen require java to function?
 When we try the --enable-java directive with the configure command the 
 make fails with the error message "missing dependency 
 java_memcached-release_2.0.1.jar"
 
 If we use the --enable-python directive it seems to make correctly

I answered these questions when you asked them on the sysadmin list a couple of 
weeks ago. So that others don't spend their time on them, the answer is:

No, Evergreen does not need Java to function. In fact, installing the Java 
components is mostly broken, as are the Java components themselves.
These should probably be removed from OpenSRF at some point.

You also don't need Python, either.


 
 2.  When we try to start osrf we receive two errors.
 
 If we try just "osrf_ctl.sh -l -a start_all" as instructed we receive 
 the error "command not found". We followed the path and .srfsh.xml 
 instructions correctly.
 
 If we force the command with /openils/bin/osrf_ctl.sh -l -a start_all 
 we receive the error message
 
 Starting OpenSRF Router
 /openils/bin/osrf_ctl.sh: 158: /openils/bin/osrf_ctl.sh: opensrf_router:
 not found

Are you absolutely certain that you installed OpenSRF and Evergreen with the 
same prefix?

What does running the command ls /openils/bin/ produce?

What about echo $PATH ?

The error points to a problem with your PATH.



 
  
 
  
 
 Any ideas or suggestions?
 
  
 
  
 
  
 
  
 
 Glenn
 Bunton
 (803) 777-2903
 
 Director of Library Technology Services  
 bunto...@mailbox.sc.edu
 
 Thomas Cooper Library
 
 University of South Carolina
 
 Columbia, South Carolina  29208
 
  
 
  
 


--
Jason Stephenson
Assistant Director for Technology Services Merrimack Valley Library Consortium
1600 Osgood ST, Suite 2094
North Andover, MA 01845
Phone: 978-557-5891
Email: jstephen...@mvlc.org


[OPEN-ILS-DEV] Hackfest 2014 Ideas Page [Cf. Browser client dev log update / feedback request on grids]

2014-03-13 Thread Dan Wells
Hello Bill,

Thanks for bringing this up!  I think this is absolutely the linchpin for 
getting the web staff client rolling.

Since this code is still so fresh, it seems ripe for hacking, so I've created a 
conference Hackfest page and added this there.

http://evergreen-ils.org/dokuwiki/doku.php?id=dev:hackfest:eg2014

If anyone else has ideas, please post them!

Thanks again,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Bill 
Erickson
Sent: Thursday, March 13, 2014 3:09 PM
To: Evergreen Development Discussion List; Evergreen Discussion Group
Subject: [OPEN-ILS-DEV] Browser client dev log update / feedback request on 
grids

Hi,

I just posted a wordy update to the browser client dev log with a call for 
input on how we build our UI grids (tables, lists, etc.).  See the first 
section titled "User Interface Grids / Tables".  This particular component will 
be used heavily, so I want to make sure we explore all options before we pick a 
path.

There are a few other miscellaneous updates in there, as well.

http://evergreen-ils.org/dokuwiki/doku.php?id=dev:browser_staff:dev_notes

Thanks,

-b

--
Bill Erickson
| Senior Software Developer
| phone: 877-OPEN-ILS (673-6457)
| email: ber...@esilibrary.com
| web: http://esilibrary.com
| Equinox Software, Inc. / The Open Source Experts



Re: [OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

2014-03-11 Thread Dan Wells
Thanks for +1, Galen.

To help others better evaluate this idea, I've gone ahead and created a new 
account and used it for a test run (setting 2.5.3 bugs from 'Fix Committed' to 
'Fix Released').  The account name is "Evergreen Bug Maintenance", and the 
email address (for filter purposes) is bugmas...@evergreen-ils.org.  

This is just a trial run at this point, more feedback is welcome.

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Galen 
Charlton
Sent: Wednesday, March 05, 2014 2:02 PM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

Hi,

On Wed, Mar 5, 2014 at 7:39 AM, Dan Wells d...@calvin.edu wrote:
 Here's an idea I had a while back which I think would make the 
 Launchpad email system more useful.  As it stands, the system sends 
 out an email for every little change, and, from my perspective, the 
 more meaningful changes sometimes get lost in the noise.  I think we 
 would benefit from a shared maintenance account which bug wranglers, 
 RMs, or whoever else could use to do more systematic and routine edits.

This seems like a reasonable work-around to me. +1

Regards,

Galen
--
Galen Charlton
Manager of Implementation
Equinox Software, Inc. / The Open Source Experts
email:  g...@esilibrary.com
direct: +1 770-709-5581
cell:   +1 404-984-4366
skype:  gmcharlt
web:http://www.esilibrary.com/
Supporting Koha and Evergreen: http://koha-community.org & 
http://evergreen-ils.org


Re: [OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

2014-03-11 Thread Dan Wells
Thanks for the feedback, Kathy.  Yes, it would be my preference to use it for 
moving milestones as well, but the 'bugmaster' account has not yet been 
empowered to do so during this testing phase.

Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

-Original Message-
From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Kathy 
Lussier
Sent: Tuesday, March 11, 2014 1:31 PM
To: open-ils-dev@list.georgialibraries.org
Subject: Re: [OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

I like it! I've already created a filter for that e-mail address and was happy 
to see those notifications quietly go in the trash. Could we use it for 
changing milestones too?

Kathy

Kathy Lussier
Project Coordinator
Massachusetts Library Network Cooperative
(508) 343-0128
kluss...@masslnc.org
Twitter: http://www.twitter.com/kmlussier

On 3/11/2014 1:26 PM, Dan Wells wrote:
 Thanks for +1, Galen.

 To help others better evaluate this idea, I've gone ahead and created a new 
 account and used it for a test run (setting 2.5.3 bugs from 'Fix Committed' 
 to 'Fix Released').  The account name is Evergreen Bug Maintenance, and the 
 email address (for filter purposes) is bugmas...@evergreen-ils.org.

 This is just a trial run at this point, more feedback is welcome.

 Thanks,
 Dan


 Daniel Wells
 Library Programmer/Analyst
 Hekman Library, Calvin College
 616.526.7133

 -Original Message-
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of 
 Galen Charlton
 Sent: Wednesday, March 05, 2014 2:02 PM
 To: Evergreen Development Discussion List
 Subject: Re: [OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

 Hi,

 On Wed, Mar 5, 2014 at 7:39 AM, Dan Wells d...@calvin.edu wrote:
 Here's an idea I had a while back which I think would make the 
 Launchpad email system more useful.  As it stands, the system sends 
 out an email for every little change, and, from my perspective, the 
 more meaningful changes sometimes get lost in the noise.  I think we 
 would benefit from a shared maintenance account which bug 
 wranglers, RMs, or whoever else could use to do more systematic and routine 
 edits.
 This seems like a reasonable work-around to me. +1

 Regards,

 Galen
 --
 Galen Charlton
 Manager of Implementation
 Equinox Software, Inc. / The Open Source Experts
 email:  g...@esilibrary.com
 direct: +1 770-709-5581
 cell:   +1 404-984-4366
 skype:  gmcharlt
 web:http://www.esilibrary.com/
 Supporting Koha and Evergreen: http://koha-community.org  
 http://evergreen-ils.org



Re: [OPEN-ILS-DEV] a request regarding development communication in the bug tracker

2014-03-05 Thread Dan Wells
Dan, thanks for adding this.  I was curious about this point exactly, since he 
states rather bluntly “the bug tracker is a pretty cumbersome place to have a 
discussion”, which had me scratching my head a bit.

I’ve always been slightly partial towards forum-like environments since they 
have a certain level of inherent organization which isn’t as strong in a 
mailing list, but I’m comfortable with either, and I’d be fine with people 
steering more discussions to the dev list as needed.

Regardless of where discussions happen in the future, I think we should 
consider a change to keep the noise down in the LP bug mail.  I’ll post that 
idea as a separate thread, since we could do it independently of whatever we 
decide here.

Thanks,
Dan

From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan Scott
Sent: Wednesday, March 05, 2014 9:58 AM
To: Evergreen Development Discussion List
Subject: Re: [OPEN-ILS-DEV] a request regarding development communication in 
the bug tracker

One side note from an update on Fogel's effort to produce a 2.0 version of that 
book (and why it has been delayed beyond the original Kickstarter delivery 
date):

I started hearing from a ton of people who had thoughtful, constructive 
suggestions for areas to update. One example: the original edition had a 
section about not having long conversations in the bug tracker (because the 
tracker is a poor tool for that, and mailing lists are better).  Well, multiple 
people have independently written in to say that that might not be so true 
anymore: bug trackers have evolved, they now integrate better with other 
communications mechanisms, and for these and other reasons actual practice in 
many projects has changed -- certain kinds of development conversations do 
happen in the bug tracker now, and it works.

https://www.kickstarter.com/projects/kfogel/updating-producing-open-source-software-for-2nd-ed/posts/578279


[OPEN-ILS-DEV] Shared Maintenance Account for Launchpad?

2014-03-05 Thread Dan Wells
Hello all,

Here's an idea I had a while back which I think would make the Launchpad email 
system more useful.  As it stands, the system sends out an email for every 
little change, and, from my perspective, the more meaningful changes sometimes 
get lost in the noise.  I think we would benefit from a shared maintenance 
account which bug wranglers, RMs, or whoever else could use to do more 
systematic and routine edits.  Then the readers, either by practice or by a 
rule in their mail client, could simply filter out (based on the sender) when 
things, for instance, go from "Fix Committed" to "Fix Released", or from "New" 
to "Triaged".  I am not saying these notices don't have some use, but I think 
they do deserve a different level of attention than a new bug, or a review 
comment.

What do you all think?

Thanks,
Dan



Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





[OPEN-ILS-DEV] Staff Client Prototype - General Feedback

2014-01-27 Thread Dan Wells
Hello all,

I've spent a fair number of hours over the last week getting more familiar with 
Bill's prototype code.  I still have a *lot* more studying to do before I can 
fully appreciate what is happening, but I have at least read through all of the 
code involved, and wanted to share a few thoughts on what I have encountered.

Overall, my first impression of the code is very positive.  Everything is very 
clean and relatively understandable, even to somebody with basically no 
AngularJS experience.  I never found myself thinking "I can't believe you have 
to write all that just to do XYZ," so I think the AngularJS team has made a lot 
of good choices in how they combine both new and common code structures in a 
way which is both familiar and fantastic.  Naturally, Bill also deserves some 
kudos here for building a highly usable prototype which better prepares the 
rest of us to learn AngularJS ideas through an Evergreen lens.  Thanks, Bill!

One thing I found a little surprising about AngularJS is that it doesn't have a 
first-party library of widgets.  That is, there is no equivalent of Dojo's 
Dijit library in the AngularJS world.  A number of AngularJS widgets are 
available from third parties, with one of biggest collections being the 
AngularUI suite.  I think Bill's choice of this suite's UI Bootstrap is a great 
place to start, since it is the intersection of two very popular projects.  
While I don't agree with everything Bootstrap does aesthetically, the nuts and 
bolts of its layout-oriented CSS is well-tested and solid.

There has been a fair amount of discussion already about the use of Template 
Toolkit in combination with AngularJS in the prototype.  I am very glad to see 
the amount of thought and consideration which has gone into the discussion, and 
I hope to see it continue.  I don't have a lot to add to the debate, but do 
want to make a few observations.  First, in simple quantity terms, Template 
Toolkit (outside of i18n) is not a major factor in how the prototype is coded.  
We have (based on a quick regex count), not counting translation blocks, a 
total of 141 TT blocks in the entire prototype.  With the prototype code 
weighing in at 6000+ lines, we could move the TT bits to a different solution 
down the road and still preserve a clear majority of the work Bill has done 
here.  Second, TT is being used for i18n in 229 places in the prototype, making 
it a more significant decision.  However, AngularJS does not offer a built-in 
i18n string replacement mechanism, so since we are left to find our own 
solution, working with a known commodity is a smart place to start.  Third, the 
community has always had a "working code wins" mentality.  In other words, we 
use existing, working code until something demonstrably better is produced.  
It's not a foolproof strategy, but it does keep the ball rolling, and the 
door is never closed on improvements.
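
For the curious, the numbers above came from a quick regex pass over the 
prototype templates.  Something along these lines gives the general idea (a 
sketch only, not the exact pattern I used, so expect slightly different 
counts):

#!/usr/bin/perl
# Sketch of a "quick regex count": tally Template Toolkit directives in
# the .tt2 files under the given directory, splitting out the
# [% l('...') %] translation blocks from everything else.
use strict;
use warnings;
use File::Find;

my ($tt, $i18n) = (0, 0);

find(sub {
    return unless -f && /\.tt2$/;
    open my $fh, '<', $_ or return;
    my $text = do { local $/; <$fh> };
    while ($text =~ /\[%-?\s*(.*?)\s*-?%\]/gs) {
        ($1 =~ /^l\s*\(/) ? $i18n++ : $tt++;
    }
}, @ARGV ? @ARGV : '.');

print "Plain TT directives: $tt\n";
print "Translation blocks:  $i18n\n";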

I have a few more specific concerns which came up in my review, but since they 
are fairly targeted, they will probably be better served to be discussed 
separately.

Thanks again to Bill and to everyone else who has taken the time to contribute 
to the staff client development discussion.

Sincerely,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





Re: [OPEN-ILS-DEV] Features and defaults and navigation (was: [Bug 1261939] Re: Add per-library TPAC pages with schema.org structured data support)

2014-01-22 Thread Dan Wells
Hello all,
Thanks, Dan, for bringing this to the list.  It is a better place to have the 
discussion, I think.  I don’t have a lot to add beyond the original post, but 
do wish to clarify a couple of my views.

"As far as I know slam dunk criteria has never been formally applied to any 
previous changesets. Perhaps we should formalize how we make such decisions 
and apply it consistently for _all_ new features, otherwise conflict, hurt 
feelings, and wasted time may result."
I agree with this 100%.  I hope it was clear that “slam dunk” was simply an 
attempt to articulate my personal criteria for whether a new behavior should be 
optional or not. I’d love to see us reach a consensus with guidelines which can 
be consistently applied.  It might also make a difference whether we are talking 
about options which affect data or options which affect only display, as 
display level options are easier and less dangerous all-around.

"The power of the default dictates whether libraries are likely to even be 
aware of the feature…"
I also agree with this, and I would not argue that “slam dunk”-age be applied 
to choosing default behavior.  I would say a new feature should be “on” by 
default (for the reasons you gave) unless:

a)  It is *clearly* meant to serve a niche audience; OR

b)  It significantly changes an already entrenched behavior or expectation.
In the case of the new library pages, neither of these concerns apply, so I’d 
vote that it should be on by default.  I would also say that defaults should be 
revisited as needed, as gradual adoption of a new feature may shift either of 
these concerns to a different side.

"While hover help via a title attribute or the like might help for desktop 
browsers, the mobile browser situation is more complicated."
Agreed, but shouldn’t we take what we can get?  I could really go either way 
(hence my posing it as a question in the first place).
"I'll admit that I shudder a bit at the thought of 90's-era external link / 
more info icons and such for navigation; I think that would clutter up the 
display even further."
I know that Wikipedia still uses external link icons, and I’m sure other major 
sites do as well.  I think it is more important when a site has a high 
percentage of links per page (which we certainly do).  They also use a 
“tooltip” feature for their citations, which is really handy.  Obviously this 
is just one site, but we could do worse in selecting a site to draw inspiration 
from.  I think it is at least possible to add icons and styling in way which 
adds clarity, not clutter.  Besides, the 90s weren’t *that* bad ☺
For all the rest, I am happy to wait and hear other thoughts.

Thanks again, Dan!

Dan


[OPEN-ILS-DEV] Vote Needed: Release Manager 2.6 Approval

2013-12-19 Thread Dan Wells
Hello all,

Back at the dev meeting on Nov. 12, I made an offer to continue as RM for an 
abbreviated 2.6 release.  In the meeting, the discussion concluded with Galen 
suggesting that voting the same day as the proposal seems rushed -- I propose 
that we give a week, then assent via email if no other proposal is made.

A week has turned into a month, but in the spirit of no time like the present, 
I think it is time we make this official in one way or the other.  Obviously no 
other proposal has been made, and rather than continue to wait passively for 
assent, I'd like to call for a simple up/down vote on whether we accept my 
offer of continuation.

Thanks all,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





Re: [OPEN-ILS-DEV] 2.5 client issues

2013-12-02 Thread Dan Wells
Hello,

I can't reproduce this, but here is something you could try which might shed 
some light on your problem.  First, log in using srfsh to get an auth token:

srfsh# login admin YOUR_PASSWORD

Next, use the returned token (the long hex value after "Login Session:") to try 
running the ranged.retrieve manually.

srfsh# request open-ils.actor 
open-ils.actor.org_unit_setting.values.ranged.retrieve "YOUR_AUTH_TOKEN", 4

The 4 there is BR1 in the example data; adjust if needed.

If that request doesn't at a minimum return an empty hash, something else is 
wrong, and hopefully we'll get a better error message.

Good luck,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133

From: open-ils-dev-boun...@list.georgialibraries.org 
[mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Peter Lux
Sent: Thursday, November 21, 2013 10:29 AM
To: open-ils-dev@list.georgialibraries.org
Subject: [OPEN-ILS-DEV] 2.5 client issues

Hi,

I have two fresh installs, one on debian squeeze, one on ubuntu precise.

For both instances, the http response is fine as are srfsh checks against the 
admin login supplied when the base sql script is run. However, with just that 
raw ( no records or changes to default 2.5 sql) the 2.5 windows client fails 
after authentication and workstation registry with the following error.

Network or server failure. Please check your Internet connection to 
137.149.200.52 and choose Retry Network. If you need to enter Offline Mode, 
choose Ignore Errors in this and subsequent dialogues. If you believe this 
error is due to a bug in Evergreen and not network problems, please contact 
your help desk or friendly Evergreen administrators, and give them this 
information:
method=open-ils.actor.org_unit_setting.values.ranged.retrieve
params=["9b3fd3b2e90fb00dd79585b7b119ab63",4]
THROWN:
{"fileName":"oils://remote/js/dojo/dojo/dojo.js","lineNumber":152}
STATUS:


I'm not seeing errors in the /openils/var/log files although you can see 
successful connections.   The client in all cases authenticates successfully 
and has registered the workstation.

Any idea if this is a known issue?


[OPEN-ILS-DEV] Question About Translation Process

2013-11-26 Thread Dan Wells
Hello all,

I've become a little more familiar with the translation setup, and based on 
that limited understanding, I have a concern about how we use translation files 
in older versions.

Right now, it seems like our practice is to (occasionally) run update_pofiles 
on older branches.  However, I think those .po files are only coordinated with 
master.  The end result is that we end up losing translation strings in older 
versions whenever that string is replaced or removed from master.

For example, 2.3 had the string "Lost, Claimed Returned, Long Overdue, Has 
Unpaid Billings".  This string got changed in master, but not in 2.3.  
Therefore, it is commented out in, for instance, po/lang.dtd/fr-CA.po, even 
though it is still needed for 2.3 interface translation.
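
To make the "commented out" part concrete: assuming the standard gettext 
convention of prefixing obsolete entries with "#~", a quick scan like the 
following will list the strings an older branch may still need but which now 
exist only as obsolete entries.  Sketch only; the path and pattern are 
illustrative.

#!/usr/bin/perl
# Sketch: print msgids that a .po file now carries only as obsolete
# ("#~") entries, e.g.:
#   #~ msgid "Lost, Claimed Returned, Long Overdue, Has Unpaid Billings"
# Usage: perl list_obsolete.pl po/lang.dtd/fr-CA.po
use strict;
use warnings;

my $po = shift or die "usage: $0 FILE.po\n";
open my $fh, '<', $po or die "cannot open $po: $!\n";

while (my $line = <$fh>) {
    print "$1\n" if $line =~ /^#~\s*msgid\s+"(.*)"/;
}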

Does this make sense?  If so, is it the problem I think it is?

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





[OPEN-ILS-DEV] Questions About Query Parser

2013-11-13 Thread Dan Wells
Hello all,

Having made the leap from 2.3 to 2.5, we have run into a few issues with the 
updated query parser, and rather than dig into something I know from the 
surface to be very complex, I am hoping someone might shed some light on what 
is going on.  The issues I am finding *seem* like bugs, but maybe I am missing 
something fundamental in my queries.

You see, we have an external system (SFX) which does automated queries of the 
catalog to find book content.  These queries are largely based on ISBN, and 
when we upgraded to 2.5, they started to fail in strange ways.  After much 
poking and prodding, I have boiled things down to a couple simple cases with 
surprising results.  In both cases, the order of operands changes the result 
set despite using what I think is a commutative operator (||).  You can test 
these queries using our current catalog, http://ulysses.calvin.edu/ , but I can 
also take some time to find similar issues using Concerto records if it comes 
to that.

Case 1a:
item_form(s) && identifier|isbn:0830837035 || identifier|isbn:1844743829 (no 
results)
vs.
identifier|isbn:1844743829 || identifier|isbn:0830837035 && item_form(s) (1 
result)

In this case, I would have expected both to return 1 result.  I also see the 
same behavior even if the given ISBN is identical (a contrived example):

Case 1b:
item_form(s) && identifier|isbn:0830837035 || identifier|isbn:0830837035 (no 
results)
vs.
identifier|isbn:0830837035 || identifier|isbn:0830837035 && item_form(s) (1 
result)



The next case is similar, but with slightly more nuance.  I have two of the 
same title, one print, one electronic.  If I OR the ISBNs together, it works:

Case 2a:
identifier|isbn:074944990X || identifier|isbn:0749452897 -- WORKS

identifier|isbn:0749452897 || identifier|isbn:074944990X - WORKS

However, if I add a third ISBN to the mix, I now get different results 
depending on the order of operands:

Case2b:
identifier|isbn:074944990X || identifier|isbn:0749452897 || 
identifier|isbn:7313054289 -- DOESN'T WORK (print result)

identifier|isbn:074944990X || identifier|isbn:7313054289 || 
identifier|isbn:0749452897 -- DOESN'T WORK (print result)

identifier|isbn:7313054289 || identifier|isbn:074944990X || 
identifier|isbn:0749452897 -- DOESN'T WORK (no results)

identifier|isbn:7313054289 || identifier|isbn:0749452897 || 
identifier|isbn:074944990X -- DOESN'T WORK (no results)

identifier|isbn:0749452897 || identifier|isbn:074944990X || 
identifier|isbn:7313054289 -- DOESN'T WORK (e result)

identifier|isbn:0749452897 || identifier|isbn:7313054289 || 
identifier|isbn:074944990X -- DOESN'T WORK (e result)

I do seem to get the same behavior when using a more compact query notation 
(which I believe should be identical in effect):

Case 2c:
identifier|isbn:(074944990X || 0749452897) -- WORKS

identifier|isbn:(074944990X || 0749452897 || 7313054289) -- DOESN'T WORK (print 
result)

identifier|isbn:(0749452897 || 074944990X || 7313054289) -- DOESN'T WORK (e 
result)



Based on when development in query parsing was most active, I imagine this 
behavior has existed since 2.4.  Can anyone verify that?  Also, is there an 
explanation for this behavior which I may be missing?  If not, can anyone more 
familiar with this code at least narrow down what is causing these issues?  I'm 
willing to dive in if necessary, but given the complexity of this code, I may 
not soon have enough free time to effectively troubleshoot this.

Finally, I am happy to move the conversation over to LP if that is a better 
venue, but I was struggling with pinpointing exactly what this bug affects (and 
therefore how to file it properly), so I thought I would first seek input from 
the list.

Thanks,
Dan


Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





[OPEN-ILS-DEV] Release Manager for 2.6

2013-11-12 Thread Dan Wells
Hello all,

Now that 2.5.0 is officially out the door, we can turn our attention toward 
establishing a release manager for 2.6.  At the Hack-a-Way, I took a few 
minutes to reflect on how I felt 2.5 had gone, and at one point joked about my 
desire to stay on for 2.6.  During that talk, and in the days and weeks since, 
I have been surprised (and honored) by individuals expressing their support for 
this idea, so I think it is time to officially address it.

Like all jokes, this one had a grain of truth.  I can be a perfectionist, and 
while the 2.5 release went well, I do tend to dwell on the things I could have 
done better.  Were I to stay on for 2.6, my primary goal would simply be to 
compress the release into 4 months, and get us back on track for 
March/September releases.  This is something I had hoped to pull off for 2.5, 
but in this case, my inexperience became my failure.

So, in short, I think it is very important that we get back to the prescribed 
schedule, and I now have a better understanding of how to set and keep 
realistically scheduled milestones which will accomplish that.  If others agree 
with this goal, and want me to stay on for a few more months, I will certainly 
accept this duty.  On the other hand, I do admit that I am also somewhat worn 
by my work for 2.5, so if anyone else wants to carry out this task (or proposes 
and finds support for a different plan altogether), I would also be quite happy 
to step aside.

Sincerely,
Dan



Daniel Wells
Library Programmer/Analyst
Hekman Library, Calvin College
616.526.7133





[OPEN-ILS-DEV] NEW Hack-a-Way Dates - Sept. 17-19

2013-05-24 Thread Dan Wells
Hello all,

The Hack-a-Way is now scheduled for Sept. 17-19.  Thank you to everyone for 
your patience as we sorted out the details.  As I stated before, we will put 
out a call for firm commitments later on this summer.

Sincerely,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] Notice: Hack-a-Way Dates Being Reconsidered, Stay Tuned

2013-05-17 Thread Dan Wells
Hello again,

I apologize for my lack of event juggling skills, but we are being forced to 
reconsider the Hack-a-Way dates due to some issues with the facilities.  Please 
disregard my prior announcement, and expect a final determination early next 
week.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] Hack-a-Way Dates - Sept. 24-26

2013-05-16 Thread Dan Wells
Hello all,

Thank you to everyone who replied to the Doodle poll.  Based on those results, 
the Sept. 24-26 dates have been selected.

Later on this summer we will put out a call for firm commitments, but in the 
meantime, if you didn't reply to the poll and think you will attend, please let 
me know right away.  I don't want to be caught without enough reserved space 
when the time comes!

Thanks again,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] 2.5 Release Manager Proposal

2013-04-23 Thread Dan Wells
Hello all,

I wish to offer myself to be the release manager for Evergreen 2.5.  Since we 
have neither any guidelines nor any history of release manager proposals, I 
will try to outline in broad strokes how I would guide the development process.

A release cycle is fairly short, so I think a release manager might start by 
coordinating efforts around an idea which is simple, meaningful, and doable.  
In my case, I would like to tackle some of our lingering whitespace issues.  To 
keep the scope to something reasonable, I would limit the efforts of this round 
to tab replacement only, and only in the Perl code.
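
(For clarity, the mechanical part of the change is simple.  The sketch below 
assumes a four-space tab width and takes its file list from the command line; 
both of those details would be settled before the first milestone.)

#!/usr/bin/perl
# Sketch only: convert tabs to four spaces, in place, for the Perl files
# named on the command line.  Tab width and file selection are
# assumptions, not decisions.
use strict;
use warnings;

die "usage: $0 file.pm [...]\n" unless @ARGV;
$^I = '.bak';            # edit in place, keeping a .bak copy of each file
while (<>) {
    s/\t/    /g;         # one tab becomes four spaces
    print;
}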

Second, to help manage both the whitespace transition and also the other parts 
of the roadmap, I would like to try splitting up the development period into 
smaller segments.  While other terms would also work, for simplicity, we will 
generally call these milestones.  We would likely need just two additional 
breaks beyond the usual alpha/beta/RC schedule.  I am thinking something along 
the lines of:

June 1: 2.5-M1 (milestone 1)
July 1: 2.5-M2 (milestone 2)
August 1: 2.5-alpha
August 15: 2.5-beta
Sept. 1: 2.5-RC
Sept. 15: 2.5 Final

These first-of-the-month dates are general guidelines, not deadlines, and are 
chosen simply to be easy to remember (for instance, since June 1 is a Saturday, 
M1 likely wouldn't come out until the following Monday).  Still, if 2.5 is to 
come out before the end of September, we only have two weeks of buffer space, 
so the schedule will be followed as tightly as possible.

The milestones will be used in three ways:

1) All features on the roadmap will be targeted at one of the first three 
milestones (milestone 1 - alpha).  It will be highly suggested (but not 
required) to split larger features into smaller distinct components which fit 
into a single milestone.  Feature creators should plan to complete their work 
with enough time for testing and inclusion in the intended milestone.

2) Immediately following each milestone release, 1/3 of the Perl codebase will 
have its tabs replaced.  The division will be announced well in advance of 
the first milestone, so that feature creators can suggest alternative division 
arrangements with the goal of minimizing potential conflicts with feature work. 
 I will also attempt to develop and document best practices to work around any 
whitespace conflicts which do occur.

3) The above milestones will also serve as the primary communication schedule 
for the wider community.  At each break, I will summarize what got in, what 
didn't, and what is planned for future.

Finally, in complete contrast to the simple goal of whitespace cleanup, I would 
like for the community to make another run at improving some of the broad 
issues of Evergreen search.  I am mentioning this last because I do not want 
this to be a primary focus of the 2.5 release, as that is likely a setup for 
failure.  Nevertheless, like most hard problems, we would benefit from 
continually revisiting this issue in an iterative fashion, and I am interested 
in helping to foster discussion and organizing work toward any kind of 
improvement in this area.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] OpenSRF 2.1.3?

2013-03-25 Thread Dan Wells
Hello Galen,

Thanks for pointing out this thread to me.  Since I was personally confused by 
#799 showing up in only 2.2 but badly affecting my 2.1 install, I am also 
highly in favor of a 2.1.3 release.

Thank you,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 3/15/2013 at 11:11 AM, Galen Charlton g...@esilibrary.com wrote:
 Hi,
 
 I'm also willing to do an OpenSRF 2.1.3 release.  At first glance, the
 recent committed fixes that seem most useful for a bugfix release are:
 
 #799 MultiSession response polling results in high CPU usage
 
 and maybe
 
 #1015273 Debian Wheezy install target
 
 I'd appreciate feedback on whether a (possibly final) release in the
 2.1.x series is needed.
 
 Regards,
 
 Galen
 --
 Galen Charlton
 Manager of Implementation
 Equinox Software, Inc. / The Open Source Experts
 email:  g...@esilibrary.com 
 direct: +1 770-709-5581
 cell:   +1 404-984-4366
 skype:  gmcharlt
 web:http://www.esilibrary.com/ 
 Supporting Koha and Evergreen: http://koha-community.org 
 http://evergreen-ils.org



Re: [OPEN-ILS-DEV] Define the hold pickup location based on the physical location of the workstation

2013-02-15 Thread Dan Wells
Hello Simon,

I think ctx.user.ws_ou() is what you are looking for.

Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 2/15/2013 at 5:27 PM, Mai, Hieu Trung hieu@mnsu.edu wrote:
 Hi all,
 I’m wondering if we have the way to do this?
 Because our libs want to define the hold pickup location (for patrons that 
 don’t have the default hold pickup location) based on the physical
location 
 of the workstation that that staff has been using.
 For example, the workstation is at Library C (the organization was he/she 
 selected at the 1st time creating his/her profile with the staff client)
 When he places hold for a patron A, he want the pickup location will be 
 defined as his above physical library (Library C).
 In this file: /openils/var/template/opac/parts/place_hold.tt2, I think I 
 have to fix the below code:
 
 [% l('Pickup location:') %]
 [%-  test = What_Variable_Should_I_Use_Here;
 INCLUDE build_org_selector name='pickup_lib' id='pickup_lib'

 value=test can_have_vols_only=1 %]
 
 So do we have any variable (something like ctx.default_pickup_lib) that help

 me to get the physical location of the workstation I am using?
 Thank  you !
 Simon.
 
 ==☺ ♥ ♫ ☺ ♥ ♫ ☺ ♥ ♫ ☺ ♥ ♫ ☺ ==
 Hieu Mai (Simon)
 Systems Developer - PALS
 A Program of the Minnesota State Colleges and Universities
 Email: hieu@mnsu.edu 
 (master.simo...@yahoo.com)
  Every day may not be good… but there’s something good in every day


Re: [OPEN-ILS-DEV] Request for installation assistance

2012-10-15 Thread Dan Wells
Hello Larry,

Based on what I see here, step 9 was successful.  The 'admin'/'open-ils' 
credentials are old values which used to be hard-coded into the install, so 
your 'egadmin' test is what is relevant here, and it looks fine.  I have 
updated step 9 of the 'checking_for_errors' page to better reflect the fact 
that the admin account is no longer always the same out-of-the-box.

It also looks like you are expecting your Linux system accounts to be able to 
login to Evergreen, but that is not correct.  The only Evergreen account which 
exists after a typical install will be the admin account you created (i.e. 
'egadmin'), so when testing the staff client login, make sure to use those 
credentials.

Dan


Thank you for the troubleshooting link - I should have tried it before my
initial contact.

I have followed the troubleshooting procedure as per the instructions and
Bombed at step 9 as follows:
opensrf@eg-test:~$ srfsh
srfsh# login admin open-ils

Received Data: 0bc600d1a3da01b16e102fa63f2c77be


Request Completed Successfully
Request Time in seconds: 0.009520


Received Data: {
  "ilsevent":1000,
  "textcode":"LOGIN_FAILED",
  "desc":"User login failed",
  "pid":2713,
  "stacktrace":"oils_auth.c:596"
}


Request Completed Successfully
Request Time in seconds: 0.015721

Login Session: (none).  Session timeout: 0.00


If I attempt to login using my egadmin account I get success as follows:
opensrf@eg-test:~$ srfsh
srfsh# login egadmin egadmin

Received Data: c7dba4da8811043445a24e5395d595e0


Request Completed Successfully
Request Time in seconds: 0.008854


Received Data: {
  "ilsevent":0,
  "textcode":"SUCCESS",
  "desc":"Success",
  "pid":2713,
  "stacktrace":"oils_auth.c:523",
  "payload":{
    "authtoken":"fdb02acf251bd887ff953fd2b8820c4b",
    "authtime":420
  }
}


Request Completed Successfully
Request Time in seconds: 0.381639

Login Session: fdb02acf251bd887ff953fd2b8820c4b.  Session timeout:
420.00

--

The requested log files have been attached for the exception of a router log
which does not exist.
PS.  I can't authenticate using the staff client with the Linux accounts for
opensrf or the user account larry.
There is an OPAC running but I see no way to use it for authentication.


Thank You,
Larry Arnold
WVLC UNIX Admin.


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 10/11/2012 at 10:10 AM, Sharp, Chris csh...@georgialibraries.org 
 wrote:
 Hi Larry,
 
 I recommend you follow the steps for troubleshooting at this link:
 
 http://evergreen-ils.org/dokuwiki/doku.php?id=troubleshooting:checking_for_errors
 
 Then feel free to report back any problems you encounter from there.
 
 Hope that helps,
 
 Chris
 



Re: [OPEN-ILS-DEV] adding library hours of operation to fine accrual

2012-08-30 Thread Dan Wells
Hello Peter,

This has been on our wish-list for a long time, but at present has not yet made 
it onto the do-list.  I am interested in collaborating, but given my current 
project list, my contribution would likely be confined to a testing/code-review 
role.

Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 8/30/2012 at 10:37 AM, Peter Lux p...@upei.ca wrote:
 Hi,
 We are interested in looking at some options to make fine generation check
 Library Hours of Operation as well as the current check against the Closed
 Dates. The aim would be for hourly fine items to accrue based on actual
 time that a Library is open on an hourly basis as well as factoring in the
 Closed Dates.
 
 An example would be Laptops checked out on an hourly basis. Currently, they
 still accrue hourly fines over the early morning hours when the library is
 technically closed for business.
 
  I thought I would post this here first to see if anyone currently has a
 hack for this and to see if others have any interest in this as a
 configurable feature .
 
 
 
 Peter
 
 Robertson Library
 UPEI



Re: [OPEN-ILS-DEV] 2012 Hack-A-Way - Sign up and Dates

2012-08-24 Thread Dan Wells
Hello Hack-A-Way Folks,

I am planning my travel times and budget, and have one question:  Does the 
subsidized hotel rate apply to the night of Oct. 9 and/or Oct. 13?  I would 
think 'yes' for the 9th at least, but want to be sure.

Thanks,
Dan

 On 8/10/2012 at 11:20 AM, rogan.ha...@yclibrary.net wrote:

 The 2012 Hack-A-Way now has a registration page:   
 http://www.planetreg.com/E8108472214594 
 
 Registration is free!
 
 The Evergreen Hack-A-Way will be held at the Equinox offices in  
 Duluth, Ga. The hack will run from October 10th through October 13th,  
 roughly 9-5 each day. We'll try to fit in plenty of time for  
 interesting food and excursions along the way.
 
 *Breakfast and lunch are included.
 *Transportation to/from the airport is included.
 *Transportation to any excursions is included.
 *Hotels will be subsidized by Equinox and will cost attendees no more  
 than $50 (US) per night for single occupancy.
 
 We look forward to seeing you!
 
 We're still finalizing some hotel details but many thanks to the folks  
 at ESI for their hard work on this.
 
 --
 Rogan Hamby
 Manager Rock Hill Library & Reference Services
 York County Library System
 
 Outside of a dog, a book is a man's best friend. Inside of a dog it's  
 too dark
 to read. - Groucho Marx



Re: [OPEN-ILS-DEV] Feature proposal: Detailed serials-view in OPAC

2012-08-20 Thread Dan Wells
Hello Olli,

First, I am really glad to see you will be building on Lebbeous's current code. 
 It's certainly the best public view we have of the managed serials data, and I 
would love to see it enhanced to the point that it can integrate more fully 
with the summary statement view (something which I've been working toward since 
March, but with a lack of tuits).  I am not sure if that is really on your 
agenda, but what I want to emphasize more than anything is that we need (in my 
opinion) to avoid adding any more 'either-or' type options (i.e. 'this does A 
and not B, and this does B and not A'), but instead really focus on 
configurable integration if at all possible (i.e. 'this does A, B, or C, or any 
combination thereof').  To achieve that goal is going to take a lot of open 
communication, so please don't hold back with letting us know your progress, or 
any questions or ideas you might have (the overly reserved man says to the guy 
with the pony-tail sticking out of the top of his head).  ;)

Second, Lebbeous will sometimes call me the academic serials guy, but what he 
usually means is that my library emphasizes the holdings statement over 
item-level management.  We work this way because we don't circulate any of our 
serials, we generally keep them forever (though with greater electronic access, 
this is changing all the time), and we have vast amounts of holdings data which 
is in statement form only.  While I think this is common for academics, I can't 
say with any certainty that the majority of academics work this way, and if we 
create a system which is good enough that we can get item-level management for 
free (that is, without the staff needing to worry about it on a day-to-day 
basis), I believe we would take it.  So, while we might struggle for labels, I 
am truly excited to have another perspective and skilled developer onboard, and 
I hope we can keep pushing together toward something which comprehensively 
solves the problem of serials for everyone.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] 2.2 upgrade from rc1

2012-08-14 Thread Dan Wells
Hello Martha,

Yes, you will want to add that line between the UPDATE and the first ALTER 
TABLE.  Since 0706 and 0710 are two versions of the same idea, the upgrade file 
probably combines the two (so that is where the line came from).  To help 
explain what is happening, some of the constraints on the table are (by 
default) deferred while in a transaction, so that line tells the DB to go ahead 
and process them right away so we can alter the table in this same transaction.

Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 8/14/2012 at 4:42 PM, Martha Driscoll drisc...@noblenet.org wrote:
 I'm trying to upgrade a training server from 2.2-rc1 to 2.2.1.  The last 
 script registered in config.upgrade_log is 0709 so I ran 0710 but got an 
 error:
 
 Here is what I typed:
 psql -U evergreen -h localhost -v eg_version=null -f 0710*
 
 Here is what I got:
 ERROR:  cannot ALTER TABLE "issuance" because it has pending trigger events
 
 I looked through the 2.1-2.2-upgrade-db.sql script and it contains this 
 line after 'SELECT evergreen.upgrade_deps_block_check('0710', :eg_version);'
 
 -- If we don't do this, we have unprocessed triggers and we can't alter 
 the table
 SET CONSTRAINTS serial.issuance_caption_and_pattern_fkey IMMEDIATE;
 
 Should I just add that to the 0710 script or are there other differences 
 I should be aware of?



[OPEN-ILS-DEV] 'My Account' vs. 'Your Account'

2012-07-31 Thread Dan Wells
Hello all,

Apologies if I missed any prior discussion, but as I am working on implementing 
TPAC for my library, I noticed that the 'My Account' link from JSPAC is now 
labeled as 'Your Account Log in'.  I have no problem with either wording, and 
it isn't a huge deal, but I want to make sure we are being consistent and 
deliberate with this label.  In particular, the current alpha Android app uses 
'My Account', so I am trying to determine for certain which direction we wish 
to go as a group before I go ahead and change it to match the TPAC.

Since it interested me, I just spent the last 30 minutes or so looking at 20 
top Internet retailers across a variety of product types to see which sort of 
labeling is more common.  Here are the results:

My Account:
barnesandnoble.com
bestbuy.com
buy.com
costco.com
dell.com
homedepot.com
jcp.com
kohls.com
officemax.com
petco.com (in 'My Petco' menu)
petsmart.com (only visible after 'Sign in')
sears.com (listed as 'My Profile')
staples.com
target.com
toysrus.com
zappos.com

Your Account:
amazon.com
lowes.com
nordstrom.com

Other:
newegg.com ('Your Account' within 'My NewEgg')


Rather than spend any more time providing an analysis of this data, I would 
like to step back and hear if others have any strongly held opinions about this.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Scheduling the next developer's meeting

2012-05-21 Thread Dan Wells
Hello,

I guess I was too late in filling out the survey, as that is the only slot I 
cannot make.  Since there aren't any better slots in terms of overall 
availability, I don't think there is anything to be done, so this is just an 
FYI concerning my absence.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 5/21/2012 at 1:53 PM, Kathy Lussier kluss...@masslnc.org wrote:
 Hi all,
 
 The next Evergreen developer's meeting will be held in IRC on Wednesday, 
 May 23 at 2 p.m. EDT, 11 a.m. PDT, 18:00 UTC.
 
 
 Kathy
 
 --
 Kathy Lussier
 Project Coordinator
 Massachusetts Library Network Cooperative
 (508) 756-0172
 (508) 755-3721 (fax)
 kluss...@masslnc.org 
 Twitter: http://www.twitter.com/kmlussier 
 
 On 5/18/2012 2:51 PM, Kathy Lussier wrote:
 Hi all,

 Please excuse any cross postings.

 It's been a few weeks since the last developer's meeting. I've set up a
 Doodle to schedule a meeting for next week. Monday is a Canadian
 holiday, but please let us know your availability on any of the other
 days by going to http://www.doodle.com/6rafsivdm7ui7pr3.

 Thanks!
 Kathy





Re: [OPEN-ILS-DEV] After 2.2 naming / schedule

2012-05-17 Thread Dan Wells
Hello all,

To answer the most immediate questions, I certainly agree that the next version 
should be called 2.3 and come out in September.

Beyond that, I think I prefer traditional version numbering over Ubuntu style.  
I think Rogan is dead-on with his anxiety comment, but it can also work the 
other way.  That is, there is something about a major version number change 
which makes it easier to get excited about and rally around.  It has a pronounced 
big-deal feel which is almost completely lacking from Ubuntu releases.  Then 
again, I suppose the actual goal might be to avoid big deal releases, but 
where's the fun in that?

Another common use of major versions is implied external compatibility, and I 
could see us possibly wanting that someday.  For instance, we might say that if 
you write a VuFind connector now using public APIs, we will do what it takes to 
not break it for 2.x, but when 3.x rolls around all bets are off.

The biggest argument behind date-based versions is that they are simple and 
easy, but are we giving up too much potential meaning for that convenience?  
Ubuntu is both an amalgamation of many different products and in some ways a 
consumer of software, so I think their versioning priorities are 
understandably different.

Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Interesting in contribute to the create android client(s) for ever green project

2012-03-30 Thread Dan Wells
Hello Gayan,

Thank you for your interest in the GSoC Evergreen project.  I apologize for not 
replying sooner, but our answers are pretty generic at this point, so maybe you 
figured things out from other questions being asked.

There is currently one week left in the GSoC application timeline.  We have 
many students already who are interested in the Android idea, so I expect the 
competition for that opening will be very high.  That said, this is my first 
year as a mentor, so I really don't know how things will turn out.  If you have 
some Android code to share, please do.  You may send it to the list or directly 
to me.

You may have heard all this already, but a good starting point for more 
information about developing for Evergreen is our ideas page [1] and our 
documentation of the developer virtual image [2], which is probably the fastest 
way for you to get up and running with a test Evergreen system.

Our main application requirement is that students should submit a patch or 
point to a Git branch that fixes a small bug or adds a small enhancement to 
Evergreen.  One weakness of Evergreen is that the developer documentation is 
scant, so the best way to begin learning the system is to both try it out and 
read through some of the code itself.  You can also see a list of known bugs 
[3], and you can limit that list to bite-size bugs, which are simple bugs we 
have identified to help new developers get started.

Hope this helps,
Dan


[1] http://evergreen-ils.org/dokuwiki/doku.php?id=dev:summer_of_coding_ideas 
[2] 
http://evergreen-ils.org/dokuwiki/doku.php?id=dev:quick-start_introduction_and_virtual_image
 
[3] https://bugs.launchpad.net/evergreen 

 On 3/24/2012 at 9:25 PM, gayan sukumal wiharagoda 
 gayanwiharag...@gmail.com
wrote:
 Hi,
 I am third year computer engineering student at university of Peradeniya in
 Sri Lanka. I have decided to work in mobile development in my hole
 career. I am relay Interesting in contribute to the create android
 client(s) for ever green project.I think that it is new project.Can some
 one guide me to what I want to do now.
 
 
 cheers,
 Gayan.



Re: [OPEN-ILS-DEV] [OPEN-ILS-GENERAL] Reshelving

2011-10-25 Thread Dan Wells
Hello Tim,

Cron can be tricky for a number of reasons, but one thing which stands out to me 
here is the '/usr/bin/perl'.  The 'reshelving_complete.srfsh' script is not a 
perl script but a SRF Shell script, so we don't need to call the Perl 
interpreter in this case.  Assuming you can successfully run 
'reshelving_complete.srfsh' from the normal shell prompt, I would first try 
removing the '/usr/bin/perl' from your Cron command and see what happens.

Dan

 On 10/25/2011 at 4:14 PM, Tim Spindler tspind...@cwmars.org wrote:
 We have the cron job running on the reshelving status but nothing ever
 changes.  The cron job log is
 
 Oct 25 16:05:01 eg-training /USR/SBIN/CRON[25245]: (opensrf) CMD (cd
 /openils/bin && /usr/bin/perl ./reshelving_complete.srfsh)
 Oct 25 16:05:01 eg-training /USR/SBIN/CRON[25243]: (CRON) error (grandchild
 #25245 failed with exit status 255)
 
 Any ideas what may be wrong? I'm not sure what the grandchild error is.  We
 have gone through and assigned a 6 hour reshelving interval and all the
 items are well passed the time period.
 
 Thanks for any help you can provide.



Re: [OPEN-ILS-DEV] Printing reciepts from Evergreen with extended charset

2011-09-14 Thread Dan Wells
Hello Vaclav,

Please open a bug for this at https://bugs.launchpad.net/evergreen so that we
can keep better track of this issue.

The attached diff is at least a possible avenue for fixing this problem.  I am
not very familiar with this code, but what I have done so far is set the
encoding for both the preview and at least one print path to 'utf-8', then
passed the actual message text through unescaped.  This diff will not apply
directly unless you are running rel_2_0_9, as that is part of one of the paths,
but applying the changes by hand should be very doable.

This change worked for me in very brief testing.  Please comment on the bug
you open if this change works for you.  

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 9/14/2011 at 7:32 AM, Václav Jansa vaclav.ja...@techlib.cz wrote:
 Hello,
 I apologize for a crosspost. We have started running Evergreen 2.0.9 in 
 a small school library in the Czech Republic
 We have two problems. One is with printing reciept with Czech chars 
 inside. Example from preceding email:
 
 When I try to insert the expression "probe +ěščřžýáíé=ú" into the input 
 field in the Receipt Template Editor, it returns "probe 
 +%u011B%u0161%u010D%u0159%u017Eýáíé=ú".
 The problem is independent to location setings (en_US, or cs_CZ)
 
 I could not find any solution.
 
 Thanks in advance
 
 Vaclav Jansa


util_print.diff
Description: Binary data


Re: [OPEN-ILS-DEV] Call for votes to merge template toolkit opac to master (was: template toolkit master merge branch)

2011-08-19 Thread Dan Wells
+1

I'm excited.

Dan

 On 8/17/2011 at 1:20 PM, Bill Erickson erick...@esilibrary.com wrote:
 On Mon, Aug 08, 2011 at 03:02:57PM -0400, Lebbeous Fogle-Weekley wrote:
 Bill's merge seems to work well to me.  I haven't come across any
 features that don't work in this branch.  Let's do this!
 
 Hi All,
 
 As discussed in the IRC dev meeting yesterday, I'd like to make an
 official call for yeas and nays for merging the Template Toolkit OPAC
 branch into Evergreen master in preparation for delivery with Evergreen
 2.2.
 
 The code will come from the combined master merge branch:
 
 http://git.evergreen-ils.org/?p=working/Evergreen.git;a=shortlog;h=refs/heads/collab/berick/template-toolkit-opac-master-merge
 
 Consider this a +1 from me.  
 
 Thanks
 
 -b



Re: [OPEN-ILS-DEV] installation issue with Evergreen 2.0.8 on Ubuntu 11.04

2011-08-17 Thread Dan Wells
Hello Jp,

I think your original message got through, so no need to worry.

As for your problem, I think it is likely that you are running into this bug:

https://bugs.launchpad.net/evergreen/+bug/826844

I hit it myself a few days ago installing 2.0.8.  It can effectively prevent 
INSERTs into the actor.org_unit table, which then means no users, which then 
means no login.  If you are willing, please try replacing (in your Evergreen 
source download):

Open-ILS/src/sql/Pg/005.schema.actors.sql

with the code found at:

http://git.evergreen-ils.org/?p=working/Evergreen.git;a=blob_plain;f=Open-ILS/src/sql/Pg/005.schema.actors.sql;hb=890fa6b9dfa5ec7968ba1f79a5e2c96c84196cb6

After that, recreate the schema using the 
Open-ILS/src/support-scripts/eg_db_config.pl step, stop all, then restart all 
(all meaning router and perl and C services).  Please report back to let us 
know where you end up.

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 8/17/2011 at 1:30 PM, jean-philippe robichaud
jean.philippe.robich...@gmail.com wrote:
 It seems that this message wasn't originally delivered.  I'm trying again
 now.
 
 Please let me know if this is the wrong mailing list.  The Documentation
 Pages are telling users to send this kind of report to the dev' list, let
 me know if I should use the General list instead.
 
 Thanks.
 
 Jp
 
 On Mon, Aug 15, 2011 at 10:36 PM, jean-philippe robichaud 
 jean.philippe.robich...@gmail.com wrote:
 
 Hello everyone.

 Originally, I installed EG 2.0.3 on a fresh Ubunto 10.04 box and the
 installation went fine.  Unfortunately, I didn't had the chance to put this
 box in production (for a very small church library) and time passed by.  I
 decided to upgrade the box to Ubuntu 11.04 and push the EG version to 2.0.8
 so I followed the instructions again (the opensrf release at that time was
 still on the 1.6 branch, now it is 2.0.0).

 So I dropped the postgres evergreen db and followed the new installation
 guide (installing opensrf, creating the postgres db, then installing
 evergreen).

 Now everything seems fine according to settings-tester.pl, but I can't
 login using the srfsh shell nor the xulrunner interface (the web-site shows
 up properly it seems)

 Here is what I have when I try to login (the 'egadmin/eglise' user/pw are
 the one I gave as admin-user/admin-password to the eg_db_config.pl command).  
 The srfsh.log (attached) is not helping me ...

 **


 opensrf@biblio:~/evergreen$ srfsh
 srfsh# login egadmin eglise

 Received Data: bba887e71566956dfb8cbc1c76b6f0fa

 
 Request Completed Successfully
 Request Time in seconds: 0.004986
 

 Received Data: {
   "ilsevent":1000,
   "textcode":"LOGIN_FAILED",
   "desc":"User login failed",
   "pid":9640,
   "stacktrace":"oils_auth.c:565"
 }

 
 Request Completed Successfully
 Request Time in seconds: 0.024701
 
 Login Session: (none).  Session timeout: 0.00
 srfsh# exit


 Now here is what the setting-tester.pl is reporting.  In the zip file
 attached are the openils config files, the srfsh log as well as the
 .srfsh.xml config file.  Any help would be really appreciated.  I can
 provide more files and run some more tests if needed.  Thanks for this great
 software and for your help!

 Jp

 opensrf@biblio:~/evergreen$
 ./Evergreen-ILS-2.0.8/Open-ILS/src/support-scripts/settings-tester.pl
 LWP::UserAgent version 5.835
 XML::LibXML version 1.70
 XML::LibXML::XPathContext version 1.70
 XML::LibXSLT version 1.70
 Net::Server::PreFork version 0.99
 Cache::Memcached version 1.29
 Class::DBI version 3.0.17
 Class::DBI::AbstractSearch version 0.07
 Template version 2.22
 DBD::Pg version 2.17.2
 Net::Z3950::ZOOM version 1.26
 MARC::Record version 2.0.3
 MARC::Charset version 1.31
 MARC::File::XML version 0.93
 Text::Aspell version 0.09
 CGI version 3.43
 DateTime::TimeZone version 1.23
 DateTime version 0.66
 DateTime::Format::ISO8601 version 0.06
 DateTime::Format::Mail version 0.3001
 Unix::Syslog version 1.1
 GD::Graph3d version 0.63
 JavaScript::SpiderMonkey version 0.20
 Log::Log4perl version 1.29
 Email::Send version 2.198
 Text::CSV version 1.21
 Text::CSV_XS version 0.80
 Spreadsheet::WriteExcel::Big version 2.37
 Tie::IxHash version 1.21
 Parse::RecDescent version 1.965001
 SRU version 0.99
 JSON::XS version 2.32

 Checking Jabber connection for user opensrf, domain private.localhost
 * Jabber successfully connected

 Checking Jabber connection for user opensrf, domain public.localhost
 * Jabber successfully connected

 Checking Jabber connection for user router, domain public.localhost
 * Jabber successfully connected

 Checking Jabber connection for user router, domain private.localhost
 * Jabber successfully connected

 Checking database 

Re: [OPEN-ILS-DEV] Export/re-import of bib records as part of an external clean-up process

2011-07-27 Thread Dan Wells
Hello Jeff,

We do something like this annually (in our case, using LTI for authority 
cleanup).  I can't say our methods are particularly advanced, but I would be 
happy to share what we have done.  Are you on 2.0, or an earlier version?

Dan

 On 7/27/2011 at 9:57 AM, Jeff Godin jgo...@tadl.org wrote:
 Greetings-
 
 Does anyone on this list have experience with exporting bib records,
 transforming/cleaning up the records, then re-importing the records
 without disrupting existing holdings, without resulting in changed
 record ids, etc?
 
 This specific context would be where bibs are being sent off to an
 external service for clean-up.
 
 I'm also somewhat interested in metabib/fingerprint re-generation
 after the re-load, and handling of merges.
 
 I can think of a few possible approaches, but I'm interested in
 hearing from others who have done this before.
 
 Thanks!
 
 -jeff



[OPEN-ILS-DEV] Extending Authentication

2011-07-27 Thread Dan Wells
Hello all,

This email is meant to get the ball rolling again concerning alternate 
authentication schemes for Evergreen.  What follows is some rough ideas (most 
borrowed from the original thread and other software), not a complete plan, so 
all input is highly encouraged.  The original thread was pretty fractured, but 
you can read it here:

http://list.georgialibraries.org/pipermail/open-ils-dev/2009-December/005481.html
 

The first big decision is this: does the client need to learn new 
authentication techniques, or do all negotiations happen via a proxy?  Despite 
our current authentication protocol being partially handled client-side, I 
think, ultimately, authentication via proxy will cover the vast majority of 
cases in a much more doable way.  The current native authentication has an 
advantage of being usable over insecure connections, but I cannot see that 
working out for many other protocols, if any, so is it worth the trouble?

Also, for the initial implementation, I think we should limit our design to 
single-factor authentication (that is, just username and password).  Again, 
this a trade-off for covering the vast majority of needs without overdesigning.

So, I believe step one should be to create an 'open-ils.auth_proxy' server to 
mediate all authentication requests (code which does this for the native auth 
already exists in WWW/Proxy.pm, but it is unclear to me whether it can or 
should be leveraged directly).  It would have an 'authenticate' method which 
expects at least two parameters, 'username' and 'password', and an optional 
third parameter, called perhaps 'authenticator', which would supply the 'name' 
of a defined authenticator (in the auth_proxy config).  This server would also 
have an 'authenticators' method which would return to the client a list of 
available authenticators, including labels for display.

One possible layout for defining the available authenticators might be:

<authenticators>
    <authenticator>
        <name>calvin_college_ldap</name>
        <label>College Login</label>
        <target>open-ils.ldap_auth</target>
        <ldap_server>udirectory.calvin.edu</ldap_server>
        <some_other_param>my_param_value</some_other_param>
        ...
    </authenticator>
    <authenticator>
        <name>calvin_seminary_ldap</name>
        <label>Seminary Login</label>
        <target>open-ils.ldap_auth</target>
        <ldap_server>udirectory.calvinseminary.edu</ldap_server>
        ...
    </authenticator>
    <authenticator>
        <name>native</name>
        <label>Evergreen Login</label>
    </authenticator>
</authenticators>

Each entry would have, at minimum, a 'name' and a 'label', and anything other 
than the 'native' would also need a 'target' (the server which the request 
would be passed off to).  It would also contain any other configuration 
information which is specific to that authenticator.  We can use this 
configuration area to:
  - define the available entry points, including any data the service needs to 
authenticate other than the username and password
  - define which authenticators are tried (and the order they are tried in) in 
the absence of a choice from the client

I am specifically using the term 'authenticator' rather than 'protocol' to 
better clarify the fact that you may define more than one 'authenticator' which 
use the same underlying protocol.

With this setup, the client's job is very easy.  At a minimum, send your 
username and password to 'open-ils.auth_proxy.authenticate', then wait for a 
token or a denial.  The client could also first query the '.authenticators' 
method and allow the user to choose which service to use.  The authenticator 
servers would be expected to supply their own 'validate' method which would 
take as arguments 'username', 'password', and an array of options, and return 
(at minimum) a 'pass' or 'fail' code.  If the credentials 'pass', it is then up 
to auth_proxy (perhaps via a new exposed method in open-ils.auth) to generate 
and return the token needed.
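
To make the flow concrete, the dispatch logic I have in mind looks roughly like 
the following.  This is a standalone sketch only -- the real thing would live 
in an OpenSRF service, and the names, credentials, and token format below are 
all illustrative.

#!/usr/bin/perl
# Standalone sketch of the proposed auth_proxy dispatch.
use strict;
use warnings;

# Stand-ins for configured authenticators; each validator answers
# "are these credentials good?" for its own backend.
my %authenticators = (
    calvin_college_ldap => sub { my ($u, $p) = @_; return 0 },  # LDAP stub
    native              => sub { my ($u, $p) = @_;
                                 return $u eq 'egadmin' && $p eq 'demo123' },
);

# Order tried when the client does not name an authenticator.
my @default_order = qw(calvin_college_ldap native);

sub authenticate {
    my ($username, $password, $which) = @_;
    my @try = defined $which ? ($which) : @default_order;
    for my $name (@try) {
        my $validator = $authenticators{$name} or next;
        # On a 'pass', the real service would ask open-ils.auth to
        # generate and return the actual auth token.
        return "fake-token-for-$username"
            if $validator->($username, $password);
    }
    return;   # undef means LOGIN_FAILED
}

my $token = authenticate('egadmin', 'demo123') || 'LOGIN_FAILED';
print "$token\n";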

Thoughts?

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133



Re: [OPEN-ILS-DEV] Extending Authentication

2011-07-27 Thread Dan Wells
Hello Joseph,

That is certainly a related issue but is not directly covered by this proposal. 
 Initially I would expect this case to simply return a 'user not found' type 
error.  The authentication system will only answer the question "Is this person 
who they say they are?"  How the user account gets into the system (whether 
periodically or 'on-the-fly') will need to be covered separately.

Dan

 On 7/27/2011 at 2:20 PM, Joseph Lewis joehm...@gmail.com wrote:
 Hey Dan,
 
 Forgive my ignorance on the issue, so this may be a stupid question, but
 what would happen to users that existed in the external auth system, and
 therefore passed authentication, but didn't have an actual user account in
 Evergreen; could this happen? If so, what would the system do?
 
 Thanks,
Joseph
 
 --
 Public Key: 
 [0xF8462E1593141C16]http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0xF846 
 2E1593141C16
 
 In a time of drastic change it is the learners who inherit the future. The
 learned usually find themselves equipped to live in a world that no longer
 exists.
  - Eric Hoffer
 
 
 
 On Wed, Jul 27, 2011 at 12:11 PM, Dan Wells d...@calvin.edu wrote:
 
 Hello all,

 This email is meant to get the ball rolling again concerning alternate
 authentication schemes for Evergreen.  What follows is some rough ideas
 (most borrowed from the original thread and other software), not a complete
 plan, so all input is highly encouraged.  The original thread was pretty
 fractured, but you can read it here:


 
 http://list.georgialibraries.org/pipermail/open-ils-dev/2009-December/005481.
 html

 The first big decision is this: does the client need to learn new
 authentication techniques, or do all negotiations happen via a proxy?
  Despite our current authentication protocol being partially handled
 client-side, I think, ultimately, authentication via proxy will cover the
 vast majority of cases in a much more doable way.  The current native
 authentication has an advantage of being usable over insecure connections,
 but I cannot see that working out for many other protocols, if any, so is it
 worth the trouble?

 Also, for the initial implementation, I think we should limit our design to
 single-factor authentication (that is, just username and password).  Again,
 this a trade-off for covering the vast majority of needs without
 overdesigning.

 So, I believe step one should be to create an 'open-ils.auth_proxy' server
 to mediate all authentication requests (code which does this for the native
 auth already exists in WWW/Proxy.pm, but it is unclear to me whether it can
 or should be leveraged directly).  It would have an 'authenticate' method
 which expects at least two parameters, 'username' and 'password', and an
 optional third parameter, called perhaps 'authenticator', which would supply
 the 'name' of a defined authenticator (in the auth_proxy config).  This
 server would also have an 'authenticators' method which would return to the
 client a list of available authenticators, including labels for display.

 One possible layout for defining the available authenticators might be:

 <authenticators>
     <authenticator>
         <name>calvin_college_ldap</name>
         <label>College Login</label>
         <target>open-ils.ldap_auth</target>
         <ldap_server>udirectory.calvin.edu</ldap_server>
         <some_other_param>my_param_value</some_other_param>
         ...
     </authenticator>
     <authenticator>
         <name>calvin_seminary_ldap</name>
         <label>Seminary Login</label>
         <target>open-ils.ldap_auth</target>
         <ldap_server>udirectory.calvinseminary.edu</ldap_server>
         ...
     </authenticator>
     <authenticator>
         <name>native</name>
         <label>Evergreen Login</label>
     </authenticator>
 </authenticators>

 Each entry would have, at minimum, a 'name' and a 'label', and anything
 other than the 'native' would also need a 'target' (the server which the
 request would be passed off to).  It would also contain any other
 configuration information which is specific to that authenticator.  We can
 use this configuration area to:
  - define the available entry points, including any data the service needs
 to authenticate other than the username and password
  - define which authenticators are tried (and the order they are tried in)
 in the absence of a choice from the client

 I am specifically using the term 'authenticator' rather than 'protocol' to
 better clarify the fact that you may define more than one 'authenticator'
 using the same underlying protocol.

 With this setup, the client's job is very easy.  At a minimum, send your
 username and password to 'open-ils.auth_proxy.authenticate', then wait for a
 token or a denial.  The client could also first query the '.authenticators'
 method and allow the user to choose which service to use.  The authenticator
 servers would be expected to supply their own 'validate

[OPEN-ILS-DEV] Security Notice: Serials Module in Evergreen 2.0

2011-05-12 Thread Dan Wells
Evergreen Users,

In the course of adding various improvements to the Serials module, it was 
found that permissions had not been correctly applied to the Serial Control 
View interface. This affects Evergreen versions 2.0.0 thru 2.0.6. If you are 
using the Serials module in Evergreen 2.0, it is strongly suggested that you 
follow the steps listed here:

http://www.open-ils.org/dokuwiki/doku.php?id=security_notice:serials_module 

If you are not using the Serials module, you may safely ignore this notice.

Thank you.



Re: [OPEN-ILS-DEV] bug in 1.6 client if very long 856?

2011-05-09 Thread Dan Wells
Hello all,

I spent a few more minutes boiling this down today, and it looks like the 
problem is in the OSRF gateway itself.  This can be seen very simply by sending 
a fake test url to any OSRF gateway which includes the trouble sequence.  For 
instance: (and note: %25 in the following examples is the URL encode for '%', 
and I am leaving the quotes unencoded for clarity, as it seems to not matter)

http://75.101.133.94/osrf-gateway-v1?service=test&method=test&param="%251a"

successfully returns an empty payload:

{payload:[],status:200}

but if we change the 'a' to an 'n':

http://75.101.133.94/osrf-gateway-v1?service=test&method=test&param="%251n"

the gateway fails to return any data at all.  I tested this on a handful of 
Evergreen catalogs with identical results, but did find one with a slightly 
more useful result:

http://demo.evergreencatalog.com/osrf-gateway-v1?service=test&method=test&param="%251n"

This machine, for whatever reason, displays a stock HTTP 500:

An internal server error occurred. Please try again later.



Peeking in my Apache error logs, I finally found a useful trail to follow:

*** %n in writable segment detected ***

At that point I had to shelve this for now, especially not being very familiar 
with the OSRF code, but I am guessing someone else can find the apparently 
rogue printf much faster than I can in any case.


Dan



 On 5/6/2011 at 4:02 PM, Brian Feifarek bfeifa...@q.com wrote:

 The community server at http://75.101.133.94 has 2.0.6 available if anybody 
 has the time to try the process there.
 Brian
 
 Date: Fri, 6 May 2011 12:26:50 -0400
 From: d...@coffeecode.net 
 To: open-ils-dev@list.georgialibraries.org 
 Subject: Re: [OPEN-ILS-DEV] bug in 1.6 client if very long 856?
 
 To follow up on this:
  
 Given that there are no 9/w/n subfields in the 856 fields, that leaves
 us with the possibility that the parentheses and ampersands and
 escaped characters in the URL are giving something in the ingest code
 fits. When I get a chance, I'll see if I can reproduce that problem in
 2.0.6 using the same URLs as your record contains. (But it would be
 great if someone on 2.0.6 beat me to the punch!)
 
 -- 
 Dan Scott
 Laurentian University
 



Re: [OPEN-ILS-DEV] bug in 1.6 client if very long 856?

2011-05-06 Thread Dan Wells
Hello all,

I have done some experiments and made some interesting findings.

1) to get around the massive error box, we have success with 
Tab-Tab-Tab-Space-Tab-Tab-Space :)  I think we are checking a box, then hitting 
ok, and there may be a simpler way, but that works.

2) This problem persists on 2.0.4 at least.

3) This problem is not tied to the 856 field.  Instead, in any field, if you 
insert a percent sign, followed by any number, followed by an 'n', you get 
this error.  And this is not a late April Fools!

4) The error is entirely client side.  There are no errors on the server, and 
the record saves just fine.

At this point I am hoping '%1n' (for example) rings somebody's bell, because 
the hole looks pretty deep from here.

Dan


 On 5/6/2011 at 4:02 PM, Brian Feifarek bfeifa...@q.com wrote:

 The community server at http://75.101.133.94 has 2.0.6 available if anybody 
 has the time to try the process there.
 Brian
 
 Date: Fri, 6 May 2011 12:26:50 -0400
 From: d...@coffeecode.net 
 To: open-ils-dev@list.georgialibraries.org 
 Subject: Re: [OPEN-ILS-DEV] bug in 1.6 client if very long 856?
 
 To follow up on this:
  
 Given that there are no 9/w/n subfields in the 856 fields, that leaves
 us with the possibility that the parentheses and ampersands and
 escaped characters in the URL are giving something in the ingest code
 fits. When I get a chance, I'll see if I can reproduce that problem in
 2.0.6 using the same URLs as your record contains. (But it would be
 great if someone on 2.0.6 beat me to the punch!)
 
 -- 
 Dan Scott
 Laurentian University
 



[OPEN-ILS-DEV] Normalization Concerns, Present and Future

2011-03-08 Thread Dan Wells
Hello all,

We have recently accelerated our timetable for moving to 2.0, and this sudden 
immersion is what is causing our recent increase in bug reporting.  Yesterday 
morning I was exploring a problem with ISSN normalization, and I have spent the 
last day trying to understand not only the immediate issues, but also what I 
think is a more serious issue going forward.

First, the immediate problem.  There is currently code in the 
OpenILS::Application::Storage::Driver::Pg::QueryParser::query_plan::node::atom 
package which attempts to apply normalizations to the incoming query term.  
Because of the way the normalizations are mapped specifically but can be 
applied generically (more on that later), the code does not allow the same 
normalization to be applied more than once.  Here, currently, sameness is 
defined by the normalizer name, but since we can now have params for the 
normalizer, comparing the name is no longer enough to determine sameness.  In 
the case of ISSNs, the default config maps two replace normalizations to the 
ISSN entries, one to remove ' ' (space), the next to remove '-'.  As it stands, 
only one of the two gets applied to the incoming query, as the other is 
discarded for being the same, so the query can fail, even when specifying 
'identifier|issn:', as both are properly applied when the record is ingested.
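
To make the mismatch concrete, here is a rough sketch in plain SQL (illustrative 
only, with a made-up ISSN; the real work of course happens inside the QueryParser, 
not in ad hoc SQL):

SELECT replace(replace('0028-0836', ' ', ''), '-', '') AS indexed_value, -- ingest applies both: '00280836'
       replace('0028-0836', ' ', '')                   AS query_value;   -- query keeps its dash: '0028-0836'
-- the two values no longer compare equal, so the search misses the record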

While there are a number of ways to solve the immediate problem, I think we 
should not do so without considering the larger problem of how normalizations 
are being applied to the incoming query.  As it stands, if you do a 'class' 
level query (e.g. subject:foo), *all* the normalizations for that class are 
applied in sequence.  In the default configuration this doesn't cause immediate 
problems, as they all get the same normalizations anyway, with one exception.  
That exception is the 'identifier' class.  In a stock config, if you do a 
search for 'identifier:foo', you get *both* the ISBN and the ISSN 
normalizations applied to your search term.  If you are searching for an 
identifier which is *not* one of these but happens to look like one, your query 
will fail, as your input will get these incorrect normalizations, while your 
index did not.

Add more normalizations for other subclasses and they would all get applied as 
well.  The potential for bad interactions seems pretty clear.

I am new to this code and may be missing some subtleties, but if this is more 
or less correct, what are our best options for fixing this?  I'll start by at 
least stating two of the more obvious options:

1) create a specific superfield for each class (e.g. subject|subject or 
title|title).  This field will get only the most generic normalizations, and 
queries without a subclass (e.g. subject: or title:) would use only these 
fields with the same generic normalizations applied to the term.  This is 
pretty simple, but could cause some significant bloat to an already heavy 
scheme, and would also mean that these more generic queries do not benefit from 
any particularly beneficial normalizations (e.g. an identifier:foo search where 
foo actually *is* an ISBN would not get the ISBN10/13 interchangeability).

2) apply all the distinct normalization-groups separately, then OR the 
resulting terms.  So, for example, a search for identifier:foo in the default 
setup would result in (as pseudo-query) foo | translate_isbn1013('foo') | 
replace(replace('foo', '-',''), ' ', ''), that is, the unnormalized, or the 
ISBN normalized, or the ISSN normalized.  Cases where all the normalizations 
can be determined to be the same would be unchanged (e.g. subject:foo would 
still be split_date_range(naco_normalize('foo'))), as all the subject 
subclasses use this same normalization-group, so there is nothing to OR 
together.  Under this scheme we maintain the special treatment of particular 
field_entries at the expense of query-time overhead, particularly as the 
normalizations diverge within a class.

Thoughts?

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133



Re: [OPEN-ILS-DEV] Normalization Concerns, Present and Future

2011-03-08 Thread Dan Wells


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133
 On 3/8/2011 at 12:57 PM, Mike Rylander mrylan...@gmail.com wrote:
 
 The drawbacks you mention in both of the above options you list are
 among the reasons we don't do this currently.  In particular, a
  tsquery containing many ORed conditions can be significantly slower than
 the currently constructed queries.
 
 So, an obvious option is to not use whole classes in those cases.
 Now, if knowing that you need to say identifier|isbn or
 identifier|issn instead of simply identifier, there's a mechanism
 for providing aliases such that identifier|isbn can be spelled
 isbn in the config.metabib_search_alias table.  In fact, it's
 already there, just spelled eg.isbn (along with eg.issn).  If you
  want to search both at the same time, using the same value and only
 the appropriate normalizations for the specific fields, you can write:
 eg.isbn: foobar || eg.issn: foobar
 
 With the understood restriction that class-wide searches apply all
 unique normalizations (definition of unique to be addressed, as you
 point out above), is this an unacceptable documentation-based
 solution?

I understand that the specific searches will work in any case, but it makes me 
more than a little uncomfortable that adding normalizers to a specific field 
will silently break searches at the class level.  For instance, using the 
default config, *any* identifier which contains a '-' but is *not* an ISBN or 
an ISSN will no longer be findable when doing an 'identifier' class search.  
Consider a more specific example, a search for an ISMN using the term 
'identifier:979-0-060-11561-5' will not find the record containing that number, 
as the dashes will be removed from the query but still be present in the DB (as 
ISMN fields are not currently normalized), while 
'identifier|ismn:979-0-060-11561-5' works fine.  In my opinion this is more 
than confusing, it's broken.
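
Here is the same problem in miniature as plain SQL (illustrative only):

SELECT replace(replace('979-0-060-11561-5', ' ', ''), '-', '') AS normalized_query, -- '9790060115615'
       '979-0-060-11561-5'                                     AS indexed_value;    -- dashes intact, so no match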

In addition to the two options I mentioned previously, a third simple option 
would be to only allow normalizers at the class level, not the field level.  
This would require a bit more care, perhaps cause some mild to moderate 
bloating, and would lead to some false positives in some cases, but false 
positives beat false negatives every time (especially if the false negatives 
amount to eliminating entire subclasses, as in the ISMN case).  Since we are 
already essentially normalizing at the class level for the vast majority of the 
fields, any transition should be pretty painless.

Dan



Re: [OPEN-ILS-DEV] Normalization Concerns, Present and Future

2011-03-08 Thread Dan Wells
 I understand that the specific searches will work in any case, but it makes 
 me more than a little uncomfortable that adding normalizers to a specific 
field will silently break searches at the class level.  For instance, using 
 the default config, *any* identifier which contains a '-' but is *not* an 
 ISBN 
 or an ISSN will no longer be findable when doing an 'identifier' class 
 search.  Consider a more specific example, a search for an ISMN using the 
 term 'identifier:979-0-060-11561-5' will not find the record containing that 
 number, as the dashes will be removed from the query but still be present in 
 the DB (as ISMN fields are not currently normalized), while 
 'identifier|ismn:979-0-060-11561-5' works fine.  In my opinion this is more 
 than 
 confusing, its broken.

 
 That's where we disagree.  Especially in the case of the identifier
 class (which, given, means nothing special to the /software/, but what
 we as humans put in there /is/ special) I think a strong case can be
 made for the documentation approach.
 
 What this really comes down to, remembering that the compromises made
 were all intended with full knowledge that class-wide searches would
 over-normalize some fields, is intended uses.  What, exactly, are you
 trying to do that can't be done with the prescribed spelling of
 searches I note above?
 

The capabilities of the system as-is are powerful, but it really seems to me to 
be a poor end-user experience.  When a search for 'foo:bar' systematically 
excludes (not just misses, but excludes*) results found by 'foo|baz:bar' 
(something which by all accounts looks like a more specific version of the 
first), and does so via invisible alteration of the query term, no amount of 
documentation is going to make that sensible to our patrons.  If we do not end 
up tweaking how the query works, we should strongly consider hiding or somehow 
disabling the class-level 'identifier' search OOTB.

* my 'excludes' distinction is trying to say that *no* version of the term will 
get the record to come up using that index, as it is impossible to get past the 
non-applicable normalizer

 In addition to the two options I mentioned previously, a third simple option 
 would be to only allow normalizers at the class level, not the field level.  
 This would require a bit more care, perhaps cause some mild to moderate 
 bloating, and would lead to some false positives in some cases, but false 
 positives beat false negatives every time (especially if the false negatives 
 amount to eliminating entire subclasses, as in the ISMN case).
 
 I don't actually think you can do what you need to for all cases with
 only class-level normalizations.  That was the conclusion I came to
 when designing this bit, and without evidence to the contrary I'm
 inclined to work on additive solutions.  Specifically, what of when we
 need some normalizations for some fields in a class but explicitly
 don't want those same normalizations for others.

When normalizing for the purposes of search, all we are really trying to do is 
remove meaningless differences while preserving as much meaning as we can, 
right?  As evidenced by our current setup, this is usually best done in as 
general a way as possible.  For many other situations, normalizations which add 
to the index (rather than remove) will always be safe to apply on the index 
side and are unnecessary on the query side.  If the contents of two fields are 
truly so different that a reasonable normalization cannot be achieved via 
generalized methods, index additions, or simple 'duck-typing' of the data, then 
perhaps they are not in the same class to begin with.

 
 Actually, there's a fourth way, which (if not conceptually,
 physically) removes existing bloat: allow /both/ class and field
 normalization; only apply class normalization to class-wide searches
 and all applicable normalizations to specific fields.
 
 Of course, this falls down in the other direction: if you don't supply
 all the field-level normalizers at the class level then you miss some
 normalizations and therefore miss hits.
 

I think this would be a huge step in the right direction.  This makes it at 
least *possible* to use the class-level queries and find everything in the 
subclasses.  If we could consider combining this with a convention that 
field-level normalizers (for purposes of search) do not /remove/ the 
class-normalized version from their index entry (but add whatever they like), 
then I would really think we are golden for the vast majority of 
user-expectations (with a big one being if I see 'foo-bar' in the record and 
type it in exactly, at least that record will come up).

Thoughts?

And, as always, thank you for your consideration.

Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133



Re: [OPEN-ILS-DEV] Display due date on brief display

2011-02-18 Thread Dan Wells
Hello Grant,

Dan's solution looks great, but I wanted to add that in 2.0 only there is also 
an 'open-ils.circ.copy.due_date.retrieve' method which will get just the due 
date and nothing else, which could save on some overhead if you really just 
need the date.  Let us know if you want to try it and need any help dropping it 
in to Dan's code.

Dan

 On 2/18/2011 at 4:29 PM, Dan Scott d...@coffeecode.net wrote:
 On Thu, Feb 17, 2011 at 03:35:14PM -0400, Grant Johnson wrote:
 Hi,
 
 Dan gave UPEI some awesome functionality that display's Call Number,
 Location and Status on the brief results screen.
 As well as the Due-Date in the full record display.
 
 The dojo query that returns Status is:
 
dojo.query('status', cp).forEach(function (status) {
   var pfx = dojo.doc.createTextNode(' (');
   output.appendChild(pfx);
   dojo.create('b', { innerHTML:
 dojox.xml.parser.textContent(status)  }, output);
   var sfx = dojo.doc.createTextNode(')');
   output.appendChild(sfx);
  });
 
 
 Can I, should I, How would I...  modify/add the Due-Date in the brief
 results when the status is checked-out.
 
 On 2.0, you can replace that chunk of text with the chunk at
 http://paste.lisp.org/display/119822 (see the annotation for better date
 formatting).
 
  Caveats: that code tests against the English value for "Checked out"
  which isn't guaranteed to be the case for other languages, and uses a
  hard coded string for "Due date". And it has only been tested on Firefox
 and Chrome.
 
 Also, the dojo.date.locale.format function formats the date according to
 the browser language preference, so most people with a default en-us
 language preference get the questionable MM/dd/YY format instead of the
  perfectly sensible YYYY-MM-DD. You can override this in the options.
 
 Hopefully this is a start, anyway.



Re: [OPEN-ILS-DEV] Apparent MARC import problem in 2.0.x

2011-02-17 Thread Dan Wells
Hello John,

Your example data got me thinking, as in EG the 901 tag is almost always the
last tag, while yours is not.  We are still working out the details, but it
seems that a database trigger meant to strip out the 901 is in fact
stripping out all the datafield tags starting at 901.
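
The classic cause of that symptom is a greedy regular expression.  As a purely 
hypothetical illustration (not the exact pattern used in maintain_901):

SELECT regexp_replace(
    '<record><datafield tag="901"><subfield code="a">x</subfield></datafield>'
    || '<datafield tag="910"><subfield code="a">keep me</subfield></datafield></record>',
    '<datafield tag="901".*</datafield>', '');
-- the greedy '.*' runs to the *last* </datafield>, so the 910 field is removed
-- as well; a non-greedy '.*?' stops at the first closing tag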

So, while a fix is probably imminent, in the meantime you could either:

1) reorder your 901 fields to the bottom of your records
2) test out this attached version of the maintain_901 function

Please do report back if you get a chance to test this new version.

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 2/17/2011 at 11:43 AM, John Craig
jc-mailingl...@alphagconsulting.com
wrote:

 
 
 I just did the simplest possible thing (called
 open-ils.cat.biblio.record.xml.update from srfsh using a record that
 includes a 907, 910, 945, etc field) and checked;  all of the 9xx
 tags came through just fine in the database.
 
 Maybe if you included your program and a sample record we would be able
 to help you, but without that I have to assume the problem is in your
 custom program or in your data.
  I understand what you're saying, but including the program is both 
  impractical (no one is going to have the time to dissect it) and beside the 
  point given these two facts:
 1) I can see the data just fine in whatever tool I apply to the source 
 records. An example:
  901 ▼a4416208▼bUnknown▼c428581▲
  906 ▼a7▼bcbc▼corignew▼d1▼eocip▼f19▼gy-gencatlg▲
  955 ▼apc14 to la00 08-24-93; lg05 08-24-93; to lg03 08-24-93; to sl 08-31-93;
  lg11 09-02-93; lb00 09-03-93; lb10 09-13-93;aa03 9-13-93; CIP ver. pv04 11-01-94▲
  991 ▼bc-GenColl▼hDU740▼i.F64 1994▼tCopy 1▼wBOOKS▲
 2) The tags show up in the osrfsys.log for the call to the method in 
 question--so my program is not stripping them out.
 
  So, how it can be my program's problem is beyond me. What I really wanted to 
  know was whether there was some setting or parameter of which I was unaware 
  that would make the behavior I'm seeing expected. Nothing seems to come to 
 mind, so I'll just work with the situation as I find it.
 
 Thanks for taking the time to run the srfsh test.
 
 John


maintain_901_new_regex.sql
Description: Binary data


Re: [OPEN-ILS-DEV] Monograph Parts

2011-02-16 Thread Dan Wells
Hello Mike,

 On 2/16/2011 at 12:37 PM, Mike Rylander mrylan...@gmail.com wrote:
 On Tue, Feb 15, 2011 at 4:35 PM, Dan Wells d...@calvin.edu wrote:
 Hello Mike,

 At first glance I think this is a very welcome development, and I have a 
 just a few comments.  First, I would advocate for some kind of a 
 'label_sortkey' on biblio.monograph_part.  Even if all it did was pad 
 numbers, it would solve 95% of 'part' sorting problems.
 
 That's a very good idea ... I will make it so.  Proposed initial
 algorithm: strip non-spacing marks, force everything to upper case,
 remove all spaces, left-pad numeric strings with '0' to 5 characters.
 Thoughts?


Sounds good to me.
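
For what it's worth, here is a rough PL/pgSQL sketch of that idea (illustrative 
only, with an invented function name, and skipping the non-spacing-mark step):

CREATE OR REPLACE FUNCTION biblio.example_part_sortkey ( label TEXT ) RETURNS TEXT AS $$
DECLARE
    chunk   TEXT;
    sortkey TEXT := '';
BEGIN
    -- uppercase, drop spaces, then left-pad each run of digits to 5 characters
    FOR chunk IN
        SELECT (regexp_matches(upper(replace(label, ' ', '')), '([0-9]+|[^0-9]+)', 'g'))[1]
    LOOP
        IF chunk ~ '^[0-9]+$' THEN
            sortkey := sortkey || lpad(chunk, 5, '0');  -- 'V.59' sorts as 'V.00059'
        ELSE
            sortkey := sortkey || chunk;
        END IF;
    END LOOP;
    RETURN sortkey;
END;
$$ LANGUAGE PLPGSQL;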


 
 (NOTE: I'm kinda loath to invent something like the call number
 classification normalizer setup for this, and I don't think that will
 work directly with these strings.  And without some field testing we
 won't do a good jobs of covering our bases with anything non-trival.)
 
  Second, and perhaps this was already discarded as a simplification measure,
  but I think we should consider dropping the primary key on 
  asset.copy_part_map.target_copy to allow for multiple parts per copy.  This
  would not only better reflect reality in certain cases, but I think it could
  also lay some groundwork for future bound-with functionality (put 'part's on
  your boundwith records (or let the map point to a part *or* a record), then
  sever the link from call_number to record).

 
 Before spec'ing out this, I'd already begun working up something
 separate to cover (among several other use-cases) bound-with.  I'll
 post that soon (hopefully today).  The short version of why I
 intentionally kept bound-with and monograph parts separate is that the
 former is about aggregating multiple bib records (several metadata
 records involved in one physical thing) and the latter is about
  dis-integration (one metadata record covering multiple physical
 things).  While we /could/ design a subsystem that goes both ways, the
 implicit complexity (and split-brain logic) required outweighs both
 the design and maintenance simplicity of single-function
 infrastructure.  I'm normally in favor of infrastructure re-use, but
 in this case the concepts being modeled have opposite purposes (from
 the bib record point of view).
 
 Is that too ramble-y to make sense? ;)
 


It makes good sense, but I think we could ultimately benefit by putting less
emphasis on a bib record point of view and tilting things a bit more towards
the item point of view.  From the item perspective these proposals are modeling
the same thing, a mapping of items to contents, and the fewer ways we have to
do that, the better (as long as we cover all the cases).  With a simpler
mapping table of item to record(-part), we easily traverse in either direction,
and we have ultimate flexibility.  So, if I have a bib record on my screen, and
I ask the question, which items' contents does this record represent?, we can
simply go record-part(s)-item(s).  On the other hand, if I have an item, and
I ask the question what are the contents of this item?, we can go
item-part(s)-record(s).  Naturally we can traverse related records (via
items) and related items (via records/parts) as well.  This also eliminates the
primacy of call numbers when managing items, which I see as a benefit.

Or stated more simply, I feel our foundational assumptions in relating items
to records should be:
1) Records describe contents
2) Items contain contents
3) Item content boundaries can overlap record content boundaries in various
ways

All that said, I know from experience to trust your judgement (most of the
time ;).  For my own future benefit, do you have cases already in mind where
this flexibility would end up causing 'split-brain' logic?  (Or maybe I have a
split brain...)

Also, I think this quote from Elaine deserves a bit more attention:

 I'm particularly interested in how this would function in a consortium
 like PINES where different libraries might process a multipart set
 differently. For example, one library might process and circulate a 3
 part DVD set as one item, where another might put each in a separate
 container with a separate barcode.

If we want the complete-set copy from Library A to conclusively fulfill a
P-level hold from Library B, we will want to allow multiple parts per copy.  Or
am I missing something?

Finally, for those it may help, here is a quick version of a simple
item-record schema.  The part concerning copy_type is optional, but I wanted it
to show a more complete replacement for the proposed tables:

CREATE TABLE biblio.part (
id SERIAL PRIMARY KEY,
record BIGINT NOT NULL REFERENCES biblio.record_entry (id),
label TEXT NOT NULL,
label_sortkey TEXT NOT NULL,
CONSTRAINT record_label_unique UNIQUE (record,label)
);

CREATE TABLE asset.copy_contents_map (
id SERIAL PRIMARY KEY,
--record BIGINT NOT NULL REFERENCES biblio.record_entry (id

Re: [OPEN-ILS-DEV] BibTemplate and Opera Compatibility

2011-02-15 Thread Dan Wells
Thanks for the feedback.  Attached is a patch against trunk with a couple of 
changes based on these suggestions.

First, the quote removal code is integrated a little more cleanly and also now 
wrapped in a dojo.isOpera condition.  Second, the bib.documentElement is now 
directly inside the item_list query.  This preserves the XMLDocument in the 
'bib' variable for later use in the code (callbacks?), and I don't think we 
lose any possible functionality, as the item_list already assumes we are 
querying for nodes (rather than using some other XMLDocument property).  This 
does, however, shift some onus of responsibility for maintaining Opera 
compatibility to any code which wants to query the 'bib' further down the line 
(not really a bad thing, just making note).

Thoughts?

Dan

 
 There was a need to add quotes around literals in queries for one
 browser or another (Dan Scott attacked this, IIRC), so how about
 replacing the if(1)s with if(dojo.isOpera), at least for the second
 hunk in the patch?  For the documentElement part, there is value in
 having access to the document object from within slot code (maybe not
 a huge amount, but IMO it's not worth removing if we can avoid it), so
 perhaps we need a separate variable to hold that and use it in
 internal queries.  Thoughts?
 
 --miker
 




bt_opera_fix_v2.diff
Description: Binary data


[OPEN-ILS-DEV] BibTemplate and Opera Compatibility

2011-02-14 Thread Dan Wells
Hello all,

I spent some time recently trying to get some typical rdetail BibTemplate code 
to work in Opera, and made a few discoveries worth sharing.  The attached patch 
(against the rel_2_0 file, but with no path) is not at all complete code, and 
in particular the 'if (1)' lines are just placeholders for possibly some other 
condition should we decide to pursue this.

From what I can tell, Opera gets tripped up by two independent aspects of BT.  
First, it cannot properly respond to attributes in a dojo.query() call on JS 
XMLDocument objects (at least the ones EG delivers via unAPI).  That is, it 
will correctly do :

dojo.query("datafield", bib);

but fails on:

dojo.query("datafield[tag=245]", bib);

To get around this, we need to instead query the documentElement of the 
XMLDocument, not the XMLDocument itself.  Second, Opera does not work if the 
attribute is quoted within the query.  That is, it can correctly do :

dojo.query("datafield[tag=245]", bib);

but fails on:

dojo.query("datafield[tag='245']", bib);

or:

dojo.query('datafield[tag=245]', bib);

The workaround here is to simply strip quotes from the query string before 
running it.

In limited testing, these changes did not have negative side effects for 
rdetail display in other current browsers.  I also have not deciphered whether 
Opera is simply be more strict than the other browsers, or whether these are 
genuine Opera bugs.

Please report back if you are able to test this patch.  Also, do you think 
these changes cross the line between 'compatibility fix' and 'compatibility 
hack', and if so, are they worth considering for a minor browser?

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




bt_opera_fix.diff
Description: Binary data


[OPEN-ILS-DEV] Call Numbers for Serials in 2.0

2011-01-11 Thread Dan Wells
Hello all,

I have been reluctant to send this message, mostly due to my hope that things 
would somehow work themselves out in time, but with 2.0 just around the corner, 
I think it would be a mistake to not reach an agreement pre-release on how call 
numbers relate to the serials data.

At this point I will accept my portion of the blame, specifically for my 
replacement of the unit 'label' field with a 'summary_contents' field.  While 
these fields will have the same contents in a large number of cases, this 
change does not allow for all cases, and also clouds the intended purpose.  The 
label field was meant to function primarily as a call number analytic, 
something missing from the current call number setup.  This provides an 
additional, useful display and management layer, something like:

CALL_NUMBER_1
   UNIT_LABEL_1
  ITEM_INFORMATION_1
  ITEM_INFORMATION_2
   UNIT_LABEL_2
  ITEM_INFORMATION_3
  ITEM_INFORMATION_4
...

which might be:

Z671.L7
   V.59
  33108001234567 - 5th Floor
  33108002345678 - Reference
   V.60
  33108003456789 - 5th Floor
  33108004567890 - Reference

As for the items themselves, the printed label would reflect both (CALL_NUMBER 
+ UNIT_LABEL), as expected (e.g. Z671.L7 V.59).

With this approach we gain the following general benefits:
1) Intentional separation - better sorting.  Analytics sort using different 
criteria than the accompanying call number.  Keeping them distinct is the 
surest way to make sure this happens properly.
2) Less duplication of data - easier management.  If a serial run needs a new 
call number, you only need to edit it in one place, not (potentially) hundreds 
of places.
3) Less duplication of data - simpler display.  There is no need to visually 
or programmatically determine whether a different call number indicates a run 
shelved in a different location or a different volume in the same location.
4) Less duplication of data - faster catalog (?).  I am not 100% certain here, 
but my understanding is that org unit scoping in the catalog happens at the 
call number level.  Having an unnecessarily large number of call numbers for 
every serial (when only a few would suffice) adds an additional burden to 
catalog searches.


I am also aware of at least two drawbacks:
1) Units under the same distribution must have a shared 'root' call number.

While this may be a technical limitation, I haven't been able to come up with a 
real world example where this would matter.  Please do share an example if your 
library requires this 'root' to vary between issues of the same distribution 
(that is, issues from the same subscription going to the same place).

2) Serial holdings are different and more complex than monograph holdings.

I currently see this drawback as the more serious of the two.  If we want to 
intentionally scale things back in order to make serial holdings less 
disruptive, I can understand that.  We will be missing an opportunity, and we 
will also need to wait on the benefits outlined above, but at least we will be 
working towards a common goal.


Clearly I am a proponent of the described separation, especially because I feel 
it is well within reach.  It would mean restoring the 'label' field to the unit 
table to allow for a proper level of flexibility, but the amount of overall 
code changes would be fairly minimal (essentially, call number inputs (where 
needed) become unit label inputs).  From an end-user perspective, these 
internal changes mean very little, but the differences are quickly amplified 
when trying to create refined interfaces for holdings display.  And while this 
change doesn't quite get us all in the same boat again, at least we will be 
paddling in the same direction.

Please reply, thanks,
Dan



-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Call Numbers for Serials in 2.0

2011-01-11 Thread Dan Wells
Hello Lebbeous,

Thanks for the thoughtful reply.  The only problem I have with the people 
around here is that everyone is so reasonable all the time :)

My main motivation is to make sure the two serials interfaces are completely 
interchangeable before any live data diverges irrevocably.  Right now they are 
very close, but not quite.

Based on your reaction and the reaction of others in IRC, we'll probably be 
best served to find a way to keep the 'traditional' call numbers for 2.0, but I 
still hope we can do so in such a way that the two interfaces are more coherent 
for the 2.0 release, and thereby better able to tackle the previously stated 
goals in 2.x.

Here is my compromise solution:

1) Distributions will be required to have a 'Receive Call Number Base' field 
populated.
2) Distributions will optionally have a 'Bind Call Number Base' field populated.
3a) When using the 'Batch Receive' interface, the 'Receive Call Number Base' 
will be pre-populated into the call number field, and the user will then append 
to it as needed.
3b) When using the Items interface receive feature, the 'Receive Call Number 
Base' will be combined with the summary contents to create a complete call 
number in a similar fashion.
4) *_call_number fields will be hidden from the editors for now, reserved for 
later use.

These changes provide the following benefits:

1) We can postpone 'unit-awareness' until 2.1.
2) The two distribution editors will be fundamentally compatible.
3) The two receiving interfaces will be fundamentally compatible.
4) We will be well positioned to re-split the call numbers in 2.1 by consulting 
the defined base(s).

One outstanding question is where to store this 'base' data.  If keeping the 
schema intact is paramount, then temporarily annexing unit_label_base and 
unit_label_suffix is one possibility (while referenced in the code, they are 
currently more-or-less unused in practice, AFAIK).  If not, adding two text 
fields to serial.distribution will be necessary.
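
If we go the new-column route, the shape would be something like this 
(hypothetical column names, just to make the idea concrete):

ALTER TABLE serial.distribution
    ADD COLUMN receive_call_number_base TEXT,
    ADD COLUMN bind_call_number_base TEXT;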

Thoughts?

Dan



-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 1/11/2011 at 12:43 PM, Lebbeous Fogle-Weekley lebbe...@esilibrary.com
wrote:

 On Tue, 11 Jan 2011 12:13:15 -0500, Dan Wells d...@calvin.edu wrote:
 Hello all,
 
 I have been reluctant to send this message, mostly due to my hope that
 things would somehow work themselves out in time, but with 2.0 just
 around
 the corner, I think it would be a mistake to not reach an agreement
 pre-release on how call numbers relate to the serials data.
 
 At this point I will accept my portion of the blame, specifically for my
 replacement of the unit 'label' field with a 'summary_contents' field. 
 While these fields will have the same contents in a large number of
 cases,
 this change does not allow for all cases, and also clouds the intended
 purpose.  The label field was meant to function primarily as a call
 number
 analytic, something missing from the current call number setup.  This
 provides an additional, useful display and management layer, something
 like:
 
 
 I think there's no need to take any blame or apologize for anything.  This
 is all very, very new code that has hardly seen any testing by real users
 until recently (and kudos are due to those users now).  It's okay that it's
 evolving, IMO.
 
 CALL_NUMBER_1
UNIT_LABEL_1
   ITEM_INFORMATION_1
   ITEM_INFORMATION_2
UNIT_LABEL_2
   ITEM_INFORMATION_3
   ITEM_INFORMATION_4
 ...
 
 which might be:
 
 Z671.L7
V.59
   33108001234567 - 5th Floor
   33108002345678 - Reference
V.60
   33108003456789 - 5th Floor
   33108004567890 - Reference
 
 As for the items themselves, the printed label would reflect both
 (CALL_NUMBER + UNIT_LABEL), as expected (e.g. Z671.L7 V.59).
 
 [snip]
 
 I think this would be a great and sensible model for organizing serial
 holdings under unit labels and fewer call numbers, but:
 
 - no work that I'm aware of exists for displaying anything about units in
 the OPAC, other than in cases where they stand in for copies and look like
 ordinary copies.
 
 - to my understanding (and I'm not a librarian so I could be wrong on this,
 but I don't think so in this case), some (public only?) libraries simply
  expect that every issue of a serial have its own call number and they want
 things organized this way. Or maybe they're used to other ILSes where
 they've cataloged their serials in this way. Could everyone be convinced to
 adapt to a new model?  Perhaps, but I think we'd be springing a lot on
 users to enforce such a model now.
 
 - the developers in general have been trying to limit their work on 2.0 to
 bug fixes only over the course of several betas and release candidates now.
 I'm willing to help on something like this for 2.something.else, but I
 really think it's late for work like this to be trying

Re: [OPEN-ILS-DEV] Call Numbers for Serials in 2.0

2011-01-11 Thread Dan Wells
Hello Mike,

Thanks for weighing in.  I think there is overwhelming agreement about *what* 
data is needed for a sensible serials display.  There is mild disagreement 
about how best to store that data for optimal usability.  I strongly feel that 
an analytic (for lack of a better term; e.g. "V.1" or "APR 2010") needs 
to be separate from the call number in a clear way, especially when it comes to 
sorting.  The decision to employ a unit 'label' was one way to accomplish this 
in a place which badly needs it (serials) while not interfering with the rest 
of the system.

I have been struggling to come up with code which allows for both while 
favoring neither approach, and I think it would lead to greater success if we 
focus on a single best approach instead.  I am content with this coming in 2.1 
as long as we don't go too far down a wrong path in 2.0 such that large 
amounts of data will need to be adjusted manually (see my other email for a 
shorter-term solution).

Finally, I am not totally sure I understand this:

The enforcement is my primary /technical/ concern.  There are
identified use cases for not requiring this label, and so I don't
think we can go requiring it.

So the question is, can this be a nullable field?

I think an analytic can certainly be nullable, but I am not aware of use cases 
(other than a single unit) where it would make much sense.  Can you let me know 
what you are thinking here?

I would also like to again apologize for bringing this up at such a late stage. 
 I had been content in taking a very measured approach towards 2.0, and only 
today began to feel that we were a little too far apart on this issue to do 
nothing about it before 2.1.

Thanks again,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] Placeholder Auto-barcodes, Take Two

2010-12-14 Thread Dan Wells
Hello all,

Way back in March I sent a message to the list about adding an auto-barcode 
database trigger which didn't get a whole lot of traction, so this is a second, 
simpler attempt at the same idea.

To be clear, I am not talking about functionality where we auto-generate *real* 
barcodes based on barcode sequence logic, as that already exists in the places 
it is needed.  I am instead talking about a simple way to insert a 'filler' 
barcode when cataloging an item which (for whatever reason) doesn't have (or 
need) a real barcode.

In my first message I included a more complex trigger function than was really 
necessary.  I now think the following will suffice:

CREATE OR REPLACE FUNCTION asset.autogenerate_placeholder_barcode ( ) RETURNS 
TRIGGER AS $$
BEGIN
IF UPPER(NEW.barcode) = 'AUTO' THEN
NEW.barcode := '@@' || NEW.id;
END IF;
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;

CREATE TRIGGER autogenerate_barcode
BEFORE INSERT OR UPDATE ON asset.copy
FOR EACH ROW EXECUTE PROCEDURE asset.autogenerate_placeholder_barcode();

As suggested by Jason, this simply uses the copy id to ensure uniqueness.  
Overall consequences of this change would be that all forms of the word 'auto' 
are now effectively reserved from being valid barcode data, as well as any 
barcode starting with '@@'.  It would of course be fine to change either of 
these markers if someone suggests a reason to do so.

Thoughts?

Thanks,
Dan



-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Placeholder Auto-barcodes, Take Two

2010-12-14 Thread Dan Wells
Hello again,

This topic was discussed at the IRC developer's meeting today.  Here is an 
updated version of the function for review:

CREATE OR REPLACE FUNCTION asset.autogenerate_placeholder_barcode ( ) RETURNS 
TRIGGER AS $$
BEGIN
IF NEW.barcode LIKE '@@%' THEN
NEW.barcode := '@@' || NEW.id;
END IF;
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;

CREATE TRIGGER autogenerate_placeholder_barcode
BEFORE INSERT OR UPDATE ON asset.copy
FOR EACH ROW EXECUTE PROCEDURE asset.autogenerate_placeholder_barcode();

This change accomplishes three things:

1. We only need to reserve '@@' barcodes, not all the forms of the word 'AUTO'
2. Any attempt to manually insert a barcode in the '@@' namespace will be 
overridden, thus avoiding any possible collisions.
3. Future interfaces, load profiles, etc. can choose to use any convention they 
like for the triggering barcode (e.g. '@@PLACEHOLDER, '@@AUTO', '@@DUMMY', 
simply '@@', etc.)
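
For example (hypothetical copy id, just to show the trigger's effect):

UPDATE asset.copy SET barcode = '@@AUTO' WHERE id = 12345;
SELECT barcode FROM asset.copy WHERE id = 12345;  -- now '@@12345'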

Feedback always welcome.

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Help with expected valuesinvandelay.import_item_attr_definition?

2010-12-09 Thread Dan Wells
Hello Michael,

I am unsure why some parts of the process are not working for you, but I can 
say that our working 949s use the Org Unit short name (e.g. BR1) for both the 
owning_lib and circ_lib fields, and also that status and location simply use 
the 'name' field (e.g. '3rd floor').  It looks like you are on the right track 
there.

What exact version are you on?  I recall we had some import issues on 1.6.0.0 
which were fixed fairly early on, somewhere in the 1.6.0.2-1.6.0.4 range.

I would perhaps suggest starting with a simpler profile which only defines the 
critical fields and work up from there.  In our experience you do not need to 
populate every column in import_item_attr_definition.

Dan

 On 12/9/2010 at 10:48 AM, Peters, Michael mrpet...@library.in.gov wrote:
 Doh!  Hit send too fast...also meant to copy the results from 
 vandelay.import_item!
 
 evergreen=# SELECT * FROM vandelay.import_item WHERE id=4;
 -[ RECORD 1 ]--+
 id | 4
 record | 27525
 definition | 3
 owning_lib |
 circ_lib   |
 call_number| CD DMBAND
 copy_number|
 status |
 location   |
 circulate  | t
 deposit| f
 deposit_amount |
 ref| f
 holdable   | t
 price  | 9.99
 barcode| 1234567891234
 circ_modifier  | cd-music
 circ_as_type   |
 alert_message  | Private alert message here
 pub_note   | Public Note Here
 priv_note  | Private Note Here
 opac_visible   | t
 
 Sincerely,
 Michael Peters
 Indiana State Library MIS | Inspire.IN.gov Helpdesk | Evergreen Indiana 
 Helpdesk
 office - 317.234.2128
  email - mrpet...@library.in.gov
 
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of 
 Peters, Michael
 Sent: Thursday, December 09, 2010 10:47 AM
 To: Evergreen Development Discussion List
 Subject: Re: [OPEN-ILS-DEV] Help with expected values 
 invandelay.import_item_attr_definition?
 
 If it helps, here also is the MARC I'm attempting to import (along with 
 holdings, embedded in the 949).  I know I have to be REALLY close on this!
 
 Sincerely,
 Michael Peters
 Indiana State Library MIS | Inspire.IN.gov Helpdesk | Evergreen Indiana 
 Helpdesk
 office - 317.234.2128
  email - mrpet...@library.in.gov
 
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of 
 Peters, Michael
 Sent: Thursday, December 09, 2010 10:43 AM
 To: open-ils-dev@list.georgialibraries.org 
 Subject: [OPEN-ILS-DEV] Help with expected values in 
 vandelay.import_item_attr_definition?
 
 Hi all!
 
 I'm experimenting with the Import embedded holdings feature in Vandelay 
 and running into some trouble.  Consider the following:
 
 --SQL to create new profile
 INSERT INTO vandelay.import_item_attr_definition (
 owner,
 name,
 tag,
 owning_lib,
 circ_lib,
 call_number,
 copy_number,
 status,
 location,
 circulate,
 deposit,
 deposit_amount,
 ref,
 holdable,
 price,
 barcode,
 circ_modifier,
 circ_as_type,
 alert_message,
 opac_visible,
 pub_note_title,
 pub_note,
 priv_note_title,
 priv_note
 ) VALUES (
 1,
 'OCLC Connexion Format (Indiana Custom) -- 949',
 '949',
 'a',
 'b',
 'c',
 'd',
 'e',
 'f',
 'g',
 'h',
 'i',
 'j',
 'k',
 'l',
 'm',
 'n',
 'o',
 'p',
 'q',
 'r',
 's',
 't',
 'u'
 );
 
 
 ASSUMED Definitions for how to catalog in Connexion
 
 name| OCLC Connexion Format (Indiana Custom) -- 949
 tag | 949
 owning_lib  | a -- ?
 circ_lib| b -- ?
 call_number | c -- text string containing your call number label
 copy_number | d -- text string or integer
 status  | e -- ?
 location| f -- ?
 circulate   | g -- t or f boolean values
 deposit | h -- t or f boolean values
 deposit_amount  | i -- text string (do not use $)
 ref | j -- t or f boolean values
 holdable| k -- t or f boolean values
 price   | l -- text string (do not use $)
 barcode | m -- text string
 circ_modifier   | n -- text string matching one of the circ modifiers in EI 
 circ matrix (ex. book or video new)
 circ_as_type| o -- text string corresponding to a MARC code (a, j, i, e, 
 o, etc.)
 alert_message   | p -- text string for staff client alert message
 opac_visible| q -- t or f boolean values
 pub_note_title  | r -- text string
 pub_note| s -- text string
 priv_note_title | t -- text string
 priv_note   | u -- text string
 
 
 949 tag snippet from a binary marc to be loaded with Vandelay
 
 ...
  =949  \\$aAPLSD $bAPLSD $cCD BTNHR $eIn process $fAV $gt 

Re: [OPEN-ILS-DEV] Help with expected valuesinvandelay.import_item_attr_definition?

2010-12-09 Thread Dan Wells
Hello Michael,

Things have certainly changed a lot since 1.6.0.0.  For example, we couldn't 
get import to work at all before this changeset (which made it into 1.6.0.1):

http://svn.open-ils.org/trac/ILS/changeset/15356 

I would try to at *least* upgrade to 1.6.0.3 or 1.6.0.4, as quite a few 
significant bugs got squashed early on.

Dan

 On 12/9/2010 at 12:37 PM, Peters, Michael mrpet...@library.in.gov wrote:
  1.6.0.0... hope that's not the culprit!
 
 Sincerely, 
 Michael Peters 
 Indiana State Library MIS | Inspire.IN.gov Helpdesk | Evergreen Indiana 
 Helpdesk
 office - 317.234.2128 
 email - mrpet...@library.in.gov 
 
 
 -Original Message-
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of Dan 
 Wells
 Sent: Thursday, December 09, 2010 12:01 PM
 To: Evergreen Development Discussion List
 Subject: Re: [OPEN-ILS-DEV] Help with expected 
 valuesinvandelay.import_item_attr_definition?
 
 Hello Michael,
 
 I am unsure why some parts of the process are not working for you, but I can 
 say that our working 949s use the Org Unit short name (e.g. BR1) for both the 
 owning_lib and circ_lib fields, and also that status and location simply use 
 the 'name' field (e.g. '3rd floor').  It looks like you are on the right 
 track there.
 
 What exact version are you on?  I recall we had some import issues on 
 1.6.0.0 which were fixed fairly early on, somewhere in the 1.6.0.2-1.6.0.4 
 range.
 
 I would perhaps suggest starting with a simpler profile which only defines 
 the critical fields and work up from there.  In our experience you do not 
 need to populate every column in import_item_attr_definition.
 
 Dan
 
 On 12/9/2010 at 10:48 AM, Peters, Michael mrpet...@library.in.gov 
 wrote:
 Doh!  Hit send too fast...also meant to copy the results from 
 vandelay.import_item!
 
 evergreen=# SELECT * FROM vandelay.import_item WHERE id=4;
 -[ RECORD 1 ]--+
 id | 4
 record | 27525
 definition | 3
 owning_lib |
 circ_lib   |
 call_number| CD DMBAND
 copy_number|
 status |
 location   |
 circulate  | t
 deposit| f
 deposit_amount |
 ref| f
 holdable   | t
 price  | 9.99
 barcode| 1234567891234
 circ_modifier  | cd-music
 circ_as_type   |
 alert_message  | Private alert message here
 pub_note   | Public Note Here
 priv_note  | Private Note Here
 opac_visible   | t
 
 Sincerely,
 Michael Peters
 Indiana State Library MIS | Inspire.IN.gov Helpdesk | Evergreen Indiana 
 Helpdesk
 office - 317.234.2128
  email - mrpet...@library.in.gov
 
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of 
 Peters, Michael
 Sent: Thursday, December 09, 2010 10:47 AM
 To: Evergreen Development Discussion List
 Subject: Re: [OPEN-ILS-DEV] Help with expected values 
 invandelay.import_item_attr_definition?
 
 If it helps, here also is the MARC I'm attempting to import (along with 
 holdings, embedded in the 949).  I know I have to be REALLY close on this!
 
 Sincerely,
 Michael Peters
 Indiana State Library MIS | Inspire.IN.gov Helpdesk | Evergreen Indiana 
 Helpdesk
 office - 317.234.2128
  email - mrpet...@library.in.gov
 
 From: open-ils-dev-boun...@list.georgialibraries.org 
 [mailto:open-ils-dev-boun...@list.georgialibraries.org] On Behalf Of 
 Peters, Michael
 Sent: Thursday, December 09, 2010 10:43 AM
 To: open-ils-dev@list.georgialibraries.org 
 Subject: [OPEN-ILS-DEV] Help with expected values in 
 vandelay.import_item_attr_definition?
 
 Hi all!
 
 I'm experimenting with the Import embedded holdings feature in Vandelay 
 and running into some trouble.  Consider the following:
 
 --SQL to create new profile
 INSERT INTO vandelay.import_item_attr_definition (
 owner,
 name,
 tag,
 owning_lib,
 circ_lib,
 call_number,
 copy_number,
 status,
 location,
 circulate,
 deposit,
 deposit_amount,
 ref,
 holdable,
 price,
 barcode,
 circ_modifier,
 circ_as_type,
 alert_message,
 opac_visible,
 pub_note_title,
 pub_note,
 priv_note_title,
 priv_note
 ) VALUES (
 1,
 'OCLC Connexion Format (Indiana Custom) -- 949',
 '949',
 'a',
 'b',
 'c',
 'd',
 'e',
 'f',
 'g',
 'h',
 'i',
 'j',
 'k',
 'l',
 'm',
 'n',
 'o',
 'p',
 'q',
 'r',
 's',
 't',
 'u'
 );
 
 
 ASSUMED Definitions for how to catalog in Connexion
 
 name| OCLC Connexion Format (Indiana Custom) -- 949
 tag | 949
 owning_lib  | a -- ?
 circ_lib| b -- ?
 call_number | c -- text string containing your call number label
 copy_number | d

Re: [OPEN-ILS-DEV] Serials Schema Tweaks for Alpha2 Release

2010-09-15 Thread Dan Wells
Hello Mike,

Sorry for my slow reply, things are pretty hectic around here with the Fall 
semester starting last week.

 
 Can you describe the purpose of these?  Some seem obvious, but some
 are not (to me).
 

I will start with the changes which I think are straightforward.  First, 
textual_holdings are sometimes used to augment the generated_coverage and at 
other times should completely replace it (generally when the holdings statement 
is too complex to be easily generated or has more detail than we care about).  
The 'show_generated' flag in each summary table is a simple means of indicating 
which type of display we want.  Second, 'summary_method' will be consulted when 
generating the 'generated_coverage' fields, and tells us how any attached MFHD 
records (SREs) should be treated for this distribution in relation to the new 
structured data (attempt to merge the data, generate display data from both 
separately, or generate based on one or the other only).

The changes to serial.caption_and_pattern are not as straightforward, but I 
think they are justifiable.  Any given caption/pattern is only valid for a 
certain period of time (i.e. until it changes).  As it stands, we can infer the 
start and end dates of a caption/pattern only by consulting the attached 
issuances.  In practice this means retrieving and sorting sometimes hundreds of 
issuances to provide for the sorted display of just a few caption/patterns.  As 
I work with these objects more, I am simply often wishing for a more convenient 
access point to this important data.  This also means that the 'active' flag is 
redundant (caption/patterns with end_date-s are not active), but we need to get 
end_date in place before we can start removing the 'active' flag from the code.

 Are there IDL changes to go along with this?
 

I didn't provide an IDL patch simply for lack of time, and wanting to get this 
in before the schema was frozen for a few months.  I will happily provide one 
once this discussion concludes.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] Serials Schema Tweaks for Alpha2 Release

2010-09-13 Thread Dan Wells
Hello all,

I am hoping the attached changes to the serials schema can make it in before 
the alpha2 cutoff (unofficially yesterday-ish).  The schema patch is against 
trunk, but it should work for any recent branches, and the upgrade file will 
need a real upgrade sequence number (currently 'XXX' in both filename and log 
table insert).

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




0XXX.schema.serials-tweaks.sql
Description: Binary data
Index: Open-ILS/src/sql/Pg/210.schema.serials.sql
===================================================================
--- Open-ILS/src/sql/Pg/210.schema.serials.sql  (revision 17627)
+++ Open-ILS/src/sql/Pg/210.schema.serials.sql  (working copy)
@@ -65,6 +65,8 @@
                      CONSTRAINT cap_type CHECK ( type in
                      ( 'basic', 'supplement', 'index' )),
     create_date  TIMESTAMPTZ  NOT NULL DEFAULT now(),
+    start_date   TIMESTAMPTZ  NOT NULL DEFAULT now(),
+    end_date     TIMESTAMPTZ,
     active       BOOL         NOT NULL DEFAULT FALSE,
     pattern_code TEXT         NOT NULL,   -- must contain JSON
     enum_1       TEXT,
@@ -86,6 +88,11 @@
     record_entry  BIGINT  REFERENCES serial.record_entry (id)
                           ON DELETE SET NULL
                           DEFERRABLE INITIALLY DEFERRED,
+    summary_method  TEXT  CONSTRAINT sdist_summary_method_check
+                          CHECK (summary_method IS NULL
+                          OR summary_method IN ( 'add_to_sre',
+                          'merge_with_sre', 'use_sre_only',
+                          'use_sdist_only')),
     subscription  INT     NOT NULL
                           REFERENCES serial.subscription (id)
                           ON DELETE CASCADE
@@ -250,7 +257,8 @@
                           ON DELETE CASCADE
                           DEFERRABLE INITIALLY DEFERRED,
     generated_coverage  TEXT    NOT NULL,
-    textual_holdings    TEXT
+    textual_holdings    TEXT,
+    show_generated      BOOL    NOT NULL DEFAULT TRUE
 );
 
 CREATE TABLE serial.supplement_summary (
@@ -260,7 +268,8 @@
                           ON DELETE CASCADE
                           DEFERRABLE INITIALLY DEFERRED,
     generated_coverage  TEXT    NOT NULL,
-    textual_holdings    TEXT
+    textual_holdings    TEXT,
+    show_generated      BOOL    NOT NULL DEFAULT TRUE
 );
 
 CREATE TABLE serial.index_summary (
@@ -270,7 +279,8 @@
                           ON DELETE CASCADE
                           DEFERRABLE INITIALLY DEFERRED,
     generated_coverage  TEXT    NOT NULL,
-    textual_holdings    TEXT
+    textual_holdings    TEXT,
+    show_generated      BOOL    NOT NULL DEFAULT TRUE
 );
 
 COMMIT;
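
(For illustration only: my reading of show_generated is that it controls
whether the machine-generated coverage or the manually entered textual
holdings should be displayed.  Under that assumption, a display query might
choose between them roughly like this; a sketch, not code from the patch:)

-- Hypothetical display query; the distribution filter value is made up, and
-- the 'distribution' column is assumed from how the summary tables are linked.
SELECT CASE WHEN show_generated THEN generated_coverage
            ELSE textual_holdings
       END AS display_holdings
  FROM serial.basic_summary
 WHERE distribution = 1;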


Re: [OPEN-ILS-DEV] Proposed upgrade to serial schema

2010-07-27 Thread Dan Wells
Hello Scott,

You have divined my intentions exactly.  The only small change I would make is 
to note that the 'shadowed' column will soon have a NOT NULL constraint, so I 
think it makes sense to add it now.

Thanks,
Dan

 On 7/27/2010 at 10:22 AM, Scott McKellar m...@swbell.net wrote:
 The attached is a draft of an upgrade script for the serial schema.  Please 
 review before I commit it (along with the corresponding changes to the schema 
 itself and its install script).
 
 This script is designed to bring the trunk version of the serial schema into 
 sync with Dan Wells' most recent version in the seials-integration branch, 
 except as noted below:
 
 1. I ignored a handful of minor changes in comments and white space.
 
 2. Dan's version doesn't include a couple of triggers present in trunk: 
 b_maintain_901 and c_maintain_control_numbers.  I'm not dropping them from 
 trunk.
 
 3. Where Dan's version adds the column caption_and_pattern to the 
 serial.issuance table, it ostensibly adds it in the middle, between 
 date_published and holding_code.  I don't have a good way to add it anywhere 
 but the end.
 
 4. Dan's version of serial.unit (which inherits from asset.copy) is 
 completely different.  Here's the trunk version:
 
 CREATE TABLE serial.unit (
 label   TEXT,
 label_sort_key  TEXT,
 contentsTEXTNOT NULL
 ) INHERITS (asset.copy);
 
 ALTER TABLE serial.unit ADD PRIMARY KEY (id);
 
 And here's Dan's version:
 
 CREATE TABLE serial.unit (
 sort_key  TEXT,
 detailed_contents TEXTNOT NULL,
 summary_contents  TEXTNOT NULL
 ) INHERITS (asset.copy);
 
 ALTER TABLE serial.unit ADD PRIMARY KEY (id);
 
 Note that each has three text columns but the names are all different.  I 
 *think* what I need to do is:
 
 a. Drop the label column;
 
 b. Rename the label_sort_key column to sort_key;
 
 c. Rename the contents column to detailed_contents;
 
 d, Add a summary_contents column, initially nullable;
 
 e. Copy the detailed_contents column to the summary_contents column;
 
 f. Make the summary_contents column NOT NULL.
 
 Later, someone will presumably massage the two contents columns to make them 
 appropriately different (assuming there's any data in the table in the first 
 place).
 
 If I'm guessing wrong about serial.unit, please let me know.  I can do 
 anything reasonable but I need to know the rules.
 
 I have not yet started looking at the necessary IDL changes.
 
 Scott McKellar
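
A minimal SQL sketch of steps (a) through (f) above, using the column names
quoted from trunk and the serials-integration branch; illustrative only, not
the actual upgrade script:

BEGIN;

-- (a) Drop the label column
ALTER TABLE serial.unit DROP COLUMN label;

-- (b) Rename label_sort_key to sort_key
ALTER TABLE serial.unit RENAME COLUMN label_sort_key TO sort_key;

-- (c) Rename contents to detailed_contents
ALTER TABLE serial.unit RENAME COLUMN contents TO detailed_contents;

-- (d) Add summary_contents, initially nullable
ALTER TABLE serial.unit ADD COLUMN summary_contents TEXT;

-- (e) Copy detailed_contents into summary_contents
UPDATE serial.unit SET summary_contents = detailed_contents;

-- (f) Make summary_contents NOT NULL
ALTER TABLE serial.unit ALTER COLUMN summary_contents SET NOT NULL;

COMMIT;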



Re: [OPEN-ILS-DEV] Upcoming changes to the serial schema

2010-07-26 Thread Dan Wells
Hello Scott,

Despite the fact that some of the other tables have existed for some time now, 
AFAIK, only the serial.record_entry table has been used in any stable release 
of Evergreen.  So I would say it doesn't matter much (the serial.record_entry 
table hasn't changed).

Thanks,
Dan

 On 7/26/2010 at 1:18 PM, Scott McKellar m...@swbell.net wrote:
 Bill has asked me to take some changes that dbwells made to the serial schema 
 in the seials-integration branch and migrate them into trunk.
 
 Question for the community: should the upgrade script try to convert 
 existing data to the new scheme?  Or does it even matter at this point?
 
 Mind you, I make no promises.  I haven't yet looked closely enough at the 
 branch to figure out what all the changes are.  Converting existing data may 
 be moot, easy, difficult, or impossible.  What I'm asking is how much it 
 matters.
 
 Scott McKellar



Re: [OPEN-ILS-DEV] New work in the serials integration branch

2010-07-19 Thread Dan Wells
Hello again,

Just as an FYI, I think at least the basic structural components of the serials 
interface are now all committed. It consists of more rough edges than polish at 
this point, but at least it is out there for the daring. There are still a few 
uncommitted files, and I expect to make regular commits over the next few days 
to shore up the receiving interface. Once that is done, I'll take a few days to 
tackle the 'FIXME's and the most important of the 'TODO's.

I'll send another email when I think it is really worth the effort needed to 
get it set up.

Thanks,
Dan

 Lebbeous Fogle-Weekley 07/15/10 2:15 PM  
Aha! My bad for jumping the gun a bit. Looks like it will be cool. 

Lebbeous 


Dan Wells wrote: 
 Hello, 
 
 There are still some commits forthcoming. The problem you are experiencing 
 has most to do with the fact that the interface defaults to the 'Items' tab, 
 but only the 'Subscriptions' tab code has been committed. There are also IDL 
 and a few schema changes yet to commit. I am going to spend a couple more 
 days on the Items tab to get basic predicting/receiving functional again, and 
 will surely let everyone know when I *think* all the initial pieces are in 
 place. 
 
 Thanks, 
 Dan 
 
 
 On 7/15/2010 at 11:04 AM, Lebbeous Fogle-Weekley 
 wrote: 
 Dan Wells has recently done a lot of work in the serials area, and he 
 has created staff client interfaces that promise to cover large parts of 
 the serials management workflow. Kudos to Dan! 
 
 I'm hoping to draw you out to talk about it a little more, Dan, :-) but 
 also, there seems to be a little bit of code that's missing, or still to 
 be committed? 
 
 In trying to look at your new interfaces, I chose Serial Control View 
 from the Actions for this Record menu of the staff client's opac 
 wrapper, but from there I just get an error to the effect that my_init 
 is undefined, and the interfaces are blank. 
 
 I can see from your code that there should be definitely be something 
 happening there, so I'm just wondering whether some JS files are 
 missing, and/or some XUL overlays (there's a reference to at least one 
 missing file, manage_items.xul, but I haven't made an exhaustive search 
 for others). 
 
 Sorry if I'm drawing attention to ragged edges, but I'm aching to see 
 what's going on in the serials UI! 
 
 Thanks, 
 


Re: [OPEN-ILS-DEV] New work in the serials integration branch

2010-07-15 Thread Dan Wells
Hello,

There are still some commits forthcoming.  The problem you are experiencing has 
most to do with the fact that the interface defaults to the 'Items' tab, but 
only the 'Subscriptions' tab code has been committed.  There are also IDL and a 
few schema changes yet to commit.  I am going to spend a couple more days on 
the Items tab to get basic predicting/receiving functional again, and will 
surely let everyone know when I *think* all the initial pieces are in place.

Thanks,
Dan


 On 7/15/2010 at 11:04 AM, Lebbeous Fogle-Weekley lebbe...@esilibrary.com
wrote:
 Dan Wells has recently done a lot of work in the serials area, and he 
 has created staff client interfaces that promise to cover large parts of 
 the serials management workflow.  Kudos to Dan!
 
 I'm hoping to draw you out to talk about it a little more, Dan, :-) but 
 also, there seems to be a little bit of code that's missing, or still to 
 be committed?
 
 In trying to look at your new interfaces, I chose Serial Control View 
 from the Actions for this Record menu of the staff client's opac 
 wrapper, but from there I just get an error to the effect that my_init 
 is undefined, and the interfaces are blank.
 
 I can see from your code that there should be definitely be something 
 happening there, so I'm just wondering whether some JS files are 
 missing, and/or some XUL overlays (there's a reference to at least one 
 missing file, manage_items.xul, but I haven't made an exhaustive search 
 for others).
 
 Sorry if I'm drawing attention to ragged edges, but I'm aching to see 
 what's going on in the serials UI!
 
 Thanks,



[OPEN-ILS-DEV] ***SPAM*** Re: ***SPAM*** Problem with utf8 and MARC Edit

2010-06-09 Thread Dan Wells
Hello Alan,

Mostly just a stab in the dark, but maybe this is another form of the Encode.pm 
bug mentioned here?:

https://bugs.launchpad.net/evergreen/+bug/525069

In a different post Dan Scott recommends:

Basically, ensure that you have the libencode-perl module installed to
get a working version of the Encode.pm Perl module.

Good luck,
Dan

 On 6/9/2010 at 9:57 AM, Alan Rykhus alan.ryk...@mnsu.edu wrote:
 Hello,
 
 We're having a problem in MARC Edit where, when we try to save a record,
 we get the following:
 
 
 Network or server failure.  Please check your Internet connection to
 balsam.mnpals.net and choose Retry Network.  If you need to enter
 Offline Mode, choose Ignore Errors in this and subsequent dialogs.  If
 you believe this error is due to a bug in Evergreen and not network
 problems, please contact your help desk or friendly Evergreen
 administrators, and give them this information:
 method=open-ils.cat.biblio.record.xml.update
 params=[3cb05162d451a7aa5640490ffde742ca,99254,record
 xsi:schemaLocation=\http://www.loc.gov/MARC21/slim
 http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd\; xmlns:xsi=
 \http://www.w3.org/2001/XMLSchema-instance\; xmlns=
 \http://www.loc.gov/MARC21/slim\;\n  leader 
 .
 .
 .
 subfield code=\c\99254/subfield\n  /datafield\n/record]
 THROWN:
 {payload:[],debug:osrfMethodException :  *** Call to
 [open-ils.cat.biblio.record.
 
 I've traced the problem down to the function 'sub entityize' in
 Application::AppUtils.
 
 In this function there is a call to:
 
  $string = decode_utf8($string);
 
 The problem seems to be that the record(string) is already in utf8. If
 you check the string with:
 
  is_utf8($string)
 
 a true response will be returned. Should this call to decode_utf8() be
 wrapped? for example:
 
 if (! is_utf8($string)) {
 $string = decode_utf8($string);
 }
 
 It seems that the object of decode_utf8 is to put the string into the
 perl internal utf8 format used by perl and to turn the utf8 flag on. If
 the flag is already on, as determined by the is_utf8 call, it does not
 make sense to decode_utf8 a string that is already utf8.
 
 In addition, according to the perl documentation:
 
 is_utf8(STRING [, CHECK]) 
 
 [INTERNAL] Tests whether the UTF8 flag is turned on in the STRING. If
 CHECK is true, also checks the data in STRING for being well-formed
 UTF-8. Returns true if successful, false otherwise.
 
 
 So the is_utf8 call makes sure we have a well-formed string when the
 utf8 flag is indeed on.
 
 gosh I hope this makes sense(because it fixes the problem we're seeing)
 -- al
 
 
 -- 
 Alan Rykhus
 PALS, A Program of the Minnesota State Colleges and Universities 
 (507)389-1975
 alan.ryk...@mnsu.edu 
 It's hard to lead a cavalry charge if you think you look funny on a
 horse ~ Adlai Stevenson



Re: [OPEN-ILS-DEV] ***SPAM*** Overhaul of serial schema

2010-06-02 Thread Dan Wells
Hello Scott,

Thanks for moving this forward.  I have just one small concern (which I will 
immediately drop if I am alone in thinking this).  IMHO, the IDL class ids are 
a bit on the short side.  For instance, in my development code (which I 
*really* expect to get out there soon now that things have finally settled a 
bit more) I used the id 'sdist' rather simply 'sd' for serial.distribution.  It 
is such a handy nickname, I use it not just where it needs to be used but also 
in variable names and whatnot, so general uniqueness becomes really important.  
I think 3 characters can work if the particular assortment is not a common one 
in English (e.g. 'acn'), but otherwise 4 or 5 character ids can really improve 
readability/debugability/searchability.

Again, just IMHO.

Thanks again,
Dan

 On 6/2/2010 at 3:39 PM, Scott McKellar m...@swbell.net wrote:
 I just committed a large patch in trunk to gut the serial schema and replace 
 it with a different set of tables, along the lines of my posting on May 17.
 
 This patch does *not* include the serial.caption_and_pattern table that was 
 proposed  in the meanwhile.  I will add that table later in a separate post. 
 I just didn't want the rest to have to wait for it.
 
 Any patch of this size is likely to include at least a few typos, 
 oversights, and blunders.  In particular it's hard to be sure that the 
 upgrade script will apply cleanly, because I had been tinkering with the 
 serial schema in my own sandbox.  It may take a few more commits to correct 
 any such errors.
 
 Scott McKellar



[OPEN-ILS-DEV] Re: Serials Schema Proposal - Further De-emphasis of MARC as Record Format

2010-05-27 Thread Dan Wells
Hello Mike,

Thanks for the detailed reply.  It helps to know we are still not on the same 
page :)  

First, I want to be clear about what the serial.caption_and_pattern table 
actually contains (in my mind).  The confusion seems to have been brought about 
by my hastily naming a column 'marc' in my original email despite the fact that 
it doesn't contain any actual MARC, just most of the data which the field would 
contain.  I am renaming it 'pattern_code' for clarity.  Here is a pseudo-db 
representation of an example:

row-of-serial.caption_and_pattern [
  id: 1 
  type: basic
  active: t
  enum_1: 'v.'
  enum_2: 'no.'
  enum_3: 'pt.'
  enum_4: null
  enum_5: null
  enum_6: null
  chron_1: '(year)'
  chron_2: '(month)'
  chron_3: '(day)'
  chron_4: null
  chron_5: null
  pattern_code: 
['2','0','a','v.','b','no.','u','12','v','r','c','pt.','u','3','i','(year)','j','(month)','k','(day)','w','j']
]

I have some good ideas of how to create an editor for such rows, and it won't 
be a straight MARC editor.  We could also trim down some minor redundancy in 
'pattern_code', but I don't think it's worth the trouble on either end.

We all understand that reliance on MARC formatted data in general is frail, but 
in my experience working through this, here are some more specific reasons to 
get away from it here:

1. Once a caption/pattern is created, it should be *immutable*.  Changing it 
would redefine the meaning of any attached holding statements, and we don't 
want that.  If it actually changes, we always need to keep the old and create a 
new one.
2. (I should have answered this earlier) While pretty rarely seen (in my 
experience), it is possible for a serial to have two or more active patterns of 
the same type.  Most common will be serials which receive several different but 
regular supplements, such as perhaps a 'buyers guide' every December and maybe 
a 'trends' issue every June.  Even for the 'basic' type, one example I have 
seen is of a serial titled something like 'Oceanography' which might have 
issues subtitled 'Animal Life' in odd months and those subtitled 'Plant Life' 
in even months, and these might be held or bound separately.  Each unit type 
gets an active caption/pattern.  The entire MFHD standard (even the pattern 
portion) is designed around describing a snapshot of 'what we have' (the 
pattern exists for compression and expansion, not really prediction) rather 
than 'what will come', so it simply doesn't consider this problem.
3. MFHD is reliant upon 'link ids' to make sense of itself internally.  
Duplicate link ids or deleting a field with a linked id will cause obvious 
trouble.

Based on these observations (and probably more I am not thinking of), I think 
keeping an ongoing link to a fully editable MFHD record for titles under 
'serial control' is inviting disaster unnecessarily.  We could keep pointing to 
serial.record_entry for the other (non-MARC) DB fields, but I think we gain a 
lot of clarity, safety, and convenience if we deprecate it, allowing it to 
continue as an independent legacy/stop-gap solution.  This will become more 
important as the serial.serial table develops in ways not yet foreseen.

I hope this helps clarify my intentions with these new tables.  The overall 
goal is to be rid of storing MARC internally for serials under 'control', and 
certainly avoiding any direct editing of data at the MARC level.  Is this 
reasonable?  I think so!

Thanks again,
Dan



Re: [OPEN-ILS-DEV] Serials Schema Proposal - Further De-emphasis of MARC as Record Format

2010-05-27 Thread Dan Wells
Hello again,

  pattern_code: 
 ['2','0','a','v.','b','no.','u','12','v','r','c','pt.','u','3','i','(year)',
 'j','(month)','k','(day)','w','j']
 ]

 OK, so just an terminology mismatch so far. That's pretty much what I
 was thinking too (though we'd want to use JSON in the pattern_code
 column, IMO, which has different quoting semantics from your example
 ... but that's immaterial to the example).

Yes, surely this field will be JSON.  In the purest form of lazy-exampleness 
and an effort to edit the MARC original as quickly as possible, I just couldn't 
be bothered with pressing the shift key for all those double-quotes ;)  I also 
didn't realize until now that JSON was stricter than JS in that regard (using 
JSON outside of JS is still pretty new for me).
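
For the record, the earlier pattern_code example written as strict JSON
(double quotes) would be:

["2","0","a","v.","b","no.","u","12","v","r","c","pt.","u","3","i","(year)","j","(month)","k","(day)","w","j"]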

 
 OK ... would this mean that a single MFHD would have two patterns, or
 there would be two MFHD records?  If the former, the l can see the
 duplicate/deleted link id problem, but IMO that's also easy to guard
 against by refusing to store an MFHD that is not self-consistent (and
 we can check for that in obvious ways).
 

Yes, a single MFHD record can have several of the same type of pattern (and 
several of different types, of course).

 As for having two active patterns, it sounds like we just need to
 1) include the link value in the serial.caption_and_pattern row
 (recall, we can enforce correctness on MFHD at insert/update time)
 2) use that link value when looking for the pattern to deactivate
 3) assume a link value of 1 (or 0, if 0-based counting, dunno) when
 there's only one.
 

At the very least we would also need to consider $o (type of unit), but this 
only leads to other problems.  While we can reasonably infer that the highest 
link_id of any given $o is the most likely to be active, we simply *can't* tell 
(from the MFHD) if it actually is or is not.  For instance, maybe that 
supplement just doesn't come any more, got renamed, etc.

But this only matters if the MFHD is trying to be authoritative...

 
 If we say the MARC stored in SRE is completely non-authoritative,
 I'm cool with that.  But we still need something that sits at the top
 of the tree and links through to a bib record.  Ignoring the fact
 that SRE contains MARC, it also fills those two roles.  And, there's
 the benefit that we /could/ generate patterns in batch from that data.
  But think of that as a one-time conversion, not the normal process.
  Also, the legacy/stop-gap is restricted to a single column instead of
 a whole table.
 
 If we make the MARC field nullable (so that we don't require a real
 MFHD to do our work) would you be OK with using SRE as the top of the
 tree and link to BRE, and using your pattern editor to populate SCAP?
 

I am certainly on board with the SRE being completely non-authoritative and 
using it in a one-time conversion role!  I am also fine with it keeping its 
place in the tree if we don't want to bother with a 'sister' serial.serial 
table.  It just *seems* clearer to me from an organization perspective to keep 
them different.  For instance, the OPAC logic might say:

BRE -> has SRE? -> draw section from current display logic

and completely separate:

BRE -> has SS? -> draw section from new display logic

From a schema perspective, the SRE would link to BRE and AOU, but nothing else 
would link to it.  The SS table would take its place in the new tree, and 
likely discard not only the 'marc' column, but also the 'creator', 'editor', 
and 'edit_date' as well.  It also seems odd to me to retain a table named 
'record_entry' which we hope eventually won't have any live MARC records in 
it.  That said, now that it is clear we are at least talking the same 
language, it actually *doesn't* (honestly!) really bother me to keep 
serial.record_entry as is with nullable 'marc' (and perhaps company).  I also 
know from this whole process that you have developed very good instincts in 
working this stuff out, so if your gut is telling you we need to stick with 
the SRE table, that's good enough for me.

 
 And I'm sorry if I sounded snippy in my previous email, it wasn't intended.

I didn't take it that way, but in any case, it can certainly be frustrating to 
try to work through ideas using text when it could probably be hashed out in 20 
minutes in person.  At least we will have some good documentation about why we 
did things we did :)

Thanks again,
Dan


[OPEN-ILS-DEV] ***SPAM*** ***SPAM*** Re: ***SPAM*** Re: Serials Schema Proposal - Further De-emphasis of MARC as Record Format

2010-05-26 Thread Dan Wells
Hello Scott,

First, sorry for the conflation of suggested changes in my original email.  It 
is really two separate proposals:

1) add the serial.caption_and_pattern table, as outlined
2) create a new table (serial.serial (aka serial.base)) in the schema to 
function in place of serial.record_entry going forward (retaining 
serial.record_entry for legacy use only).

The logic for (1) was fairly well stated (I think).  The logic for (2) is:
 a. moving the caption/pattern fields out of the 'marc' column (aka MFHD 
record) in serial.record_entry doesn't leave enough of value in the 'marc' 
column, so the column should be dropped or at least nullable
   b. it doesn't make much sense to have a null 'marc' value in a table named 
'record_entry' (implying a MARC record entry, e.g. biblio.record_entry), so the 
table should be renamed (e.g. serial.serial)
 c. if we are both repurposing and renaming the table, it makes sense to 
keep serial.record_entry around untouched for legacy use (i.e. the libraries 
who have already loaded MFHD records using the current basic functionality).  
serial.serial will be largely the same, but with NO MARC AT ALL :)

So...

 As I understand it, the marc in serial.caption_and_pattern would *not*
 be a copy of the marc in serial.record_entry, but a subset of it, or
 somehow derived from a subset of it.  Is that right?

Yes, correct.  Just a stringified version of the directly related 85X field 
would be kept here.

 Your proposed serial.caption_and_pattern table contains columns named
 enum_1, enum_2, and apparently (judging from the ellipsis) a series of
 enum_[0-9]* columns.  On its face, that doesn't look very normalized to
 me.  Is there a firm, well behaved limit on the number of enum columns?
 Is that a reflection of how MFHD records work (of which I am supremely
 ignorant)?  Or would it make sense to add a child table to hold the
 enums?

Yes, there is a firm limit, sorry for being lazy.  The table will have 6 enum 
fields and 5 chron fields.  That is the extent of the standard and certainly a 
reasonable one.
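
In sketch form (illustrative only; the real table also carries type, active,
and the pattern blob discussed elsewhere in this thread), the caption columns
amount to:

-- Hypothetical sketch showing just the 6 enumeration and 5 chronology caption columns.
CREATE TABLE serial.caption_and_pattern_sketch (
    id       SERIAL PRIMARY KEY,
    enum_1   TEXT,  enum_2  TEXT,  enum_3  TEXT,
    enum_4   TEXT,  enum_5  TEXT,  enum_6  TEXT,
    chron_1  TEXT,  chron_2 TEXT,  chron_3 TEXT,
    chron_4  TEXT,  chron_5 TEXT
);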

 The name caption_and_pattern bothers me a bit too -- not because the
 name itself matters much, but because it suggests, or induces, a bit of
 muddlement.  Does a row in this table contain a caption *and* a pattern?
 Or maybe one or the other?  Or maybe both?  Or neither?  How do we store
 a caption differently from a pattern -- in different enums?

The caption parts are distinct and knowable.  The pattern parts are much more 
fluid.  I think it is reasonable to model the caption parts directly, but the 
pattern parts will be stored in blob form (i.e. the 'marc' column).  As for 
keeping them in one table, there are three related reasons.  One, they are in 
the same field in the MFHD standard (maybe not the best reason, but a 
convenient one).  Two, a pattern only makes sense in the context of the caption 
(e.g. we have 4 No. per V.), so it makes sense to edit and store them 
together.  Three, due to reason two, captions and patterns will always exist in 
a one-to-one relationship; storing them together makes that clear.

 Serial.serial does have a certain alliterative appeal -- like Sirhan
 Sirhan, or Boutros Boutros-Ghali.

This might have been a joke, but I'll gladly give 'serial.serial' a +1.

 We could also consider creating a whole new ser schema, with a
 ser.serial table.  Anybody using the old serial schema could keep
 it around without interference until they're ready to blow it away, or
 forever if they want.

The *only* table being used in the current serial schema is record_entry.  
Keeping our new tables in 'serial' and letting 'record_entry' stick around as 
legacy shouldn't cause too much confusion, IMO.

Ultimately, as Mike suggested in IRC, this is really not a critical change by 
any means.  I am currently coding with this setup in mind, but 
reverting/revising later won't be the end of the world.

Thanks again for all the help,
Dan





-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 5/26/2010 at 11:18 AM, Scott McKellar m...@swbell.net wrote:
 --- On Mon, 5/24/10, Dan Wells d...@calvin.edu wrote:
 
  On 5/24/2010 at 12:25 PM, Scott McKellar
 m...@swbell.net
 
 snip
 
 1-3.  Sorry for not being more clear, but I think
 serial.base (or whatever it is called) will be a direct
 replacement for serial.record_entry everywhere it is used in
 the new schema.  So for starters it will have all the
 fields in record_entry minus 'marc'.  We might consider
 going without the 'edit' related fields as well, since there
 won't be much to edit there anymore.
 
 I believe Mike Rylander has proposed leaving serial.record_entry in
 place, to serve the same role as your serial.base.  The new
 serial.caption_and_pattern table would then be a child of
 serial.record_entry.  We might want to make the marc

[OPEN-ILS-DEV] Serials Schema Proposal - Further De-emphasis of MARC as Record Format

2010-05-24 Thread Dan Wells
Hello,

I will try to be briefer than usual, as I really want to get some initial 
reactions before I go too far in trying to think this out.  Basically, when I 
began working on serials, I kept with the idea that a MFHD record would be 
central to both predictions and holdings.  MFHD being the opaque and complex 
format that it is, this adds some fairly serious complexity and overhead to 
serials management (predicting, receiving, claiming, discarding, and displaying).  What 
we really want is to maintain full compatibility with the MFHD standard without 
the overhead of maintaining a full MFHD record.  The simplest way to do this is 
to keep MFHD field data in some stringified form (where appropriate), but to 
associate this data directly with the element it represents.  This is precisely 
what we have done with the issuance table, as it has fields for storing the 
enumeration/chronology statement (863-5) which the issuance represents.  We 
have also parted out the textual holdings (866-8) to be directly associated 
with the appropriate holding library via the distribution table (and friends), 
and location and item information are handled by the item and unit tables.

At this point we really need to ask, what's left?  The answer: not a whole lot. 
 Chief among the survivors are the caption and pattern statements (853-5).  I 
am proposing we get those out as well.  We will continue using the MFHD data, 
but it will be stored where it is most relevant, not as a central record (which 
we could always regenerate for export purposes).  We will need at least one new 
table, something like:

serial.caption_and_pattern (
    id      SERIAL  PRIMARY KEY,
    base    INT     NOT NULL REFERENCES serial.base (id) ON DELETE
            CASCADE DEFERRABLE INITIALLY DEFERRED,
    type    TEXT    NOT NULL CHECK (type IN
            ('basic','supplement','index')),
    active  BOOL    NOT NULL DEFAULT FALSE,
    marc    TEXT    NOT NULL,
    enum_1  TEXT    DEFAULT NULL,
    enum_2  TEXT    DEFAULT NULL,
    ...
);

The first five columns are essential, but I can see benefits from separate 
columns for each level of enumeration and chronology captions as well.  I also 
propose (as evidenced by column 2) that we create a new root table called 
'serial.base' (or something similar). What do we gain by doing these things?

1) In the most recent schema, holdings data is linked to its matching 
caption/pattern via subfield 8s in various fields 'hidden' within the marc 
column of serial.record_entry.  While it is therefore well-defined, it is also 
invisible to the DB layer.  A separate caption_and_pattern table will allow the 
holdings data to reference its caption/pattern data directly.
2) If we proceed (as planned) to create columns in both caption_and_pattern and 
serial.issuance to hold the non-repeatable enumeration and chronology fields, 
we can reliably query for and display a single issuance or group of issuances 
without consulting the MARC at all.
3) We can easily augment places where the MFHD standard is weak.  One very 
obvious place is its failure to identify whether a pattern should be considered 
active or not, a problem this table easily rectifies.
4) The only 'serial' table currently in production use is serial.record_entry.  
Rather than altering or repurposing it, we can allow it to hang on as-is for 
those libraries (like us, Laurentian, and perhaps others) which have invested 
in it (we have around 1,650 of these records in use).  Doing so will allow us 
to phase in the new system on our own terms.

Well, this is getting longish, so I'm going to stop there and hopefully get the 
first reactions I am seeking.  Thank you very much for your time and thoughts.

Sincerely,
Dan Wells


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] ***SPAM*** Re: Serials Schema Proposal - Further De-emphasis of MARC as Record Format

2010-05-24 Thread Dan Wells
Scott,

Thanks for the reply.  See responses below.

 On 5/24/2010 at 12:25 PM, Scott McKellar m...@swbell.net wrote:
 
 Questions:
 
 1. What does serial.base look like?  For example, will it link to
 biblio.record_entry?
 
 2. Using last week's proposed schema as a baseline, how will the new
 tables relate to the new ones?   E.g. will serial.subscription link
 to serial.base?
 
 3. Will serial.subscription continue to link to serial.record_entry?
 I'm guessing that it won't, since you suggest that we retain
 serial.record_entry only as a transitional measure for the few
 libraries that already use it.

1-3.  Sorry for not being more clear, but I think serial.base (or whatever it 
is called) will be a direct replacement for serial.record_entry everywhere it 
is used in the new schema.  So for starters it will have all the fields in 
record_entry minus 'marc'.  We might consider going without the 'edit' related 
fields as well, since there won't be much to edit there anymore.

 
 4. Does serial.caption_and_pattern hold an MFHD record in the marc
 column?  If so, is there reason not to call that column mfhd?
 

4. The 'marc' column will hold the data for a single MARC field (in this case a 
MARC field as defined by the MFHD standard).  We could call it 'marc_field' or 
'mfhd_field' (or maybe 'marc_data' or 'mfhd_data').  It is common to refer to 
an arbitrary chunk of MARC-formatted data as 'marc', but I don't think the same 
really applies to referring to MFHD-specific MARC fields as simply 'mfhd'.  At 
least I never say that :)  All that said, I am fine with any of these names, 
'mfhd' included.

 5. Can we come up with a better name than serial.base?  It's too
 vague.  It could represent sodium hydroxide, the std::basic_string
 class, or Wright-Patterson Air Force Base.  Maybe serial.periodical?
 

5.  I agree that serial.base seems too generic.  Honestly it should probably be 
called 'serial.serial', as some would argue that the term 'periodical' doesn't 
technically include newspapers (and probably a few other minor things).  Again, 
however, I am willing to be more pragmatic than technical if people are opposed 
to a 'serial.serial' table.

 snip
 
 Scott McKellar

Thanks again,
Dan



[OPEN-ILS-DEV] ***SPAM*** Craftsman Bookbag Patch

2010-05-13 Thread Dan Wells
Hello all,

Attached is a simple patch to fix the bookbag menu not showing up in Craftsman. 
 This patch is against the 1_6_0 branch.  All it does is ID the correct node 
for unhiding purposes.

Dan

=
Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
   have the right to submit it under the open source license
   indicated in the file; or

(b) The contribution is based upon previous work that, to the best
   of my knowledge, is covered under an appropriate open source
   license and I have the right under that license to submit that
   work with modifications, whether created in whole or in part
   by me, under the same open source license (unless I am
   permitted to submit under a different license), as indicated
   in the file; or

(c) The contribution was provided directly to me by some other
   person who certified (a), (b) or (c) and I have not modified
   it.

(d) I understand and agree that this project and the contribution
   are public and that a record of the contribution (including all
   personal information I submit with it, including my sign-off) is
   maintained indefinitely and may be redistributed consistent with
   this project or the open source license(s) involved.

Signed-off-by: Daniel B. Wells (Calvin College)
=

Index: Open-ILS/web/opac/skin/craftsman/xml/page_rdetail.xml
===================================================================
--- Open-ILS/web/opac/skin/craftsman/xml/page_rdetail.xml   (revision 16428)
+++ Open-ILS/web/opac/skin/craftsman/xml/page_rdetail.xml   (working copy)
@@ -53,8 +53,8 @@
 					<a id='rdetail_place_hold'>&opac.holds.placeHold;</a>
 				</span>
 			</li>
-			<li class='hide_me'>
-				<span class='selectBox' id='rdetail_more_actions'>
+			<li class='hide_me' id='rdetail_more_actions'>
+				<span class='selectBox'>
 					<select id='rdetail_more_actions_selector' style='max-width: 11em;'>
 						<option value='start'>&rdetail.more;</option>
 						<option disabled='disabled'>--</option>


[OPEN-ILS-DEV] MFHD Commit to Serials Branch

2010-05-03 Thread Dan Wells
Hello all,

I have committed my MFHD code changes to the new Serials branch for review and 
comment from anyone interested.  You can see the changeset (with more details) 
here:  http://svn.open-ils.org/trac/ILS/changeset/16373 

At this point I cannot guarantee that the interface is 100% stable, as I 
realized while working with the code that some of the new methods may be better 
suited to be members of the Caption object, but I will certainly preserve and 
pass through where it makes sense to do so.  Still, there is plenty here to 
chew on, so I wanted to get this out right away.

I should also note that this module now requires a new, not-yet-released 
version of MARC::Record for the *field methods to work properly.  You can get 
the updated code from the SourceForge repository: 
http://sourceforge.net/projects/marcpm/develop

Please let me know if you have any corrections or suggestions.

Thanks,
Dan



-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] ***SPAM*** ***SPAM*** Re: MFHD Commit to Serials Branch

2010-05-03 Thread Dan Wells
Hello Bill,

 
 1.  I'd recommend keeping the seials-integration [sic] branch up to date
 with trunk.  It will make the final merge much simpler.  I've used svnmerge
 [1] in the past with success.
 

Sounds good, I'll do my best to keep up.

 2. I'd like to offer our assistance getting the serials schema defined in
 the wiki [2] into trunk.  (FWIW, I'm suggesting trunk instead of the
 integration branch for several reasons.  Dealing with upgrade scripts in a
 branch won't be fun, trunk already has a serials schema in progress, the
 integration branch can absorb the changes (via svnmerge), and there are
 other features related to serials and ACQ that are pending the integration
 of the serials schema).

I agree with this as well.

 
 What outstanding changes need to be made to the schema before it's brought
 in?

I've been tinkering with the schema quite a bit since our pre-conference 
conversation, and testing it against various scenarios.  I definitely 
understand better now the need to have multiple shelving_units per call_number, 
but also think we cannot give up supporting multiple copies per shelving unit.  
It is important for us to know relationally if two copies represent the same 
shelving_unit (that is, the same content).  So it would appear that 
shelving_unit is a corollary to neither call_number nor copy, but truly exists 
at some meta-level which falls in between.  For purposes of easier integration 
and backwards compatibility, I say we keep it there.  With that in mind, we 
really have two options:

1) a separate linking table to map copies to shelving units in a many-to-one 
relation
2) a new 'serial.copy' table which inherits from asset.copy, but adds a 
reference to shelving_unit

I think these are really two ways to do about the same thing, but since I do 
not have much experience with inherited tables, the wiki page has been updated 
to reflect the more traditional approach (#1).  I could rally behind either 
approach (or still another), though, so more thoughts here are certainly 
welcome.
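
A minimal sketch of option (1), with hypothetical table and column names (the
wiki page linked later in this message has the authoritative schema):

-- Illustrative only: many copies map to one shelving unit, never the reverse.
CREATE TABLE serial.unit_copy_map_sketch (
    id            SERIAL  PRIMARY KEY,
    shelving_unit INT     NOT NULL,   -- would reference serial.shelving_unit (id)
    copy          BIGINT  NOT NULL,   -- would reference asset.copy (id)
    CONSTRAINT one_unit_per_copy UNIQUE (copy)
);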

 * I recall some discussion of dropping the unique constraint on
 serial.shelving_unit.call_number.

This has been dropped, per above.

 * Do we need a serials.distribution.owning_lib column to cover the scenario
 where serial.record_entry is owned by the root org unit?

I think this sounds like a reasonable requirement.  It has been added.

 * I'm inclined to leave serial.claim out until we know how acq.claim might
 affect things.

Sounds fine.

 * What else?

I have made a few other changes.  Notably, first, I replaced all DATE columns 
with TIMESTAMP... columns, as DATEs were not playing nice in a few places.  
Second, I replaced 'is_expected' and 'is_received' with copies_expected and 
copies_received, as I believe this supports more use cases without adding much 
complexity to cases where the change would not be needed.  There are a few 
other similarly minor changes sprinkled throughout.  I intend to eventually add 
note tables for Shelving Units and Distributions (the Issuance note table is 
now there), but just haven't gotten around to it.  They should be 
straightforward.

Something not quite as straightforward, though, is the idea of having a 'copy 
template' of some sort.  I have added at least some comments pertaining to the 
idea of using the copy_transparency table, but as I understand it, that is not 
quite what was in mind when that table was created.  Still, it or something 
like it would work and is desirable.

All that said, I think getting this into trunk even more-or-less as-is would be 
a big step in the right direction.  I have updated the wiki page ( 
http://www.open-ils.org/dokuwiki/doku.php?id=acq:serials:basic_predicting_and_receiving
 ) to reflect the schema as it currently exists on my development instance 
(apologies in advance for any typos!).

As the week progresses, I fully expect to get the bulk of our current serials 
code into the branch.  I have been concentrating on getting the schema 
nailed-down for you guys first, so I really hope it helps on your end, and also 
that we can reach some quick consensus where needed.

Thanks,
Dan

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133



Re: [OPEN-ILS-DEV] ***SPAM*** EverGreen 1.6.0.4 on Ubuntu 9.04, installation almost finish, but failed on the last test steps.

2010-04-23 Thread Dan Wells
Hello,

I just did a quick line-by-line comparison with one of our test systems, and 
your log was functionally identical to ours up until the point where it ends.  
Is that the actual end of the log?  If you grep for ERR in the log directory, 
do you find anything in any of the other logs?  Your system is working well 
enough to request and get a seed value, but the second request which would 
return the actual authtoken fails.  For reference, our log continues with:

srfsh 2010-04-07 16:19:23 [DEBG:4154:osrf_stack.c:23:] Received message from 
transport code from 
open...@private.localhost/open-ils.auth_drone_libsysadmin-desktop_1270671563.291603_4394
 
srfsh 2010-04-07 16:19:23 [DEBG:4154:osrf_stack.c:50:] Transport handler 
received new message
from 
open...@private.localhost/open-ils.auth_drone_libsysadmin-desktop_1270671563.291603_4394
 to open...@private.localhost/_libsysadmin-desktop_1270670992.860239_4154 with 
body

[{__c:osrfMessage,__p:{threadTrace:1,locale:en-US,type:RESULT,payload:{__c:osrfResult,__p:{status:OK,statusCode:200,content:{ilsevent:0,textcode:SUCCESS,desc:
 
,pid:4394,stacktrace:oils_auth.c:312,payload:{authtoken:edd50e2c814df6476c6748c76aaccb50,authtime:420.00}},{__c:osrfMessage,__p:{threadTrace:1,locale:en-US,type:STATUS,payload:{__c:osrfConnectStatus,__p:{status:Request
 Complete,statusCode:205]

I am assuming you followed the steps as outlined on the Checking for Errors 
page you referenced.  You may also want to run 'top' and see if any processes 
are stuck at 100%.  If so, try killing them, do the Checking steps again, 
then see if it comes back.

Please keep us up to date on anything you discover.

Good luck,
Dan W.




Re: [OPEN-ILS-DEV] Well, it's that time again ...

2010-04-08 Thread Dan Wells
Hello Mike,

I look forward to any kind of feedback, but hopefully nothing along the lines 
of "you've got it all wrong", as we intend to demo at least a rudimentary 
interface based on this schema during our serials presentation at the 
conference.  So I guess speak now, or hold your peace for at least a few weeks 
:)

Thanks,
Dan 

 
 There's the stuff you put in, but in order to make serials work with
 Acq, and make items circulate using the issuance backend that we've
 been dreaming of, a ton more will be coming.  Dan Wells, expect a lot
 of discussion soon on your proposed schema! (
 http://open-ils.org/dokuwiki/doku.php?do=revisionsid=acq:serials:basic_predicting_and_receiving
  
 )
 
 IOW, I'm thinking of the
 subscription/distribution/issuance/shelving-unit stuff, in addition to
 the (quite excellent) MFHD functionality.
 

-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




[OPEN-ILS-DEV] Auto-generate Barcodes - Use case, possible patch

2010-03-24 Thread Dan Wells
Hello all,

This email will explain my library's need for auto-generated barcodes and also 
provide our currently deployed patch to accomplish them.  We are certainly open 
to alternatives for meeting our peculiar needs.

First, concerning the need, as a State and Federal repository, we house tens of 
thousands of government documents.  We do not take the time or expense to put 
actual barcodes on these items, and the records are bulk-loaded.  We want these 
items to have usable IDs (entries in the asset.copy barcode field), but we do 
not particularly care what they are until they are needed for use (which is 
rarely).  Because of the bulk-loading requirement, and because of the unique 
constraint on item IDs (barcodes), barcode auto-generation seems like the best 
solution for us.

With that in mind, I set about creating a simple trigger for this table.  It 
creates barcodes in the form of '@@record_id.digit', e.g. @@90234.1, @@90234.2 
for two items with auto-generated copy barcodes attached to a call number 
attached to a record with an id of 90234.  Any form of the word 'AUTO' in the 
barcode field will cause this trigger to fire.  This is similar to how our old 
system worked, and we never had problems with it.
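
In rough outline (a sketch of the idea only, not the attached auto_barcode.sql
itself), the trigger does something along these lines:

-- Sketch; a production version also needs to handle collisions within a
-- single bulk INSERT statement.
CREATE OR REPLACE FUNCTION asset.autogenerate_placeholder_barcode()
RETURNS TRIGGER AS $$
DECLARE
    rec_id  BIGINT;
    seq_no  INT;
BEGIN
    -- Only fire when some form of 'AUTO' was entered in the barcode field
    IF NEW.barcode ILIKE '%auto%' THEN
        -- Find the bib record this copy hangs from
        SELECT cn.record INTO rec_id
          FROM asset.call_number cn
         WHERE cn.id = NEW.call_number;

        -- Next free suffix among this record's placeholder barcodes
        SELECT COUNT(*) + 1 INTO seq_no
          FROM asset.copy cp
          JOIN asset.call_number cn2 ON (cn2.id = cp.call_number)
         WHERE cn2.record = rec_id
           AND cp.barcode LIKE '@@' || rec_id || '.%';

        NEW.barcode := '@@' || rec_id || '.' || seq_no;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER autogenerate_placeholder_barcode
    BEFORE INSERT ON asset.copy
    FOR EACH ROW EXECUTE PROCEDURE asset.autogenerate_placeholder_barcode();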

So, the questions are:

1) Do we want this as a feature, or is there a more forward-thinking solution 
out there?

2) Do we like this auto-generation format, or should we go with something else?

3) Are there necessary enhancements for the proposed trigger code?

Thanks,
Dan


-- 
*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




auto_barcode.sql
Description: Binary data


Re: [OPEN-ILS-DEV] Suggested Patch: Block Renews for Holds

2010-03-10 Thread Dan Wells
Hello again,

Attached you will now find two versions of the patch, one for rel_1_6_0 and one 
for trunk.  The rel_1_6_0 version only differs from the first version in adding 
a new matching permission to the data seed file.  The trunk version populates 
the config.org_unit_setting_type table rather than the org_unit_settings.xhtml 
file.  I have less confidence in the trunk version due to lack of experience, 
and the permissions table adds should be double-checked in both cases, as trunk 
and rel_1_6_0 populate these tables quite differently.

Thanks,
Dan

 On 3/10/2010 at 1:26 PM, Dan Wellsd...@calvin.edu (Dan Wells) wrote:
 Hello all,
 
 The attached patch is a very simple means of allowing an option for the 
 blocking of renews if an item is targeted for a hold.  There was some 
 discussion on IRC about the 'best' way to do this, but other ways (in my 
 opinion) ended up as just too far-reaching for what should be a simple 
 feature.
 
 This is tested and in production at our in-db-circ site, but I think it will 
 work for script-based-circ as well.  The patch targets rel_1_6_0.
 
 Thanks,
 Dan
 
 =
 Developer's Certificate of Origin 1.1
 
 By making a contribution to this project, I certify that:
 
 (a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
 
 (b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
 
 (c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
 
 (d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
 
 Signed-off-by: Daniel B. Wells
 =
 
 



block_renews_for_holds-rel_1_6_0-with-seed.diff
Description: Binary data


block_renews_for_holds-trunk.diff
Description: Binary data


[OPEN-ILS-DEV] Discussion wanted: Volumes (Parts) in Evergreen

2010-03-01 Thread Dan Wells
Hello all,

After trying very hard to convince myself otherwise, it remains my belief that 
Evergreen needs to augment its current item organization hierarchy with an 
additional layer for proper volume (part) support.  My only intention in 
writing this email is to spur a larger discussion on this matter.  
For the purpose of this discussion, I will define "volume" as a portion of a 
bibliographic whole.

As it stands, the asset.call_number table is being overloaded to function as 
both a place to hold the call number and a place to hold volume information.  
Under non-volume circumstances, the call number (and subsequently its attached 
copy-items*) are representative of the bibliographic whole, and everything is 
simple, predictable, and working.  However, when one introduces volume 
information into the call number field, two problems result.

First, it becomes difficult (computationally) to group together the 
bibliographic wholes.  For instance, consider a minority case of three copies* 
of John Calvin's Institutes, often sold in a multi-volume set.  It is not 
uncommon for our library to have the same exact bibliographic entity in 
multiple collections serving different user needs.  Currently in Evergreen we 
might end up with something like:

Title: Institutes
Call Number 1: CN123.A123 V.1
Call Number 2: CN123.A123 V.2
Call Number 3: P345.A345 V.1
Call Number 4: P345.A345 V.2
Call Number 5: J789.A789 V.1
Call Number 6: J789.A789 V.2

This sort of setup is generally still usable, but what if we want to answer 
the seemingly simple question of "How many copies* of this title do we have?"  
We cannot easily do so, as the computer does not understand the implied 
relationship between call numbers which are only related by being mostly the 
same.  We also need to be very careful when doing simple operations such as 
changing a call number or discarding a copy*, as we need to do the same action 
multiple times.

A more complete setup would simply add a new layer to clarify things:

Title: Institutes
Call Number 1:CN123.A123
Part: V.1
Part: V.2
Call Number 2:P345.A345
Part: V.1
Part: V.2
...

I am using the word "Part" here to imply that this layer would be useful for 
other titles which have multiple parts that are not necessarily volumes.

The second volume-related problem involves any operation which may be specific 
to a certain volume, such as holds.  If all items from the above example are 
checked out, but I need V.1, what do I place a hold on?  To address this 
concern, we would need to add some sort of meta-volume table.  Where call 
numbers now expressly represent a grouping of a bibliographic whole, 
meta-volumes are a separate but complementary grouping of like parts:

Title: Institutes
Meta-volume 1:
Part: V.1 (CN123.A123)
Part: V.1 (P345.A345)
Part: V.1 (J789.A789)
Meta-volume 2:
Part: V.2 (CN123.A123)
Part: V.2 (P345.A345)
Part: V.2 (J789.A789)

This concept is in most ways independent of the first, and could be dealt with 
separately, but I am introducing it here because I am well aware that there are 
many other ways to model all of these relationships, and recognizing this need 
might inspire a better overall model.  I have tried to focus my thoughts on a 
minimal disruption to the current structure, but perhaps this discussion will 
conclude that a larger overhaul is needed.
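
To make the shape of this concrete, here is a very rough sketch; every table
and column name below is hypothetical and only meant to illustrate the layers
described above:

-- A "part" sits between the call number and its copy-items.
CREATE TABLE asset.part_sketch (
    id           SERIAL  PRIMARY KEY,
    call_number  BIGINT  NOT NULL,   -- would reference asset.call_number (id)
    label        TEXT    NOT NULL    -- e.g. 'V.1', 'V.2'
);

-- A "meta-volume" groups like parts (e.g. every V.1) across call numbers,
-- giving holds something volume-specific to target.
CREATE TABLE asset.meta_volume_sketch (
    id      SERIAL  PRIMARY KEY,
    record  BIGINT  NOT NULL,        -- would reference biblio.record_entry (id)
    label   TEXT    NOT NULL         -- e.g. 'V.1'
);

CREATE TABLE asset.meta_volume_part_map_sketch (
    id           SERIAL  PRIMARY KEY,
    meta_volume  INT     NOT NULL,   -- would reference asset.meta_volume_sketch (id)
    part         INT     NOT NULL    -- would reference asset.part_sketch (id)
);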

At this point, my primary concern is how best to deal with the 95% of cases 
which do not need this additional layer.  Ideally any changes made would 
augment the current model in an unobtrusive way.

I hope that this message can inspire an interesting and fruitful dialog.

Sincerely,
Dan Wells

* (The term "copies" is also overloaded in this discussion, as it could mean 
the physical manifestation of a single volume (part) or the entire whole, but I 
will do my best to keep things clear by referring to rows in asset.copy as 
"copy-items".)



Re: [OPEN-ILS-DEV] reshelving_complete 404 error

2010-02-22 Thread Dan Wells
Hello,

See bug report comments for more information and a patch:

https://bugs.launchpad.net/evergreen/+bug/525950

Dan


 On 2/22/2010 at 2:13 PM, Benjamin Shum bs...@biblio.org wrote:
 Evergreen version: 1.6.0.2
 OpenSRF version: 1.2.2
 PostgreSQL version: 8.3.8
 Linux distribution: Debian Lenny 5.0.3, though it is actually image 
 2.6.26-2-xen-amd64, a virtual machine
 
 The problem we've encountered is when attempting to run the 
 reshelving_complete.srfsh script from /openils/bin. The error message we got 
 was this:
 
 open...@acorn:/openils/conf$ /openils/bin/./reshelving_complete.srfsh
 srfsh# #!/openils/bin/srfsh
 srfsh# request open-ils.storage 
 open-ils.storage.action.circulation.reshelving.complete 24h
 
 Received Exception:
 Name: osrfMethodException
 Status: Method [open-ils.storage.action.circulation.reshelving.complete] not 
 found for OpenILS::Application::Storage
 Status: 404
 
 Request Completed Successfully
 Request Time in seconds: 0.024000
 
 open...@acorn:/openils/conf$
 
 After some discussion on the IRC, working with phasefx, it was traced to the 
 following recent change:
 http://svn.open-ils.org/trac/ILS/changeset/15418/branches/rel_1_6_0/Open-ILS/s
  
 rc/perlmods/OpenILS/Application/Storage/Publisher/action.pm
 
 When we reverted this change, it seemed to fix the problem.
 
 Any suggestions?
 
 Benjamin Shum
 Open Source Software Coordinator
 Bibliomation, Inc.
 32 Crest Road
 Middlebury, CT 06762
 203-577-4070 ext. 113



Re: [OPEN-ILS-DEV] So close I can TASTE it

2010-02-01 Thread Dan Wells
Hi Mike,

It's a small thing, but any chance of incorporating the fix for 
MARC21slim2MODS32.xsl which I posted about on 1/15?  I understand if it is 
better to evaluate all the changes to the LOC version and incorporate them at 
once at a later date.

Thanks,
Dan
-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 2/1/2010 at 10:57 AM, Mike Rylander mrylan...@gmail.com wrote:
 I've gone back through my pending-patch list, and I believe I've
 applied all that are safe to apply.  There are 3 or 4 remaining, but I
 don't feel safe applying them without a chance to test, and we're
 already past critical mass on bug fixes for 1.6.0.1.  Dan Scott got a
 whole pile of new and updated translations in last night, which was
 the last big part (AFAICT) we needed.
 
 So, this is a final call for fixes or backporting before we cut
 1.6.0.1.  Speak now or hold your peace for a little while.



Re: [OPEN-ILS-DEV] So close I can TASTE it

2010-02-01 Thread Dan Wells
Thanks for the explanation.  Waiting on this is fine with me, I just wanted to 
make sure it was skipped intentionally rather than overlooked.

Dan

 On 2/1/2010 at  2:27 PM, Mike Rylander mrylan...@gmail.com wrote:
 On Mon, Feb 1, 2010 at 1:49 PM, Dan Wells d...@calvin.edu wrote:
 Hi Mike,

 It's a small thing, but any chance of incorporating the fix for 
 MARC21slim2MODS32.xsl which I posted about on 1/15?  I understand if it is 
 better to evaluate all the changes to the LOC version and incorporate them at 
 once at a later date.
 
 If it would be acceptable to get it into 1.6.1.0 I'd prefer to hold
 off.  The main thing is building the unified in-db version that
 includes a normally-external stylesheet, and I would rather tackle it
 all at once.  I'd like to get 1.6.1.0 out within the next month or so.
 
 --miker
 

 Thanks,
 Dan
 --
 



Re: [OPEN-ILS-DEV] So close I can TASTE it

2010-02-01 Thread Dan Wells
Another minor bug in 1.6 is the lack of a defined 'MERGE_USERS' permission in 
the permission.perm_list table (needed for the patron account merging 
functionality).  I had mentioned it on IRC but not officially reported it 
anywhere, so I have now opened a bug for it.

Dan

 On 2/1/2010 at 10:57 AM, Mike Rylander mrylan...@gmail.com wrote:
 I've gone back through my pending-patch list, and I believe I've
 applied all that are safe to apply.  There are 3 or 4 remaining, but I
 don't feel safe applying them without a chance to test, and we're
 already past critical mass on bug fixes for 1.6.0.1.  Dan Scott got a
 whole pile of new and updated translations in last night, which was
 the last big part (AFAICT) we needed.
 
 So, this is a final call for fixes or backporting before we cut
 1.6.0.1.  Speak now or hold your peace for a little while.



[OPEN-ILS-DEV] Vandelay Patch - Resubmit with DCO

2010-01-21 Thread Dan Wells
Resubmit with DCO

=
Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
   have the right to submit it under the open source license
   indicated in the file; or

(b) The contribution is based upon previous work that, to the best
   of my knowledge, is covered under an appropriate open source
   license and I have the right under that license to submit that
   work with modifications, whether created in whole or in part
   by me, under the same open source license (unless I am
   permitted to submit under a different license), as indicated
   in the file; or

(c) The contribution was provided directly to me by some other
   person who certified (a), (b) or (c) and I have not modified
   it.

(d) I understand and agree that this project and the contribution
   are public and that a record of the contribution (including all
   personal information I submit with it, including my sign-off) is
   maintained indefinitely and may be redistributed consistent with
   this project or the open source license(s) involved.

Signed-off-by: Daniel B. Wells
=




Hello,

I have been doing a lot of experimenting with Vandelay the last few weeks and 
have traced many of my failures to one distinct bug.  I finally noticed 
yesterday why Item Import worked for the 'admin' user but for nobody else.  It 
turns out there is a bug in one of the database functions where it treats the 
queue 'owner' column as an org_unit ID when in fact it is a usr id.  Of course 
since 'admin' is ID 1 out of the box and there is an org_unit 1 as well, this 
bug is transparent to admin and easy to miss.

Well, I started by making a few changes to the function to address this, but 
soon realized that what we really wanted to do was base the import on the 
Import Def chosen by the user when the records are loaded, and it would be 
ideal if the Import Def was somehow associated with the queue at the time of 
import.  I started poking around in the schema, and lo and behold, the 
vandelay.bib_queue table already had an 'item_attr_def' linking column for this 
very purpose which was going unused!

So, the attached patch finally puts this column to use by doing the following:

1) Edits the create_bib_queue() sub in Vandelay.pm to accept the item import 
attribute definition ID as an argument and save it appropriately.

2) Edits the vandelay.js interface file to accept and send the import 
definition ID to create_bib_queue() when creating a queue.

3) Edits fm_IDL.xml to add the 'item_attr_def' field (and fix a small labeling 
error).

4) Edits 012.schema.vandelay.sql to replace the buggy function that started all 
this with a now simpler, working version.  The change is actually smaller than 
it looks due to removing one nested loop and the resulting indentation change.

Questions and comments welcome!

Thanks,
Dan
-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu 
Hekman Library at Calvin College
616.526.7133





Vandelay.diff
Description: Binary data


[OPEN-ILS-DEV] Vandelay Item Import - Patch

2010-01-20 Thread Dan Wells
Hello,

I have been doing a lot of experimenting with Vandelay the last few weeks and 
have traced many of my failures to one distinct bug.  I finally noticed 
yesterday why Item Import worked for the 'admin' user but for nobody else.  It 
turns out there is a bug in one of the database functions where it treats the 
queue 'owner' column as an org_unit ID when in fact it is a usr id.  Of course 
since 'admin' is ID 1 out of the box and there is an org_unit 1 as well, this 
bug is transparent to admin and easy to miss.

Well, I started by making a few changes to the function to address this, but 
soon realized that what we really wanted to do was base the import on the 
Import Def chosen by the user when the records are loaded, and it would be 
ideal if the Import Def was somehow associated with the queue at the time of 
import.  I started poking around in the schema, and lo and behold, the 
vandelay.bib_queue table already had an 'item_attr_def' linking column for this 
very purpose which was going unused!

So, the attached patch finally puts this column to use by doing the following:

1) Edits the create_bib_queue() sub in Vandelay.pm to accept the item import 
attribute definition ID as an argument and save it appropriately.

2) Edits the vandelay.js interface file to accept and send the import 
definition ID to create_bib_queue() when creating a queue.

3) Edits fm_IDL.xml to add the 'item_attr_def' field (and fix a small labeling 
error).

4) Edits 012.schema.vandelay.sql to replace the buggy function that started all 
this with a now simpler, working version.  The change is actually smaller than 
it looks due to removing one nested loop and the resulting indentation change.

Questions and comments welcome!

Thanks,
Dan
-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Vandelay.diff
Description: Binary data


[OPEN-ILS-DEV] Evergreen SQL Help

2010-01-19 Thread Dan Wells
Hello,

Let me start by saying SQL is not my strong suit, and Postgres makes it even 
weaker.  What I am trying to do is write a single Postgres query/function which 
returns all of a user's work_ous and their ancestors without duplicates.  I see 
that actor.org_unit_ancestors() is there, and selecting the work_ous is simple 
enough, but how can one combine them into a single result set?  It seems 
relatively simple, but I just can't get it right.
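
To make the question concrete, here is a rough, untested sketch of the sort of thing I am after (this assumes the stock permission.usr_work_ou_map table, and that actor.org_unit_ancestors() returns SETOF actor.org_unit; the idea is that UNION, rather than UNION ALL, drops the duplicates):

-- ancestors of each of the user's work_ous ...
SELECT (actor.org_unit_ancestors(wom.work_ou)).id
  FROM permission.usr_work_ou_map wom
 WHERE wom.usr = 1        -- hypothetical user id
UNION
-- ... plus the work_ous themselves, in case the ancestors set excludes them
SELECT wom.work_ou
  FROM permission.usr_work_ou_map wom
 WHERE wom.usr = 1;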

Thanks,
Dan

-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133




Re: [OPEN-ILS-DEV] Small Bug in Automatic Lost Fee Void - Circulate.pm - Patch

2010-01-18 Thread Dan Wells
Yes, this appears to be a better solution in all respects.

Thanks Bill.
-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133


 On 1/18/2010 at  1:07 PM, Bill Erickson erick...@esilibrary.com wrote:
 
 I'd like to suggest an alternate approach.  The attached diff just moves the
 xact_finish logic in checkin_handle_circ() down below the call to
 checkin_handle_lost(), so that it occurs only once, after all billings, etc.
 have been modified.
 
 -b
 
 
 -- 
 Bill Erickson
 | VP, Software Development & Integration
 | Equinox Software, Inc. / The Evergreen Experts
 | phone: 877-OPEN-ILS (673-6457)
 | email: erick...@esilibrary.com 
 | web: http://esilibrary.com 
 
 Please come by and visit the Equinox team
 and learn more about Evergreen
 ALA MidWinter
 January 15-18, 2010
 booth # 2064



[OPEN-ILS-DEV] Small display bug - MARC21slim2MODS32.xsl - patch

2010-01-15 Thread Dan Wells
Hello,

We were running into a problem with the subject sidebar where part names
were displaying with no space.  For instance:

630 00  ‡aBible. ‡pO.T. ‡xCriticism, interpretation, etc., Jewish ‡xHistory
‡y19th century.

displays as:

BibleO.T

This was traced to an error in the XSL file where the part template was
called inside of the title tag rather than immediately after it.  The patch
fixes this one issue, but the changelog at LOC mentions a few other small
changes which might be worth throwing in as well:

http://www.loc.gov/standards/mods/v3/MARC21slim2MODS3-2.xsl 

Thanks,
Dan
-- 

*
Daniel Wells, Library Programmer Analyst d...@calvin.edu
Hekman Library at Calvin College
616.526.7133



MARC21slim2MODS32.xsl.diff
Description: Binary data


[OPEN-ILS-DEV] Small Bug in Automatic Lost Fee Void - Circulate.pm - Patch

2010-01-14 Thread Dan Wells
Hello all,

We have noticed a small bug in the optional automatic voiding of Lost Fees.  
The transaction is not marked as finished, as I believe it should be if there 
are no other billings (it still shows in the LOST area on the user's account).  
I am wondering if more logic will be needed for cases where the fee has been 
paid and a refund is generated, but we haven't run into that yet :)

This patch is against trunk.  For bug fixes, does it make sense (would it be 
helpful) to provide a patch for the current branch as well?

Dan


=
Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
   have the right to submit it under the open source license
   indicated in the file; or

(b) The contribution is based upon previous work that, to the best
   of my knowledge, is covered under an appropriate open source
   license and I have the right under that license to submit that
   work with modifications, whether created in whole or in part
   by me, under the same open source license (unless I am
   permitted to submit under a different license), as indicated
   in the file; or

(c) The contribution was provided directly to me by some other
   person who certified (a), (b) or (c) and I have not modified
   it.

(d) I understand and agree that this project and the contribution
   are public and that a record of the contribution (including all
   personal information I submit with it, including my sign-off) is
   maintained indefinitely and may be redistributed consistent with
   this project or the open source license(s) involved.

Signed-off-by: Daniel B. Wells
=



Circulate.pm.lost_void.diff
Description: Binary data

