Re: [CODE4LIB] Auto discovery of Dewey, UDC
Hi Sergio, As part of eXtensible Catalog we developed a Dewey module for Drupal, which takes a Dewey number and uses OCLC's dewey.info service to fetch the textual description of each part. When the module was created the service covered only the top three levels of the classification system; it has since been extended and now goes deeper. You can find the source here: http://cgit.drupalcode.org/xc/tree/xc_dewey/xc_dewey.module?h=7.x-1.x Maybe it helps you. Regarding UDC: it is a much harder task. When I worked with it, I ran into a blocking problem: UDC was not licensed as freely usable, and I was not able to get a licence to use it in an open source project. There were some other problems as well. UDC changes from time to time, which sometimes means that a given classification code means one thing at one point in time and another thing some years later. The MARC catalog I worked with did not contain any information about the UDC versions, so the accuracy of the tool was not guaranteed (of course you can do some intelligent guessing). The last problem was that, in contrast to the Dewey classification, UDC sometimes contains very lengthy descriptions instead of one or two words. Semantically that is fine, but it makes the UI design a bit harder, and if you want to search the textual descriptions, you will sometimes end up with a noisy result set. That said, handling the operators, the subclasses, and all the nice things UDC provides is a very interesting challenge. Cheers, Péter 2015-06-12 12:59 GMT+02:00 Sergio Letuche code4libus...@gmail.com: thank you very much for your quick reply, dear Stefano, I appreciate it 2015-06-12 13:47 GMT+03:00 Stefano Bargioni bargi...@pusc.it: Hi, Sergio: maybe this article [1 abstract] [2 English text] can give you some basic ideas. We added a lot of DDC info in our Koha catalog two years ago. HTH. 
Stefano [1] http://leo.cineca.it/index.php/jlis/article/view/8766 [2] http://leo.cineca.it/index.php/jlis/article/view/8766/8060 On 12 Jun 2015, at 12:03, Sergio Letuche code4libus...@gmail.com wrote: hello community! we are facing a challenging issue: we need to complete the Dewey and UDC info for a vast number of records. Has anyone had any experience with this? We need some way (via modeling? Mahout?) to try to discover these values based on text found in the records' metadata, and then auto-complete them. I would appreciate any feedback, whether there is an open source tool you have used for this purpose, or a best practice you are aware of for this task. Best -- Péter Király software developer GWDG, Göttingen - Europeana - eXtensible Catalog - The Code4Lib Journal http://linkedin.com/in/peterkiraly
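To illustrate the lookup idea behind the Dewey module mentioned above: before a caption can be fetched for each level, a Dewey number has to be expanded into its broader classes. A minimal sketch in Python (illustrative only; the actual xc_dewey module is PHP, and these expansion rules are a simplified assumption, not the module's logic):

```python
def dewey_hierarchy(number):
    """Expand a Dewey number into its broader classes, from the main
    class down to the number itself, e.g. '512.5' -> ['500', '510',
    '512', '512.5']. Each level could then be looked up for its caption."""
    main, _, decimals = number.partition(".")
    main = main.zfill(3)  # '25' -> '025'
    levels = [main[0] + "00", main[:2] + "0", main]
    # add one level per decimal digit: '025.04' -> '025.0', '025.04'
    for i in range(1, len(decimals) + 1):
        levels.append(main + "." + decimals[:i])
    # drop duplicates such as '500', '500', '500' for the input '500'
    seen, out = set(), []
    for lvl in levels:
        if lvl not in seen:
            seen.add(lvl)
            out.append(lvl)
    return out
```

The resulting list gives the path from the broadest class to the full number, which is the order a UI would show the captions in.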
Re: [CODE4LIB] How to measure quality of a record
Hi, I thought a lot about this question in the past, and my answer is: yes, you can apply statistical formulas. But you should know each field of your records well: what kind of information it can contain, and whether you can set rules about it that you can apply to the individual records. Some factors which are important:
- the completeness of the records: the ratio of filled to unfilled fields
- whether the value of an individual field matches the rules or not (say you expect a number in the range of 1 to 5, but you get 6)
- the probability that a given field value is unique
- the probability that a record is not a duplicate of another record
Some concrete examples from my Europeana past:
- there are mandatory fields, and if they are empty, the quality goes down
- there are fields which should match a known standard, for example ISO language codes - you can apply rules to decide whether the value fits or not
- the data provider field is free text - no formal rule - but no individual record should contain a unique value, and when you import several thousand new records, they should not contain more than a couple of new values
- there are fields which should contain URLs or emails or dates; we can check whether they fit formal rules, and whether their content is in a reasonable range (we should not have records created in the future, for example)
- you can measure whether the optional fields are filled, and in what ratio
At the end you will have a couple of measurements, and you can apply weighting to calculate a final classification number. You can do a lot to set up rules with faceted search, and of course you can use statistical tools such as R or Julia, which help you get a picture of the distribution of the values. Hope it helps. Regards, Péter -- Péter Király software developer Göttingen Society for Scientific Data Processing - http://gwdg.de eXtensible Catalog - http://eXtensibleCatalog.org
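The measurements described above can be combined into a single score. A minimal sketch in Python, assuming a flat dict per record; the field names, rules, and weights are illustrative, not from any Europeana code:

```python
import re

def completeness(record, fields):
    """Ratio of filled fields to all expected fields."""
    filled = sum(1 for f in fields if record.get(f))
    return filled / len(fields)

def rule_checks(record):
    """Apply simple per-field rules; return the ratio of passed checks.
    All three rules here are made-up examples of the kinds listed above."""
    checks = [
        bool(record.get("identifier")),                        # mandatory field
        record.get("language") in {"en", "de", "fr", "hu"},    # known code list
        bool(re.match(r"^https?://", record.get("url", ""))),  # URL format
    ]
    return sum(checks) / len(checks)

def quality_score(record, fields, w_completeness=0.5, w_rules=0.5):
    """Weighted combination of the individual measurements."""
    return (w_completeness * completeness(record, fields)
            + w_rules * rule_checks(record))
```

Uniqueness and duplicate detection would need the whole collection, not one record, so they are left out of this sketch; in practice they become two more weighted terms in `quality_score`.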
[CODE4LIB] New HTTP method proposed: SEARCH
Dear all, If you have ever created a REST API, you may have run into the question of whether searching should be implemented via the GET or the POST method. There has been a lot of debate around this, supported by different theoretical considerations. These debates may be settled soon, because last week Julian Reschke, Ashok Malhotra and James M. Snell submitted an RFC proposal, HTTP SEARCH Method, to the Internet Engineering Task Force (IETF) for creating a new method, called SEARCH, in HTTP 1.1. According to the proposal there will also be an Accept-Search response header field to notify clients that a given server supports the new method. An example:

SEARCH /contacts HTTP/1.1
Host: example.org
Content-Type: text/query
Accept: text/csv

select surname, givenname, email limit 10

The full document is available here: http://www.ietf.org/id/draft-snell-search-method-00.txt Regards, Peter -- Péter Király software developer Göttingen Society for Scientific Data Processing - http://gwdg.de eXtensible Catalog - http://eXtensibleCatalog.org
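The example request above can be assembled mechanically. A small Python sketch that builds the raw request text (no network I/O); the Content-Length header and the function itself are my additions, not part of the draft:

```python
def build_search_request(host, path, query, content_type="text/query",
                         accept="text/csv"):
    """Assemble a raw HTTP/1.1 SEARCH request as a string, mirroring the
    example from the draft. Illustrative only: a real client would also
    need to open a socket and send these bytes."""
    headers = [
        f"SEARCH {path} HTTP/1.1",
        f"Host: {host}",
        f"Content-Type: {content_type}",
        f"Accept: {accept}",
        f"Content-Length: {len(query.encode('utf-8'))}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + query
```

Because SEARCH is just another method token, most HTTP client libraries that accept an arbitrary method string could send it without modification.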
[CODE4LIB] Seminar Programme: Göttingen Dialog in Digital Humanities (2015)
Dear code4lib, earlier this week we published here the call for papers for the Göttingen Dialog in Digital Humanities. Now we have a full programme; let me announce it as well. The dialogs take place on Tuesdays at 17:00 during the summer semester (from April 21st until July 14th). The venue of the seminars, at the Göttingen Centre for Digital Humanities (GCDH), is to be announced. The centre's address is: Heyne-Haus, Papendiek 16, D-37073 Göttingen.

The agenda:

April 21 - Yuri Bizzoni, Angelo Del Grosso, Marianne Reboul (University of Pisa, Italy): Diachronic trends in Homeric translations
April 28 - Stefan Jänicke, Judith Blumenstein, Michaela Rücker, Dirk Zeckzer, Gerik Scheuermann (Universität Leipzig, Germany): Visualizing the Results of Search Queries on Ancient Text Corpora with Tag Pies
May 5 - Jochen Tiepmar (Universität Leipzig, Germany): Release of the MySQL based implementation of the CTS protocol
May 12 - Patrick Jähnichen, Patrick Oesterling, Tom Liebmann, Christoph Kurras, Gerik Scheuermann, Gerhard Heyer (Universität Leipzig, Germany): Exploratory Search Through Visual Analysis of Topic Models
May 19 - Christof Schöch (Universität Würzburg, Germany): Topic Modeling Dramatic Genre
May 26 - Peter Robinson (University of Saskatchewan, Canada): Some principles for making of collaborative scholarly editions in digital form
June 2 - Jürgen Enge, Heinz Werner Kramski, Susanne Holl (HAWK Hildesheim, Germany): »Arme Nachlassverwalter...« Herausforderungen, Erkenntnisse und Lösungsansätze bei der Aufbereitung komplexer digitaler Datensammlungen
June 9 - Daniele Salvoldi (Freie Universität Berlin, Germany): A Historical Geographic Information System (HGIS) of Nubia based on the William J. Bankes Archive (1815-1822)
June 16 - Daniel Burckhardt (HU Berlin, Germany): Comparing Disciplinary Patterns: Gender and Social Networks in the Humanities through the Lens of Scholarly Communication
June 23 - Daniel Schüller, Christian Beecks, Marwan Hassani, Jennifer Hinnell, Bela Brenger, Thomas Seidl, Irene Mittelberg (RWTH Aachen University, Germany; University of Alberta, Canada): Similarity Measuring in 3D Motion Capture Models of Co-Speech Gesture
June 30 - Federico Nanni (University of Bologna, Italy): Reconstructing a website’s lost past - Methodological issues concerning the history of www.unibo.it
July 7 - Edward Larkey (University of Maryland, USA): Comparing Television Formats: Using Digital Tools for Cross-Cultural Analysis
July 14 - Francesca Frontini, Amine Boukhaled, Jean-Gabriel Ganascia (Laboratoire d’Informatique de Paris 6, Université Pierre et Marie Curie): Mining for characterising patterns in literature using correspondence analysis: an experiment on French novels

As announced in the Call for Papers, the dialogs will take the form of a 45-minute presentation in English, followed by 45 minutes of discussion and student participation. Due to logistic and time constraints, the 2015 dialog series will not be video-recorded or live-streamed. A summary of the talks, together with photographs and, where available, slides, will be uploaded to the GCDH/eTRAP website. For this reason, presenters are encouraged, but not obligated, to prepare slides to accompany their papers. Please also consider that the €500 award for best paper will be awarded on the basis of both the quality of the paper *and* the delivery of the presentation. Camera-ready versions of the papers must reach Gabriele Kraft at gkraft(at)gcdh(dot)de by April 30. The papers will not be uploaded to the GCDH/eTRAP website but, as previously announced, published as a special issue of Digital Humanities Quarterly (DHQ). For this reason, papers must be submitted in an editable format (e.g. .docx or LaTeX), not as PDF files. A small budget for travel cost reimbursements is available. Everybody is welcome to join in. If anyone would like to tweet about the dialogs, the Twitter hashtag of this series is #gddh15. For any questions, do not hesitate to contact gkraft(at)gcdh(dot)de. For further information and updates, visit http://www.gcdh.de/en/events/gottingen-dialog-digital-humanities/ or http://etrap.gcdh.de/?p=633 We look forward to seeing you in Göttingen! The GDDH Board (in alphabetical order): Camilla Di Biase-Dyson (Georg August University Göttingen) Marco Büchler (Göttingen Centre for Digital Humanities) Jens Dierkes (Göttingen eResearch Alliance) Emily Franzini (Göttingen Centre for Digital Humanities) Greta Franzini (Göttingen Centre for Digital Humanities) Angelo Mario Del Grosso (ILC-CNR, Pisa, Italy) Berenike Herrmann (Georg August University Göttingen) Péter Király (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen) Gabriele Kraft (Göttingen Centre for Digital Humanities) Bärbel Kröger (Göttingen Academy of Sciences and Humanities) Maria Moritz (Göttingen Centre for Digital Humanities) Sarah Bowen Savant (Aga Khan University, London, UK) Oliver Schmitt (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen) Sree Ganesh
[CODE4LIB] CfP: Göttingen Dialog in Digital Humanities
Dear Code4Listers, I'd like to share a CfP with you from the University of Göttingen, Germany. / Call for Papers: Göttingen Dialog in Digital Humanities The Göttingen Dialog in Digital Humanities has established a new forum for the discussion of digital methods applied to all areas of the Humanities, including Classics, Philosophy, History, Literature, Law, Languages, Social Science, Archaeology and more. The initiative is organized by the Göttingen Centre for Digital Humanities (GCDH). The dialogs will take place every Tuesday at 5pm from late April until early July 2015 in the form of 90-minute seminars. Presentations will be 45 minutes long and delivered in English, followed by 45 minutes of discussion and student participation. Seminar content should be of interest to humanists, digital humanists, librarians and computer scientists. We invite submissions of complete papers describing research which employs digital methods, resources or technologies in an innovative way in order to enable a better or new understanding of the Humanities, both in the past and present. Themes may include text mining, machine learning, network analysis, time series, sentiment analysis, agent-based modelling, or efficient visualization of big, humanities-relevant data. Papers should be written in English. Successful papers will be submitted for publication as a special issue of Digital Humanities Quarterly (DHQ). Furthermore, the author(s) of the best paper will receive a prize of €500, which will be awarded on the basis of both the quality and the delivery of the paper. A small budget for travel cost reimbursements is available. Full papers should be sent by March 20th to gkr...@gcdh.de in Word (.docx) format. There is no length limit, but the suggested minimum is 5000 words. The full programme, including the venue of the dialogs, will be sent to you by April 1st. 
For any questions, do not hesitate to contact gkr...@gcdh.de GDDH Board (in alphabetical order): Camilla Di Biase-Dyson (Georg August University Göttingen) Marco Büchler (Göttingen Centre for Digital Humanities) Jens Dierkes (Göttingen eResearch Alliance) Emily Franzini (Göttingen Centre for Digital Humanities) Greta Franzini (Göttingen Centre for Digital Humanities) Angelo Mario Del Grosso (ILC-CNR, Pisa, Italy) Berenike Herrmann (Georg August University Göttingen) Péter Király (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen) Gabriele Kraft (Göttingen Centre for Digital Humanities) Bärbel Kröger (Göttingen Academy of Sciences and Humanities) Maria Moritz (Göttingen Centre for Digital Humanities) Sarah Bowen Savant (Aga Khan University, London, UK) Oliver Schmitt (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen) Sree Ganesh Thotempudi (Göttingen Centre for Digital Humanities) Jörg Wettlaufer (Göttingen Centre for Digital Humanities Göttingen Academy of Sciences and Humanities) Ulrike Wuttke (Göttingen Academy of Sciences and Humanities) This event is financially supported by the German Federal Ministry of Education and Research (No. 01UG1509). http://www.gcdh.de/en/events/gottingen-dialog-digital-humanities / Regards, Péter Király -- Péter Király software developer Göttingen Society for Scientific Data Processing - http://gwdg.de eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] OAI Crosswalk XSLT
2014-07-11 16:38 GMT+02:00 Matthew Sherman matt.r.sher...@gmail.com: I have a question for those of you who have worked with OAI-PMH. I am currently editing our DSpace OAI crosswalk to include a few custom metadata fields that exist in our repository for publication information and port them into a more standard format. The problem I am running into is that the select statements they use are not the typical XPath statements I am used to. For example Hi,
element1/element2 - means that element2 is a child of element1
element[@name='type'] - matches an element whose name attribute equals 'type'; @name is a shortcut for the name attribute of the element
xsl:for-each - is a for-each loop. The select part is an XPath expression, and what it matches can be accessed with xsl:value-of select=".".
All in all, the whole loop puts every matched element into the dc:type tag. Regards, Péter -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
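The element[@name='type'] pattern explained above can be tried out quickly outside XSLT: Python's ElementTree supports the same attribute predicate in its limited XPath dialect. A small sketch (the record and its field names are made up to resemble a DSpace-style internal format, not taken from the actual crosswalk):

```python
import xml.etree.ElementTree as ET

# Hypothetical record: each <field> carries a name attribute,
# matching the element[@name='type'] pattern from the crosswalk.
xml = """
<record>
  <field name="type">Article</field>
  <field name="type">Peer reviewed</field>
  <field name="title">An example</field>
</record>
"""

root = ET.fromstring(xml)
# The predicate [@name='type'] selects only the fields whose name
# attribute equals 'type' -- the same idea as in the XSLT for-each.
types = [f.text for f in root.findall("field[@name='type']")]
print(types)  # prints ['Article', 'Peer reviewed']
```

In the XSLT, each of these matched values would be wrapped in its own dc:type element by the xsl:value-of inside the loop.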
Re: [CODE4LIB] Announcement: Two New Vocabularies added to LC's Linked Data Service
Hi Kevin, 2014-06-25 23:00 GMT+02:00 Ford, Kevin k...@loc.gov: The Library of Congress is pleased to make two new vocabularies available as linked data Congratulations, it's very useful! I have a question: do you have a SPARQL endpoint as well? Regards, Péter -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] The lie of the API
implement q values or *s. You have to go to the documentation to figure out what Accept headers it will do string equality tests against. Rob On Fri, Nov 29, 2013 at 6:24 AM, Seth van Hooland svhoo...@ulb.ac.be wrote: Dear all, I guess some of you will be interested in the blogpost of my colleague and co-author Ruben regarding the misunderstandings on the use and abuse of APIs in a digital libraries context, including a description of both good and bad practices from Europeana, DPLA and the Cooper Hewitt museum: http://ruben.verborgh.org/blog/2013/11/29/the-lie-of-the-api/ Kind regards, Seth van Hooland Président du Master en Sciences et Technologies de l'Information et de la Communication (MaSTIC) Université Libre de Bruxelles Av. F.D. Roosevelt, 50 CP 123 | 1050 Bruxelles http://homepages.ulb.ac.be/~svhoolan/ http://twitter.com/#!/sethvanhooland http://mastic.ulb.ac.be 0032 2 650 4765 Office: DC11.102 -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] jobs.code4lib.org and job locations
Hi Ed, thank you for your work, it is a very nice job! I have one comment: some job descriptions are too long to fit on one screen, so I have to scroll the map down to see the top of the description. After I close the window, the map doesn't jump back to the original viewport. There is a JS solution to this issue; I'll send it to you later. Thanks again! Péter 2013/2/24 Ed Summers e...@pobox.com: If you happen to post jobs to code4lib.org you'll notice that you can now add a location for the job. In fact you are required to fill it in when posting. The location input field uses Freebase Suggest just like the employer and tag fields. When you select an employer the location will auto-populate with the employer's headquarters location, but you can change it if the job happens to be somewhere else...which does happen from time to time. I retroactively applied as many locations as I could using the employer. One nice side effect (other than seeing where the job is for in the UI) is having lat/lon geo-coordinates for the job. I haven't built any maps into the UI yet, but I did expose the coordinates in the Atom feed which lets you do this: https://maps.google.com/maps?q=http://jobs.code4lib.org/feed/ The small number of markers is because this is just the first page of the feed, e.g. https://maps.google.com/maps?q=http://jobs.code4lib.org/feed/2/ https://maps.google.com/maps?q=http://jobs.code4lib.org/feed/3/ https://maps.google.com/maps?q=http://jobs.code4lib.org/feed/4/ ... If someone has an interest in playing with LeafletJS or something to get some map views into jobs.code4lib.org proper that might be a fun experiment, if you have any spare time. Many thanks to Ted Lawless for the work to get this going, and also to Mark Matienzo for tirelessly assigning employers to the historic job postings. 
There are still a few kinks to work out (some historic postings that had addresses in non-standard places in the freebase data), but please feel free to file issue tickets on Github [1] if you notice anything odd. //Ed [1] https://github.com/code4lib/shortimer -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] jobs.code4lib.org and job locations
Hi Chris, the only thing I could add to your work is a tip for the display. It works with the Mapstraction library (http://mapstraction.com/) and it is attributed to Adam Duvander, who described it in Map Scripting 101 (No Starch Press, 2010), pp. 109-110. The following snippet saves the position of the map at the time the user opens a marker's info box, and restores this position when the user closes the info box. Sometimes the info box moves the map, and when the user closes it, the original marker is not visible since the viewport was shifted. When you create a marker you might want to do something like this:

var mk = new mxn.Marker(point);
// ...
mk.openInfoBubble.addHandler(myboxopened);
mk.closeInfoBubble.addHandler(myboxclosed);
mapstraction.addMarker(mk);

var mapcenter;

function myboxopened(event_name, event_source, event_args) {
  mapcenter = mapstraction.getCenter();
}

function myboxclosed(event_name, event_source, event_args) {
  mapstraction.setCenter(mapcenter, {pan: true});
}

Mapstraction is an abstraction library that works with different map APIs (Google, Yahoo!, MS etc.), and even if you don't use it, the snippet is easy to translate into the native APIs. Of course this won't work when the Jobs Map is simply embedded in the main Google Maps page (https://maps.google.com/maps?q=http://jobs.code4lib.org/feed/); it only applies if you create a distinct page and use the map's API directly. Regards, Péter 2013/2/24 Chris Fitzpatrick chrisfitz...@gmail.com: hi, has anyone volunteered for the mapping feature? if not, I'd like to take a crack at it as I am wanting to get more practical django experience under my belt. and since this list has gotten me two jobs, I would love to give some payback. just dont want to duplicate any work someone else has started. b, chris. On 24 Feb 2013 20:08, Gary McGath develo...@mcgath.com wrote: It works very nicely with Sage, which is what I use to follow feeds. Thanks! On 2/24/13 1:45 PM, Ed Summers wrote: Hi Gary, Great idea, and it was easy to implement. 
For example you can now get tag related feeds: http://jobs.code4lib.org/feed/tag/digital-preservation/ http://jobs.code4lib.org/feed/tag/python/ http://jobs.code4lib.org/feed/tag/web-archiving/ http://jobs.code4lib.org/feed/tag/fedora-repository-architecture/ etc ... -- Gary McGath, Professional Software Developer http://www.garymcgath.com -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] PHP YAZ
Hi, in the PHP Black Book you can find some usage examples: http://www.amazon.com/PHP-Black-Book-Peter-Moulding/dp/1588800539 It was quite a long time ago that I played with YAZ on Windows, and I don't remember any trouble. Have you run into an installation problem? Péter ps. Warning: the PHP Black Book is outdated in many respects; it discusses PHP 4.x, so be careful if you want to try the code examples. 2013/2/21 Stephen Marks steve.ma...@utoronto.ca: I've done it before, but it's been a while. What problem are you having particularly? s On Feb-20-2013 1:57 PM, Brent Ferguson wrote: Is there anyone that has experience working with PHP and YAZ on a Windows Box... Have a few questions to help clarify what is needed to get up and running... Brent Ferguson, MLS Web Developer / Reference Librarian - Elkhart Public Library http://www.myepl.org/epl -- Stephen Marks Digital Preservation Librarian Scholars Portal Ontario Council of University Libraries step...@scholarsportal.info 416.946.0300 Fearlessness is better than a faint heart for any man who puts his nose out of doors. The length of my life and the day of my death were fated long ago. --Skírnismál -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
[CODE4LIB] eXtensible Catalog Drupal Toolkit 1.3
Dear List, I am happy to announce that, after several months of development, eXtensible Catalog Drupal Toolkit 1.3 has just been released. The eXtensible Catalog Drupal Toolkit is the front end of eXtensible Catalog (XC), built on the Drupal content management system. It contains a set of 25 Drupal modules, a custom theme, an installation profile, and a customized Apache Solr search engine. XC is a discovery interface built on an FRBR- and RDA-like metadata structure. The release has a primary focus on data integrity, namely being able to successfully process record updates on a scheduled basis. This includes new additions, updates and deletions of records. This release includes some Solr integrity fixes submitted by Kyushu University. The installation process for release 1.3 has also been reworked to include an implementation option using Drush that makes the installation substantially easier. If you have Drush, the whole installation takes only 4 steps. We also created a custom Solr package which is pre-configured to the needs of the Drupal Toolkit. You can find the installation instructions and release notes here: http://drupal.org/project/xc_installation. I hope you will find it useful. Now we are working hard on creating the first stable release of the Drupal 7 version. Any comments, suggestions and feedback are more than welcome. You can find the project's issue tracker here: http://extensiblecatalog.lib.rochester.edu:8080/browse/DRUPAL. The eXtensible Catalog project's website is available at http://eXtensibleCatalog.org Best wishes, Péter -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
Re: [CODE4LIB] Libraries Sharing Code: The List Making
Hi Patrick, we store the code repositories of eXtensible Catalog on Google Code and Drupal.org. Is this list only for github projects? Regards, Péter 2013/2/17 Jason Ronallo jrona...@gmail.com: OK, I've added some more links and reorganized things a bit. I added sections for other independent library organizations (like Project Blacklight) as well as a section for individuals. I think the resource could be more useful with some indication of what kind of thing you'll see at the other end of the links, but that might be more maintenance than anyone wants to do. Jason On Sat, Feb 16, 2013 at 5:26 PM, Patrick Berry pbe...@gmail.com wrote: I don't see any reason to not list repos that contain library code. I wasn't really aiming for the Wikipedia style canonical listing, so the more links the better. Pat On Saturday, February 16, 2013, Jason Ronallo wrote: Pat, While my library has an institutional account we currently use for private repos, we have released some code which is maintained under individual accounts. The code in the individual repositories is copyright North Carolina State University, but isn't included under the institutional account. It might be that in the future we release some code through the institutional account, but have not yet. There are good reasons why this might be the case for other institutions as well. For instance an institution could allow code to be released but not want to take on responsibility for maintaining it. While our library is sharing some code through individuals and their accounts, I wonder if listing individual accounts like this is out of scope for the page you've created? Would it be worth it to create a page that lists individual accounts of code4libbers? Are there other ways to find code released by code4lib folks? Jason On Fri, Feb 15, 2013 at 11:29 PM, Patrick Berry pbe...@gmail.comjavascript:; wrote: First, to the organizations doing this, thank you so much for sharing. 
I'm sure I'm not the only person to notice the growth in code sharing, especially through Github. As we're associated with libraries, I thought it might be good to have a list, no matter how incomplete, of libraries sharing code. As you might imagine Google searches for library or libraries tend be full of code libraries instead of Libraries with code. Go figure... http://wiki.code4lib.org/index.php/Libraries_Sharing_Code As with all wiki pages, please do add what isn't there. Unless it's links to cheap prescription pills or something. Don't do that. I will admit that originally this page was titled Libraries with Github Organizations but I quickly realized that the first response would point out the painfully obvious fact that you can share code without Github. Yes, I was aware of that before I started the page but I'll @blame jetlag and CST. Pat (the one from Chico) -- Péter Király software developer Europeana - http://europeana.eu eXtensible Catalog - http://eXtensibleCatalog.org
[CODE4LIB] Europeana API 2.0
Hi All, I am proud to report that the preview launch of the new Europeana API has just happened. The new things in this API:
- The API is based on Europeana's new metadata schema, the Europeana Data Model (http://pro.europeana.eu/web/guest/edm-documentation). The previous version was based on Europeana Semantic Elements, which is an extended Dublin Core schema. The new schema separates different entities, and is closer to semantic web principles.
- It is closer to the Europeana portal. You can access more features (such as breadcrumbs, facets, related items and so on).
- It is based on Solr 4.0 and MongoDB 2.0.
- We have an API console, where you can test the possibilities without signing up for an API key (http://preview.europeana.eu/portal2/api/console.html).
- Two months ago Europeana, with the help of its data providers, changed the rights policies, and now all metadata records are licensed under the CC0 waiver (you can find more details here: http://pro.europeana.eu/web/guest/data-exchange-agreement). You can find the API Terms of Use here: http://preview.europeana.eu/portal2/rights/api-terms-of-use.html.
- The API registration process thus became very simple. Previously the usage of the API was limited to a number of domains, so every API application had to be considered individually. Now it is fully automatic. (http://preview.europeana.eu/portal2/api/registration.html)
- The usage limit is 10,000 API calls per day. If you would like to extend this limit, please contact us.
We also have a mailing list for discussing API-related things: https://groups.google.com/forum/?pli=1#!forum/europeanaapi, and an initial version of the documentation: http://preview.europeana.eu/portal2/api-documentation.html. We plan to launch it as a formal API in five to six weeks; during that time we will launch more features and improve current ones. I hope you will find the new Europeana API useful. Your feedback is more than welcome! 
Regards, Péter -- Péter Király Portal Backend Developer Europeana.eu
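A quick sketch of what calling a key-based search API like this typically looks like from client code. Note that the endpoint path and the parameter names ("wskey", "query", "rows") are illustrative assumptions, not taken from the announcement above; consult the API documentation linked there for the real ones:

```python
from urllib.parse import urlencode

def build_search_url(base, api_key, query, rows=10):
    """Build a search request URL for a hypothetical key-based JSON API.
    No network I/O here; a client would fetch the returned URL."""
    params = urlencode({"wskey": api_key, "query": query, "rows": rows})
    return f"{base}/api/v2/search.json?{params}"
```

Because registration is automatic and the only quota is the daily call limit, a client like this needs nothing beyond its key.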
Re: [CODE4LIB] generating and parsing NCIP with PHP
Hi Emily, part of the XC project is an NCIP client application written in PHP as two Drupal modules. It is partly written as classes. It doesn't implement all of the NCIP verbs, but it can be a good starting point. The current stable version supports NCIP v1.0, and we are now working on NCIP 2 (we are in the testing phase). You can find the source at http://drupal.org/project/xc. I don't want to go into technical details here, but if you would like more guidance and tips, I would be happy to help you. Best! Péter 2012/1/2 Emily Lynema emily_lyn...@ncsu.edu Hi folks, We are working with Lehigh University on building out a more full-fledged SirsiDynix Symphony adapter to work with the XC NCIP toolkit. We will hopefully be building our new Patron Account interface on top of the eXtensible Catalog NCIP toolkit. Obviously, to build our new interface on top of the NCIP toolkit, we need to generate NCIP XML requests and parse NCIP XML responses. These things are a bit gnarly to work with, and I'm not sure that PHP is exactly known for excellence in working with XML. Has anyone ever dabbled in this area before? Created an awesome PHP library we could just pick up and use? Have any particular pointers? We have the Zend Framework at our disposal in terms of PHP frameworks, and will likely be using that for this project. I don't know in particular if it has good XML parsing tools (my staff probably would), but even if it does, we still have to sort through the NCIP verbosity. Just thought I'd check. -emily -- Péter Király eXtensible Catalog http://eXtensibleCatalog.org http://drupal.org/project/xc
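For a feel of the request/response round trip being discussed: a minimal sketch (in Python, not PHP, and not using the XC toolkit) that builds and parses a bare-bones LookupUser message. The element names (LookupUser, UserId, UserIdentifierValue) follow NCIP 2.0, but this is an illustration rather than a complete, schema-valid message; verify the namespace and required elements against your toolkit:

```python
import xml.etree.ElementTree as ET

# NCIP 2.0 namespace; double-check against the schema your server expects.
NS = "http://www.niso.org/2008/ncip"

def build_lookup_user(user_id):
    """Build a minimal, illustrative LookupUser request."""
    ET.register_namespace("", NS)  # serialize with a default namespace
    msg = ET.Element(f"{{{NS}}}NCIPMessage")
    lookup = ET.SubElement(msg, f"{{{NS}}}LookupUser")
    uid = ET.SubElement(lookup, f"{{{NS}}}UserId")
    ET.SubElement(uid, f"{{{NS}}}UserIdentifierValue").text = user_id
    return ET.tostring(msg, encoding="unicode")

def parse_user_id(message):
    """Pull the user identifier back out of an NCIP message."""
    root = ET.fromstring(message)
    el = root.find(f".//{{{NS}}}UserIdentifierValue")
    return el.text if el is not None else None
```

The same build/parse split is what any PHP implementation (DOMDocument, SimpleXML, or Zend's XML tools) ends up doing; the verbosity lives in the element names, not in the parsing logic.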
Re: [CODE4LIB] FW: Drupal developer position, UNC Chapel Hill
I thought we were over this joke, but it seems we will play it again and again. Péter
Re: [CODE4LIB] open bibliographic principles
Thank you! That's an important document, and I translated it into Hungarian. I will publish it soon. Péter -- Péter Király eXtensible Catalog http://eXtensibleCatalog.org http://drupal.org/project/xc 2011/9/9 Thomas Krichel kric...@openlib.org: On behalf of the Open Bibilographic Working Group of the Open Knowledge Foundation, I would like to bring your attention to the Principles on Open Bibliographic data at http://openbiblio.net/principles/ The group continues to offer the opportunity, for both individuals and groups, to sign up to the principles. Cheers, Thomas Krichel http://openlib.org/home/krichel http://authorprofile.org/pkr1 skype: thomaskrichel
Re: [CODE4LIB] Streaming
+1 I have sent the news to Hungarian librarians; I hope they will follow the live or archived streams. király péter eXtensible Catalog 2011/2/9 William Denton w...@pobox.com: On 8 February 2011, Jason Griffey wrote: I'd like to ditto what Roy said below. I know how hard this is to do at all, and to do it well is the sign of experience and talent. +1 I'm watching the archive now and the video is wonderful. Thank you! Bill -- William Denton, Toronto : miskatonic.org www.frbr.org openfrbr.org
Re: [CODE4LIB] PHP MVC frameworks
Hi David, I have tried several frameworks in the past (and even wrote a home-grown one, as almost every newcomer does...). The best I can suggest is the Zend Framework, but it depends on your needs. If you want, you can use Drupal as a framework as well, because it provides the controller (hooks, APIs), model (database API), and view (themes/templates) layers. Király Péter http://eXtensibleCatalog.org 2010/11/15 David Kane dk...@wit.ie: Hi, I am interested to hear if anyone is using PHP MVC frameworks to help with their code. From what I have learned, they seem to be a very good idea indeed. However, there are so many of them (http://www.phpframeworks.com/) Also, pkp.SFU.ca uses their own one in their PKP (Public Knowledge Project) software. Who is using them and what for? David. -- David Kane, MLIS. Systems Librarian Waterford Institute of Technology Ireland http://library.wit.ie/ T: ++353.51302838 M: ++353.876693212