Jurisdiction comparison database

The interns and a Google Policy Fellow at CC HQ in San Francisco have finished their summer programs today. A recording of their informal final presentations is available on Ustream. The first 10 minutes are by Greg Leones from Australia, who demos the new online Jurisdiction Database (dubbed the “Miracle Database”), an excellent tool for analysing different jurisdictions, including English re-translations. Luxcommons is quite proud to find that the Luxembourg licences are, so far, the only French-language versions included. Have a look at the query interface. Thanks for the effort, Greg!

The two other presentations are from Alea Garbagnati, a legal intern from UC Hastings, talking about her work on restructuring and redrafting the CC FAQs, and Tal Niv, the Google Policy Fellow (and PhD student at UC Berkeley), detailing her work on the CC Contribution Project.

More videos on the Creative Commons Ustream channel.

Libraries make available 5.4 million bibliographic records under CC Zero

The Cologne University Library, the Cologne University of Applied Sciences Library, the Cologne Public Library, the Academy of Media Arts Cologne Library and the Library Centre of Rhineland-Palatinate have announced that they will publish their catalogue data in cooperation with the North Rhine-Westphalian Library Service Centre (HBZ).

All data is published under Creative Commons CC0 (CC Zero). The data is thereby in the public domain: it belongs to everyone and may be used for any purpose without restrictions. To the extent possible under law, the person who associated CC0 with this work has waived all copyright and related or neighboring rights to this work.
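As an illustration only (the record fields and the helper below are hypothetical, not taken from the HBZ release), a library could embed the CC0 waiver as machine-readable rights metadata in a Dublin Core record:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
CC0 = "https://creativecommons.org/publicdomain/zero/1.0/"

def make_record(title, creator):
    """Build a minimal Dublin Core record whose dc:rights points at the CC0 deed."""
    ET.register_namespace("dc", DC)
    record = ET.Element("record")
    for tag, text in (("title", title), ("creator", creator), ("rights", CC0)):
        ET.SubElement(record, "{%s}%s" % (DC, tag)).text = text
    return ET.tostring(record, encoding="unicode")

record_xml = make_record("Beispiel: Kölner Stadtgeschichte", "Musterfrau, Erika")
```

A harvester encountering such a record can then check `dc:rights` mechanically before reusing the data.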

Links:

HBZ press release (English)

Heise Online report (German) (Google translation to English)

“les petites cases” on linked data

I’m taking the liberty of reproducing an entire post from the blog “les petites cases”, which comes highly recommended!

Data.gov.uk

I share the view of Zach Beauvais, platform evangelist at Talis, a company that needs no introduction since Nicolas proclaimed them “Kings of the Semantic Web”. The big news of the week is certainly not the iPad, but the public beta launch of the site “data.gov.uk”. Developed under the leadership of Sir Tim Berners-Lee and Nigel Shadbolt, this site gives access in raw form to the data/statistics/information of the British public sector. The influence of the two partners is felt right from the home page: a huge logo representing a triple, a SPARQL entry in the main menu, and a “What is the Semantic Web” section with the Semantic Web logo. The tone is set and, unlike the American data.gov, the British chose Semantic Web technologies from the outset. The site itself was built with Drupal 6 with RDFa inside; most of the data was converted to RDF thanks, among others, to the good offices of Jeni Tennison, high priestess of XML and XSLT converted to Linked Data. The data is indexed in an RDF database provided and hosted by Talis and, of course, queryable in SPARQL via a SPARQL endpoint. The site offers a list of applications that exploit the data, and mashups between data.gov and data.gov.uk were quick to appear, such as the one put together by Alvaro Graves. And if, like Stefano Mazzochi, you wonder what RDF brings to the publication of statistical data, I recommend reading this post by Jeni Tennison, which takes stock of the “data.gov.uk” experience, insisting in particular on the value of Semantic Web technologies.
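For readers who want to try such an endpoint themselves: a SPARQL query is typically sent as the `query` parameter of an HTTP GET request. A minimal sketch (the endpoint URL below is an assumption for illustration; check data.gov.uk for the live address):

```python
from urllib.parse import urlencode

# Assumed endpoint URL, for illustration only.
ENDPOINT = "http://services.data.gov.uk/sparql"

# A simple SPARQL SELECT: list ten distinct RDF classes used in the store.
QUERY = "SELECT DISTINCT ?type WHERE { ?s a ?type } LIMIT 10"

def sparql_get_url(endpoint, query):
    """Encode a SPARQL query as the standard 'query' GET parameter."""
    return endpoint + "?" + urlencode({"query": query})

url = sparql_get_url(ENDPOINT, QUERY)
# The resulting URL can then be fetched with urllib.request.urlopen(url).
```

Most endpoints also accept an `Accept` header (or an output parameter) to choose between XML and JSON result formats.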

Linked Data Triplification Challenge 2010

The “Linked Data Triplification Challenge 2010” has been launched. Organised every year by the AKSW research group in Leipzig, the team behind Dbpedia and triplify, this competition rewards initiatives that expose new data as Linked Data, ease the conversion of data to RDF, or exploit Linked Data. The challenge has already sparked and/or rewarded projects such as Linked Movie Database, Dbtune.org and Linked Open Drug Data.

Smob V2

Alexandre Passant released a new version of Smob last week, a semantic microblogging tool. This new version, which I had the chance to beta-test, is a complete overhaul of the previous one: no more distinction between Smob server and Smob client, but a notion of hub instead; the input interface is friendlier and offers a fairly simple way to tag your tweets intelligently (meaning: linked to Linked Data URIs 😉 ); RDFa and SPARQL at every level. In short, it’s very nice, as interesting as ever, and I will hasten to use it just as I did the previous version, all the more so since it will be maintained and we can hope to see new versions released more regularly.

Uberblic.org

Georgi Kobilarov, one of the creators of Dbpedia, has just launched Uberblic.org, a “linked data integration service providing a single point of access to the Web of data”. The platform offers two advantages: real-time updates of data modified on the source site, and the ability to modify the mappings and the structure of the ontologies used (even if the latter is, for now, only accessible to invited users). Rather than a long speech, I invite you to watch this video, which shows all the details. On the other hand, the identifier policy leaves me pondering, as I have the impression that the service assigns new URIs to resources that already have them. We shall see in practice, but the platform seems promising to me and reminds me a little of what the next version of Twine will be.

On the standardisation front

On the W3C side, there is also a flurry of activity around Semantic Web technologies. The SPARQL working group continues its forced march towards SPARQL 1.1 and has published updates to six recommendations, each as interesting as the next (I recommend in particular SPARQL Update and the SPARQL Uniform HTTP Protocol for Managing RDF Graphs), and offers a new one to read: SPARQL 1.1 property paths, a syntax for navigating the graph.
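To give a flavour of what property paths add: a path such as `foaf:knows+` matches chains of one or more `knows` edges, something plain SPARQL 1.0 patterns cannot express for arbitrary lengths. A toy Python sketch of that one-or-more semantics over an in-memory set of triples (the sample data is made up):

```python
def path_plus(triples, start, pred):
    """All nodes reachable from `start` via one or more `pred` edges,
    i.e. the semantics of the SPARQL 1.1 property path `pred+`."""
    reached, frontier = set(), {start}
    while frontier:
        # Follow one more `pred` hop from every node on the frontier.
        step = {o for s, p, o in triples if p == pred and s in frontier}
        frontier = step - reached
        reached |= frontier
    return reached

triples = {("alice", "knows", "bob"),
           ("bob", "knows", "carol"),
           ("carol", "likes", "dave")}
friends = path_plus(triples, "alice", "knows")  # {"bob", "carol"}
```

A real SPARQL engine does this traversal inside the store, so the query stays a one-liner: `SELECT ?x WHERE { :alice foaf:knows+ ?x }`.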

But the big announcement concerns RDF and a workshop on its future to be held in June. The announcement was preceded in November by a thread on the W3C Semantic Web mailing list assembling a wishlist for RDF 2, and has been followed by many reactions, starting with two threads on the same list: Requirements for a possible “RDF 2.0” and RDF syntaxes 2.0. On the same subject, I also recommend reading this post by Dave Beckett, developer of redland, inventor of turtle and a great advocate of Semantic Web technologies.

Report on the 7th Communia workshop in Luxembourg

Jonathon writes:

We recently attended a workshop in Luxembourg as part of Communia, the EU policy network on the digital public domain. There was a focus on bringing together themes from previous events to make a series of policy recommendations to the European Commission (watch this space!).

Below are a few notes highlighting some of the talks and discussions that we thought might be of particular interest to readers here:

Read on:

http://blog.okfn.org/2010/02/03/7th-communia-workshop-luxembourg/

Extensive report (in French)

http://www.europaforum.public.lu/fr/actualites/2010/02/communia/index.html

Communia workshop in Luxembourg – 1&2 February

Dear all,

The seventh Communia workshop in Luxembourg City is approaching fast!

It’s the policy recommendation workshop and thus crucial for the goals of Communia. The working groups meet on 31st January to finalise their recommendations. The first workshop day is a general overview of different policy fields relating to the public domain; the second day is devoted to alternative compensation systems and a policy recommendation wrap-up session.

Sunday 31st January: Communia Working Group meetings (full day)

Monday 1st February: Communia Policy Workshop (full day)

Tuesday 2nd February: Communia Policy Workshop (ends at 16h)

The workshop page is at http://www.communia-project.eu/ws07

There you will also find the programme, links to the registration page, hotel and travel information and a Google map with additional info.

Please register and book your hotel (using the provided form) as soon as possible!

Very much looking forward to welcoming you in Luxembourg,

On behalf of the organisers:

EEAR, Germany, NEXA, Italy and Luxcommons, Luxembourg with the support of CRID, Belgium and CERSA, France

Patrick Peiffer,

Luxcommons asbl

CC and the Google Book Settlement

by Mike Linksvayer, November 16th, 2009

Originally posted here: http://creativecommons.org/weblog/entry/19210

This is probably the copyright story of the year: it’s complex, contentious, and involves big players and big subjects (the future of books, perhaps good and evil), resulting in a vast amount of advocacy, punditry and academic analysis.

It’s also a difficult item for Creative Commons to comment on. Both “sides” are clearly mostly correct. Wide access to digital copies of most books ever published would be a tremendous benefit to society — it’s practically an imperative that will happen in some fashion. It’s also the case that any particular arrangement to achieve such access should be judged in terms of how it serves the public interest, which includes consumer privacy, open competition, and indeed, access to books, among many other things. Furthermore, Creative Commons considers both Google and many of the parties submitting objections to the settlement (the Electronic Frontier Foundation is an obvious example) great friends and supporters of the commons.

We hope that a socially beneficial conclusion is reached. However, it’s important to remember why getting there is so contentious. Copyright has not kept up with the digital age — to the contrary, it has fought a rearguard action against the digital age, resulting in zero growth in the public domain, a vast number of inaccessible and often decaying orphan works, and a diminution of fair use. If any or all of these were addressed, Google and any other party would have much greater freedom to scan and make books available to the public — providing access to digital books would be subject to open competition, not arrived at via a complex and contentious settlement with lots of side effects.

Creative Commons was designed to not play the high cost, risk, and stakes game of litigation and lobbying to fix a broken copyright system. Instead, following the example of the free software movement, we offer a voluntary opt-in to a more reasonable copyright that works in the digital age. There are a huge number of examples that this works — voluntary, legal, scalable sharing powers communities as diverse as music remix, scientific publishing, open educational resources, and of course Wikipedia.

It’s also heartening to see that voluntary sharing can be a useful component of even contentious settlements and to see recognition of Creative Commons as the standard for sharing. We see this in Google’s proposed amended settlement, filed last Friday. The amended version (PDF) includes the following:

Alternative License Terms. In lieu of the basic features of Consumer Purchase set forth in Section 4.2(a) (Basic Features of Consumer Purchase), a Rightsholder may direct the Registry to make its Books available at no charge pursuant to one of several standard licenses or similar contractual permissions for use authorized by the Registry under which owners of works make their works available (e.g., Creative Commons Licenses), in which case such Books may be made available without the restrictions of such Section.

This has not been the first mention of Creative Commons licenses in the context of the Google Book Settlement. The settlement FAQ has long included an answer indicating a Creative Commons option would be available. Creative Commons has also been mentioned (and in a positive light) by settlement critics, for example in Pamela Samuelson’s paper on the settlement and in the Free Software Foundation’s provocative objection centering on the tension between the intentions of public copyright licensors and the potential for settlements to result in less freedom than the licensor intended.

Independent of the settlement, we happily noted a few months ago that Google had added Creative Commons licensing options to its Google Book Search partner program. This, like any voluntary sharing, or mechanism to facilitate such, is a positive development.

However you feel about the settlement, you can make a non-contentious contribution to a better future by using works in the commons and adding your own, preventing future gridlock. You can also make a financial contribution to the Creative Commons annual campaign to support the work we do to build infrastructure for sharing.

If you want to follow the Google Book Settlement play-by-play, New York Law School’s James Grimmelmann has the go-to blog. We’re proud to note that James was a Creative Commons legal intern in 2004, but can’t take any credit for his current productivity!

ja amen end mend do pro

Luxembourg-based music service Jamendo hit the German news site heise.de with their Jamendo pro service.


Jamendo pro allows you to buy a certificate as proof that you only play their music, openly licensed and with rights cleared. This saves filling out the many, many forms that the German music collecting society GEMA otherwise requires as proof (yes, you’d have to notify them for the complete Jamendo repertoire of 200,000 songs). GEMA and paperwork, again.

Jamendo pro is a perfect turn-key solution for public music licensing in places like doctors’ waiting rooms, shops, etc. Yet more proof of the IP innovation potential of open content licences like Art Libre and Creative Commons.

Of course, Jamendo pro still has limits for private listening, one-person companies or people listening to music while at work. If you’re interested in the big picture and the problems of moving beyond the incumbents’ paradigm of “mechanical reproduction” towards an internet “flat-rate” solution, see this French study from 2005 (English and French PDFs) and this brand-new German study (English PDF).

Note: This brilliant comment from RAIDer found that the name Jamendo “contains many important German and English words, such as Ja, Amen, end, mend, do” (“(…) enthält viele wichtige deutsche und englische Worte wie Ja, Amen, end, mend, do.”), hence the remix with the pink Creative Commons Luxembourg sticker above.

Sound copyright?

YouTube: “How copyright extension in sound recordings actually works”


An informative cartoon by the Open Rights Group on the music industry’s efforts to mislead politicians through massive lobbying while ignoring ALL evidence (incidentally presented at the European Parliament on 27th January). The proposed term extension is paid for by European consumers and, tragically, brings virtually no benefit to performers or cultural production.

You may sign a petition and find out more at www.soundcopyright.eu.