Tag Archives: OIN

OpenForum Academy round table on patent non-aggression pacts

Another day, another trip to Brussels. Today the OpenForum Academy, the OpenForum Europe think tank “with a broad aim to examine the paradigm shift towards openness in computing”, is hosting a round table discussion on “Patent non-aggression pacts: a way forward for technological innovation?” Alongside Clara Neppel, an examiner in the field of computer-implemented inventions at the European Patent Office, and Carlo Piana of the FSFE, I will be speaking about Open Invention Network and its role as a non-aggression network with a strictly defined field of use, namely Open Source software. The results of the round table will be compiled into a whitepaper and published later by OpenForum Academy.

Defensive Publications: Shedding Light on Innovation


The patent system is broken. The point of patents is to encourage innovation and inventiveness. Instead of promoting innovation, patent offices have awarded overly broad, vague, and unoriginal patents that draw unclear lines, allowing bad actors to profit and to threaten costly lawsuits. Patent examiners have a strong sense of the technology that is patented, but they lack an understanding of what has been and is currently being developed in the open source world. As shocking as it may seem, the result is that examiners form an inaccurate sense of what is innovative. As the final arbiters of a very significant monopoly grant, they are often grossly uninformed about what lies beyond their narrowly scoped search. This is not wholly their fault, as they have limited resources and time. It is, however, a strong indication of a faulty system, entrenched as it is in the archaic methods under which patent offices have long operated.

We have faced, and continue to cope with, the effects of bad patents on multiple fronts. The most widely known play out on the large stage, where huge companies battle in the courts, resulting in large monetary settlements or high-stakes jury verdicts (e.g. Apple v. Samsung). This leads to higher costs for consumers and users, uncertainty among innovators about what is patented, a veritable arms race to secure patents to corner the market, and limited competition. These ‘wars’ cost companies money, and that cost trickles down to the consumer.

On another stage, we also see the threat of trolls increase exponentially as more patents are acquired and used against small companies, nonprofits, and independent developers. The fear of costly litigation forces licensing agreements, stifling innovation by suffocating independent inventors. On all fronts, more money is being spent on co-existing with undeserving patents than on developing new ideas. We are losing out on breakthroughs and advances in technology because of the environment of fear and uncertainty that has been created.

The answer has to be more than abolishing the patent system, because, from a pragmatic point of view, that will not happen. It does us little service to ignore patents and abandon the system. Rather, we need to address and combat the threats to innovation so that we can begin to bring an end to the age of fear and litigation. We can continue to deal with patents as they are issued, identify those that abuse the system, and then spend the money and time to work to invalidate them. Taking this one step further, we can also proactively prevent these obstacles to innovation from ever existing, by directly communicating to the examiner what is being and has been developed. Defensive publications are the tool for doing exactly that.

A defensive publication essentially describes what is known or currently being developed. Those who develop software regularly create such documents in the form of blog posts, community updates or release notes. However, the constraints examiners work under prevent these sources from being found. The missing step is to formalise this material and to ensure that the patent examiner has access to an open database of these documents.

With an increasing number of low-quality patents being issued worldwide, and a lack of clear boundaries, patent examiners are losing their sense of what is indeed inventive. At the moment, only those who patent have a voice. Every free software release, solved issue or innovative development process can be turned into a defensive publication. References to current or older releases can also be used to illustrate how the community of developers resolves obstacles. By writing these disclosures, the free software community can be proactive: every patent application rejected on the basis of such prior art is a potential lawsuit avoided.

Through the Linux Defenders program, Open Invention Network works with open source developer communities to create defensive publications. We will be working closely with Linux kernel and Qt developers, because we think that these represent major driving forces of innovation in the Open Source spectrum. Important innovations in Linux and Qt should be documented in defensive publications following the releases of the software. We invite interested individuals and companies to contribute to this, and will support the authors in getting their publications out. If you are interested in writing Qt-related defensive publications, or able to help identify topics that should be documented, consider joining the mailing list to follow the discussion: http://lists.qt-project.org/mailman/listinfo/defpubs

[Image by Nick Kocharhook, thanks: http://www.flickr.com/photos/k9/35010906/]

FLOSS in the Cloud: EOLE, Brussels, Dec 6

Happy Saint Nicholas day, everybody! What better purpose could the day serve than to travel through a storm to Brussels and attend the 2013 incarnation of EOLE, the “European Open Source & Free Software Law Event”. Philippe Laurent opened the conference with the still blurry question of what cloud is, quoting the FSF: “[cloud] … is a marketing buzzword with no clear meaning…” that is best avoided. The whole world did not listen and now uses the term widely. This post reflects both what was discussed and what I learned from the event.

It seems that while the cloud is still opaque, a common understanding is emerging of what cloud computing means. It represents a convergence of all the individual bits of running a service – software, platform, infrastructure, storage, hosting, billing, scaling and more – into a single, standardised, comparable offer. Essentially, it is the message to the engineers that nobody cares about the details, the individual twiddly bits, and that clients want one unified package for hosting something that is actually used by a user. Economically, it is another critical step towards massive standardisation of IT operations, making procurement easier because all relevant bits are integrated, and improving competition by making the offers of various providers comparable. We should expect average service prices per user to fall dramatically, and the fixed-cost overhead of companies that formerly self-hosted to go down as well. In a couple of years, owning your own metal might sound like getting milk delivered to your door in cans.

It helped that Christian Verstraete from HP opened with a detailed overview of OpenStack. It showed the audience that there is a strong convergence of the market towards one free software solution, with backing from 95% of the relevant industry players. A standard test, similar to the JavaScript Acid test, can be expected to emerge for compatibility between the offerings of different cloud providers. With that, migrating from one provider to another should pose no technical issues, only contractual ones. Drawing on the ForgeRock experience, Lasse Andresen underlined that by stressing that solutions have to be completely free software, not open-core, and that a well-adopted Open Source solution cannot easily be killed. Here the freedoms provided by the licenses do prove useful: companies may fail, but the technology remains.

So far, that was all good, but not very law-related. Things became interesting from a legal point of view when Patrice-Emmanuel Schmitz opened the panel, with his background as one of the authors of the European Union Public Licence. He summarised the issues with current licenses and the debate about what distribution or conveying of software means for web services, and it seems that this is still mostly murky. The concentration of services into cloud offerings has led to the rise of new licenses (a trend nobody was hoping for, considering the mess of tons of mostly identical not-invented-here licenses that were in use a couple of years back).

The underlying problem, though, is fundamental: Open Source licensing is based on copyright, which governs reproduction, distribution, adaptation and performance of a copyrighted creation. None of these happens under the auspices of the user of a site, and therefore there is no copyright relationship regarding the software between the site provider and the consumer. There is a remainder of code that is distributed to the user, such as JavaScript libraries. It is hard to construe a derivative-work relationship between that code and the rest of the application that runs server-side, especially because these JavaScript libraries are often treated more like data than code and are not even linked server-side at all. The setup is closer to an interpreter running client-side than to a part of the program. If the web application is not a derivative work of the distributed libraries, the chain is broken, and a provider can claim to comply with Open Source licenses while not offering the source code for their modifications of the server application. The Affero GPL solves this problem partially by requiring the provider to offer the source code to the user when the software is run on a server. This again ties the licensing to an element of the copyright bundle, performance. But it leaves a bad aftertaste, because now there is a problem of proof: the user usually does not know what software was involved in rendering a response. Also, not all server software is licensed under the AGPL or similar licenses.

Contributing to Open Source is not something people do just because the license says so, but because they are somehow driven to collaborate. Web applications can still benefit from the Open Source way. What is different is that for libraries and applications (the cases the licenses were modelled on), users and developers are effectively treated the same, and the distinction exists only in what they do. For web applications, users do not necessarily acquire the right to use, study, modify and improve the source code, even if the developers published their product under a copyleft license. Yet that norm is what made it fun and enjoyable to contribute to Open Source projects. New norms and governance setups should be designed to maintain that situation and thus keep the motivation of contributors (individuals as well as institutions) intact. Compliance should be the norm by now, and I hope that the distrust that sometimes underlies the relationship (“Are they really showing all the software that is running?”) will be a thing of the past.

Many thanks to the organisers!



Qt Project and Defensive Publications

Open Source communities are amazingly innovative. Linux Defenders encourages them to document their ideas in the form of defensive publications, so that this body of knowledge becomes relevant prior art for later patent applications and patent invalidations. The Qt community is especially relevant for defensive publications for two reasons: it is highly innovative, and Qt’s functionality covers pretty much all topics that are relevant in software engineering today. At the Qt Contributor Summit currently under way in Bilbao, Spain, Armijn Hemel and I started a process to make defensive publications a routine part of the Qt release process.

[Image: Akademy 2013 and Qt Contributor Summit poster]


Defensive Publications at Embedded World 2013

Embedded World started today in Nürnberg, Germany. I am here with Open Invention Network to spread the idea of defensive publications and of OIN’s non-aggression community of companies in the Open Source sphere. Highly innovative companies are presenting here, and many of them face the same dilemma: if innovators decide not to patent their inventions, they run the risk that another party applies for a patent on the same invention later. The decision not to patent could be made for ethical reasons, because they understand that software patents are harmful, or for business or many other reasons. The problem stays the same: there is a threat that patents are awarded even though similar solutions already existed.

There are many software patents out there that experts consider obvious, not inventive, or trivial. All three are grounds on which the patent should have been rejected. Especially with complex but non-inventive applications, the patent examiner did not discover the relevant prior art when scrutinising them. The state of the art was not documented and accessible in a way that supported a good decision. Defensive publications are one answer to this problem. They offer a cheap and fast way to document inventions. Defensive publications are also available for areas that legally should not be patentable, like software as such.

Through linuxdefenders.org, companies and research institutions, but also individual developers, can submit defensive publications relevant to Linux and Open Source in general. Linux Defenders is backed by Open Invention Network in its mission to prevent bad software patents. We want to support developers and inventors in documenting their ideas as explicit prior art. Our goal is to ensure freedom to operate for the innovators in Open Source. If you are interested, worried about your invention, or have any questions, you can find Armijn Hemel and me in hall 5, booth 341. Or ping me on Twitter @mirkoboehm.

Mirko Boehm