
KDE Akademy 2014 – Welcome, new KDE board!

Akademy 2014 is still in full swing in Brno in the Czech Republic, with the traditional hack week that started on Monday. With about 200 participants, it was well attended and well organised. This year's conference will very likely mark a milestone of change for KDE: a new board was elected, and a strategy discussion was started that will shape the direction and development of the KDE community for a long time to come. When I travelled home from Akademy 2014 on the train from Brno to Berlin, I personally felt a sense of satisfaction, because the community has managed to steer clear of the dangers of bikeshedding about the board succession, and is accepting the change imposed by a shifting environment as a positive force.

Akademy 2014


The KDE Randa 2014 meeting, in easy-digestible video format!

In case you were wondering what was going on in Randa, here are some first-hand impressions. The video was produced by Françoise Wybrecht (alias Morgane Marquis) and Lucie Robin, and the people in it are the actual participants of the event. It was also created with Kdenlive, one of the awesome Free Software tools a team has been working on at the Randa meeting itself. The video introduces the faces and personalities of the contributors and their different backgrounds and origins. Many thanks to our brand new ad-hoc media team for producing this video!

(In case the embedded video does not show up, see here: https://www.youtube.com/watch?v=yua6M9jqoEk)

KDE Frameworks Book Sprint at the Randa Meeting 2014

A couple of weeks before the KDE Randa Meeting of 2014, the meeting's organiser Mario Fux suggested holding a book sprint to help developers adopt the newly released KDE Frameworks 5. In the Open Source spirit, the idea was not to start from scratch, but rather to collect the various bits and pieces of documentation that exist in different places into one location that is more approachable for developers than before. Valorie Zimmermann stepped up to organise it (many thanks!), and a number of people volunteered to take part. Over the course of a week, the project changed its orientation, struggled, and finally found and defined its place as a part of KDE's documentation efforts. And it produced an initial version of the book, which is currently circulating on people's ebook readers around here.

KDE Frameworks 5 Tech Preview released, with updated ThreadWeaver

Today, the KDE Community released a tech preview of the upcoming KDE Frameworks 5, the new, modularised incarnation of what was previously distributed simply as the KDE libraries. The new frameworks are drop-in extensions to Qt applications, with minimal and well-documented dependencies for easier deployment. The tech preview contains two frameworks that are marked as mature, namely KArchive and ThreadWeaver. The updated ThreadWeaver was my major piece of library coding work in 2013, and was finished just in time for the release. Even though it is a tech preview, it is stable, and no major changes (or even significant minor ones) to the current API are expected before the final release. Programmers are already encouraged to use it and to provide feedback and bug reports.

ThreadWeaver is a concurrent execution scheduler written in C++. Available for all target platforms of the Qt framework, including desktop, mobile and embedded environments, ThreadWeaver delivers concurrent execution of tasks, load balancing with regard to user-defined criteria, multiple independent queues, processing-graph modelling, aggregate jobs and other comprehensive features. Like all other KDE frameworks, ThreadWeaver is Free Software. Its only dependency is Qt, which makes it a tier 1 framework in KDE's lingo.
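To give a sense of how little ceremony is involved, here is a minimal, hello-world-style sketch of queueing work on the global queue (the build setup and linking against the framework are omitted):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/ThreadWeaver>

using namespace ThreadWeaver;

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Hand a lambda to the global queue; it is picked up and executed
    // by one of the pooled worker threads.
    stream() << make_job([]() {
        qDebug() << "Hello from a ThreadWeaver worker thread!";
    });

    // Wait until all queued jobs have been processed.
    Queue::instance()->finish();
    return 0;
}
```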

A number of the new features of ThreadWeaver were announced at Akademy 2013:

- Jobs, the unit of concurrent execution in ThreadWeaver, are now managed by the queue using shared pointers, so auto-delete behaviour is implicit and controlled by the user.
- Helper templates are available to queue stack or member variables, so allocation of jobs can be static or dynamic.
- Functors or lambda functions can be used to construct jobs.
- Job aggregates like collections and sequences now execute their own run() method before queueing their elements, so that aggregates can generate their own elements.
- Success and queueing state of jobs are now integrated into a single status. Jobs can signal the result of execution by setting a status, but also by using exceptions, simplifying error reporting in more complex job classes.
- Jobs can be decorated, and no longer inherit QObject by default. Decorators can be used to add signals, change priorities or modify just about any behaviour of jobs independently of the actual job class used.
- The construction of the global queue can now be customised using a queue factory.
- The QueueStream API greatly simplifies queueing jobs with a familiar iostream-like C++ syntax.

A sketch of what a job class looks like under the new API is shown below.
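This is a rough sketch rather than a complete example; the job class and the work it does are made up for illustration, and error handling is reduced to setting the job status:

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QImage>
#include <QSharedPointer>
#include <ThreadWeaver/Job>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/ThreadWeaver>

// Hypothetical job that loads and scales an image in a worker thread.
// Note: no QObject and no Q_OBJECT macro; jobs are plain classes now.
class ScaleImageJob : public ThreadWeaver::Job
{
public:
    explicit ScaleImageJob(const QString &path) : m_path(path) {}
    QImage result() const { return m_result; }

protected:
    // run() receives a shared pointer to the job itself and the thread
    // executing it.
    void run(ThreadWeaver::JobPointer, ThreadWeaver::Thread *) override
    {
        const QImage image(m_path);
        if (image.isNull()) {
            setStatus(Status_Failed); // signal failure via the job status
            return;
        }
        m_result = image.scaled(128, 128, Qt::KeepAspectRatio);
    }

private:
    QString m_path;
    QImage m_result;
};

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // The queue manages the job through a shared pointer.
    auto job = QSharedPointer<ScaleImageJob>::create(QStringLiteral("photo.jpg"));
    ThreadWeaver::stream() << job;
    ThreadWeaver::Queue::instance()->finish();

    qDebug() << "job succeeded:" << job->success();
    return 0;
}
```

If signals are needed, for example to notify the user interface when the job is done, a decorator can add them without the job class itself having to inherit QObject.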

ThreadWeaver follows the Unix idiom of doing one thing, and doing it right. Similar to how small Unix programs can be combined to create a practically infinite space of computing solutions, ThreadWeaver offers itself to programmers as an add-on module with minimal dependencies. Including it extends an application with concurrent scheduling capability. But the same Unix idiom also applies in a second sense: within ThreadWeaver, a few basic concepts – jobs and their aggregates, queues and policies – are implemented that again provide simple building blocks which can be combined creatively, offering a vast space of potential solutions within the scope of the application.
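As a small illustration of how these building blocks combine (again a sketch with made-up job contents), a Sequence is itself a job whose elements execute strictly in order, while unrelated jobs in the queue run concurrently:

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QSharedPointer>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/Sequence>
#include <ThreadWeaver/ThreadWeaver>

using namespace ThreadWeaver;

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // A sequence runs its elements one after the other.
    QSharedPointer<Sequence> pipeline(new Sequence);
    pipeline->addJob(make_job([]() { qDebug() << "download"; }));
    pipeline->addJob(make_job([]() { qDebug() << "parse"; }));
    pipeline->addJob(make_job([]() { qDebug() << "store"; }));

    // The sequence and an unrelated job are queued together; the unrelated
    // job may run concurrently with the sequence on another worker thread.
    stream() << pipeline
             << make_job([]() { qDebug() << "unrelated housekeeping"; });

    Queue::instance()->finish();
    return 0;
}
```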

The history of ThreadWeaver goes back to KDE 3. The idea of a thread-pool-based execution scheduler that manages dependencies between jobs was first implemented as a proof of concept using Qt 3. However, it turned out to be difficult to implement and use because the implicitly shared classes lacked thread-safe reference counting at the time. These fundamental problems were solved with the release of Qt 4. Additionally, the introduction of cross-thread signal-slot connections further simplified the communication between jobs and the application's user interface. The first production-ready version of ThreadWeaver was released as part of kdelibs with KDE 4.0. For KDE Frameworks 5, it was almost completely re-written to simplify memory management of jobs, make use of new Qt 5 features like atomic variables, and in part to reflect new language constructs in C++11 like lambda functions. ThreadWeaver comes with an extensive set of unit tests that all pass in the tech preview (hear, hear).

In the following weeks and months, the framework will be polished and debugged based on user feedback. A series of posts here on this blog will introduce individual ThreadWeaver concepts and features in depth, mostly based on example programs, and contrast it with thread handling in Qt using QThread or Qt Concurrent. ThreadWeaver is very close to production quality, having been tested continuously over the last couple of months. There may still be smaller, source-compatible changes to the framework. We ask interested programmers out there to provide feedback and bug reports to make ThreadWeaver what it should be: a worry-free, easy-to-use and powerful add-on to Qt that programmers enjoy using. Have fun!

[Image by Shannan Sinclair, thanks: http://www.flickr.com/photos/originalbliss/2897019812]



FLOSS in the Cloud: EOLE, Brussels, Dec 6

Happy Saint Nicholas Day, everybody! What better way to spend the day than to travel through a storm to Brussels and attend the 2013 incarnation of EOLE, the “European Open Source & Free Software Law Event”? Philippe Laurent opened the conference with the still blurry question of what the cloud is, quoting the FSF: “[cloud] … is a marketing buzzword with no clear meaning…”, one that is best avoided. The whole world did not listen and now uses the term widely. This post reflects both what was discussed and what I learned from the event.

It seems that while the cloud is still opaque, a common understanding is emerging of what cloud computing means. It represents a convergence of all the individual bits of running a service – software, platform, infrastructure, storage, hosting, billing, scaling and more – into a single, standardised, comparable offer. Essentially, it is a message to the engineers that nobody cares about the details, the individual twiddly bits; clients want one unified package for hosting something that is actually used by a user. Economically, it is another critical step towards massive standardisation of IT operations, making procurement easier because all relevant bits are integrated, and improving competition by making the offers of various providers comparable. We should expect average service prices per user to fall, pretty dramatically, and especially the fixed cost overhead of companies that formerly self-hosted to go down as well. In a couple of years, owning your own metal might sound like getting milk delivered to your door in cans.

It helped that Christian Verstraete from HP opened with a detailed overview of OpenStack. It showed the audience that there is a strong convergence of the market towards one free software solution, with backing from 95% of the relevant industry players. A standard test, similar to the Acid tests for web browsers, can be expected to emerge for compatibility between offerings by different cloud providers. With that, migrating from one provider to another should pose no technical issues, only contractual ones. Based on the ForgeRock experience, Lasse Andresen underlined that by stressing that solutions have to be completely free software, not open-core, and that a well-adopted Open Source solution cannot easily be killed. In this, the freedoms provided by the licenses do prove useful: companies may fail, but the technology remains.

So far, that was all good, but not very law-related. Things became interesting from a legal point of view when Patrice-Emmanuel Schmitz opened the panel, with his background as one of the authors of the European Union Public Licence. He summarised the issues with current licenses and the debate about what distributing or conveying software means for web services, and it seems that this is still mostly murky. The concentration of services into cloud offerings has led to the rise of new licenses (a trend nobody was hoping for, considering the mess of tons of mostly identical not-invented-here licenses that were in use a couple of years back).

The underlying problem, though, is fundamental: Open Source licensing is based on copyright, which governs reproduction, distribution, adaptation and performance of a copyrighted creation. None of these happen under the auspices of the user of the site, and therefore there is no copyright relationship regarding the software between the site provider and the consumer. There is a remainder of code being distributed to the user, like JavaScript libraries. But it is hard to construe a derivative-work relationship between that code and the rest of the application that runs server-side, especially because these JavaScript libraries are often treated more like data than code and not even linked server-side at all. They are more akin to input for a client-side interpreter than to a part of the program. If the web application is not a derivative work of the distributed libraries, the chain is broken, and a provider can claim to comply with Open Source licenses without offering the source code for their modifications of the server application. The Affero GPL solves this problem partially by requiring the provider to offer the source code to the user when the software is run on a server. This again ties the licensing to an element of the copyright bundle, namely performance. But it leaves a trace of a bad taste, because now there is a problem of proof: the user usually does not know what software was involved in rendering a response. Also, not all server software is licensed under the AGPL or similar licenses.

Contributing to Open Source is not something people do just because the license says so, but because they are somehow driven to collaborate. Web applications can still benefit from the Open Source way. What is different is that for libraries and applications, the cases the licenses were modelled for, users and developers are effectively treated the same, and the distinction only exists in what they do. For web applications, users do not necessarily acquire the right to use, study, modify and improve the source code, even if the developers published their product under a copyleft license. Yet that right is the norm that made it fun and enjoyable to contribute to Open Source projects. New norms and governance setups should be designed to maintain that situation and thus keep the motivation of contributors (individuals as well as institutions) intact. Compliance should be the norm by now, and I hope that the distrust that sometimes underlies the relationship (“Are they really showing all the software that is running?”) will be a thing of the past.

Many thanks to the organisers!

