Saturday, December 20, 2008

[Pub] Mule IDE

I published an article about the new Mule IDE in the current issue of the Eclipse Magazin. In the article I give an overview of Mule and show how the IDE supports developers in modeling their Mule applications. The IDE provides the following features:
  • Mule project wizard
  • Mule runtime configuration (you can define different Mule runtimes)
  • Graphical Mule Configuration Editor
  • Start your Mule Server from your IDE
More information about the Mule IDE can be found on the Mule IDE homepage.

Friday, December 19, 2008

[Arch] Application Architecture Guide

The architecture and design of software applications should be technology-neutral. The Application Architecture Guide from Microsoft guides developers through the design of applications based on the .NET platform. In my opinion, this guide does not only focus on the .NET platform and is a really good cookbook for software architects. As we show in our Best Practice Project, design patterns are technology-neutral and can be used in any language: the design principles of a Data Access Object or a Proxy are the same in Java as in .NET.
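Since the point above is that these patterns are language-neutral, here is a minimal sketch of the DAO idea in Java; the `Person` type and the in-memory implementation are just illustrative placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class DaoExample {
    // The domain object the DAO manages.
    public record Person(int id, String name) {}

    // The technology-neutral contract: callers depend only on this interface.
    public interface PersonDao {
        Optional<Person> findById(int id);
        void save(Person person);
    }

    // One possible implementation; a JDBC- or ORM-backed DAO would implement
    // the same interface without any change to calling code.
    public static class InMemoryPersonDao implements PersonDao {
        private final Map<Integer, Person> store = new HashMap<>();

        @Override
        public Optional<Person> findById(int id) {
            return Optional.ofNullable(store.get(id));
        }

        @Override
        public void save(Person person) {
            store.put(person.id(), person);
        }
    }

    public static void main(String[] args) {
        PersonDao dao = new InMemoryPersonDao();
        dao.save(new Person(1, "Ada"));
        System.out.println(dao.findById(1).get().name());
    }
}
```

The same interface/implementation split works identically in C# - which is exactly why the guide travels so well between platforms.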

The guide can be used as a reference and consists of the following five parts: Fundamentals, Design, Layers, Quality Attributes, and Archetypes - Design and Patterns.

Fundamentals

The first part focuses on fundamental architecture concepts, including application archetypes, layers and tiers, and architectural styles. One chapter focuses on the .NET platform.

Design

The second part is very interesting for software analysts because it provides guidelines for designing your application. Here you will find best-practice approaches for making decisions about distribution, selecting the right application type, and choosing the right architectural style (component-based, message-bus, ...). The chapter Architecture and Design is one of the most interesting ones, because it provides guidelines on design considerations, the architecture frame, persistence, security, and some other topics. The last chapter, Communication Guidelines, provides best-practice approaches for choosing the right communication between your software components and describes the design impact of choosing a communication technology.

Layers

The layers part focuses on the layered architecture and describes each layer in detail, including the Presentation, Business, Data Access, and Service layers. The layered architecture is one of the most commonly used architectural styles in software projects.

Quality Attributes

This part describes non-functional requirements in software projects and how these requirements can be achieved through the right design - for example, what must be accounted for to make an application secure and performant.

Archetypes - Design and Patterns

Don't confuse this chapter with Maven Archetypes. The last part of the architecture guide describes different types of applications, like Mobile Application, RIA, or Web Application. This is very interesting because you will find out which design principles and design patterns are typically used in such an archetype. For example, key patterns in web applications are Composite View, Front Controller, the classical MVC, Page Cache, and Page Controller.
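As an illustration, the Front Controller pattern mentioned above can be sketched in a few lines of plain Java (no servlet API, just the routing idea): a single dispatch point receives every request and routes it to a registered per-page handler.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A minimal Front Controller: every "request" enters through a single
// dispatch point, which routes it to a per-page handler (page controller).
public class FrontController {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    public void register(String path, Function<String, String> handler) {
        handlers.put(path, handler);
    }

    // Central entry point: cross-cutting concerns (logging, authentication,
    // ...) would live here, before the request is routed.
    public String dispatch(String path, String query) {
        Function<String, String> handler = handlers.get(path);
        if (handler == null) {
            return "404 Not Found";
        }
        return handler.apply(query);
    }

    public static void main(String[] args) {
        FrontController fc = new FrontController();
        fc.register("/hello", q -> "Hello, " + q + "!");
        System.out.println(fc.dispatch("/hello", "world"));
    }
}
```

In a real web application the handlers would render views (the MVC part), but the single entry point is what makes the pattern.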

This architecture guide acts as a bible for developers, and you will find little .NET code in it. In my opinion this guide is really good and should be checked out by every software architect and analyst.

Thursday, December 18, 2008

[Misc] Virtualisation for Software-Engineering

Easily accessible virtualisation tools like VirtualBox enable new applications in software engineering. An obvious usage pattern is to provide customers with prepared evaluation systems that would otherwise require rather complex installation and setup.

Not so apparent is their use in automated testing. Tests of higher granularity like integration tests are sometimes difficult to automate, as they often require (again) complex installations of several components (database, middleware, legacy components, ...), setup of these components, plus a consistent state across all of them. Virtualisation can facilitate that process significantly: the developers can create a consistent image of the system where all dependent systems are properly installed, configured, and fed with data.

This image is used as a reference. In the build and test automation one has to get the clean image, start it, install the more "dynamic" modules, i.e. the system under development and test, execute the (integration) tests inside the virtual machine, then collect the test results and shut the virtual system down again. That procedure allows deterministic test scenarios on different machines, also (or particularly) on a continuous integration server.
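As a rough sketch of that cycle, assuming VirtualBox's `VBoxManage` command-line tool and purely hypothetical VM and snapshot names ("it-server", "clean"), a build-tool plugin would essentially have to issue commands like these:

```java
import java.util.List;

// Sketch of the test-automation cycle described above, expressed as
// VBoxManage command lines (VirtualBox's CLI). The VM and snapshot
// names are hypothetical; a Maven/Ant plugin could hand each line
// to a ProcessBuilder.
public class VmTestCycle {
    public static List<List<String>> buildCycle(String vm, String snapshot) {
        return List.of(
            // 1. reset the VM to the clean reference image
            List.of("VBoxManage", "snapshot", vm, "restore", snapshot),
            // 2. boot it headless, suitable for a CI server
            List.of("VBoxManage", "startvm", vm, "--type", "headless"),
            // 3. ...deploy the system under test and run the integration
            //    tests inside the VM (e.g. via ssh), collect results, then:
            // 4. shut the virtual system down again
            List.of("VBoxManage", "controlvm", vm, "poweroff")
        );
    }

    public static void main(String[] args) {
        for (List<String> cmd : buildCycle("it-server", "clean")) {
            System.out.println(String.join(" ", cmd));
        }
    }
}
```

The missing piece, as noted below, is exactly this glue packaged as a reusable Maven/Ant plugin rather than hand-rolled scripts.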

Sounds good; however, in practice I am still missing tools that support this. Linux in the box is of course a good start, as everything is easily scriptable. However, if you want to include the virtualisation in the build automation (e.g. in Maven) and in the continuous integration server, plugins for Maven/Ant/... would be useful that allow you to control the virtualisation environment, trigger activities there, and retrieve data (e.g. the test results).

In an initial search I did not find many tools in that area. Any ideas?

Tuesday, December 09, 2008

[Misc] Glassfish

I recently took a closer look at the (Sun) Glassfish J2EE server. I never took it for a serious competitor in the field, as I had the impression it was just a reference implementation from Sun... However, I had to change my opinion. In recent years the Glassfish community has worked hard on their baby, and it currently seems to be a solid competitor in the field.

The Glassfish universe "not only" contains a J2EE server, but actually a whole set of enterprise tools, such as a message broker, a clustering framework, an enterprise service bus (JBI compatible), a library to implement SIP applications, and the like. Additionally, it is well supported by the NetBeans IDE. The recent (preview) version contains a J2EE runtime that additionally supports scripting languages like Ruby and Groovy and is based on the OSGi framework.

What I additionally like is the fact that Glassfish comes with a decent installation tool, provides a solid web-based administration interface, and seems to be reasonably well documented. And, of course, the whole stack is Open Source.

I must say, I am quite impressed so far. Any comments on that one?

Friday, December 05, 2008

[Misc] Mule Developer Blog

There is now a new blog which focuses exclusively on Mule, providing technical tips, comments, and breaking news around the Mule product line (ESB, Mule Galaxy, ...). The bloggers are developers from MuleSource and members of the Mule services team, which means that you get the information first-hand. There are already posts on, for example, how to write custom transformers in Mule and an introduction to expression transformers. Another interesting post focuses on performance tuning in Mule.

The idea behind this blog is to give the Mule community as much information as possible. This is the right way, because there are some issues (e.g. performance issues) where you otherwise end up searching the mailing list for hours for the right answer. Some posts emerge from discussion threads in the user mailing list.

Monday, December 01, 2008

[Arch] Archi-Zack!-ture

A few months ago I read a very good article series by Pavlo Baron about Software Architecture in the Java Magazin. Software architecture is an important component in software projects and you will find a lot of resources (books, web sites, blogs) which focus on this topic. He titled the series with Archi-Zack!-ture.

What does a software architect do? The role of a software architect becomes more important the more complex the system is. Software architecture is still a rather young discipline, and the value of a software architect in software development is often underrated. As Pavlo describes in the first part, it is hard to separate the role of a software architect from the role of a software developer. The architect enables developers to develop the software system by providing the necessary environment; the developer, on the other side, focuses on the fulfillment of the requirements. The motives are the same, but the emphasis differs.

The major part of the series consists of (anti-)patterns of software architecture. I picked the most important ones from my point of view. First I will enumerate the patterns which should be followed:
  • Architect also implements
    Each architect should also implement. This is very important, because as an architect you make decisions about performance, scalability, and similar issues. Without a technical background, these issues are very hard to design for and answer.
  • Senior Developer = Software Architect
    It is important that a software architect has experience in software development.
  • Management makes the final decision
    Someone from management must make the final decision. The software architect prepares the material on which the decision is based.
Now some anti-patterns:
  • Ivory-Tower Architect:
    The architect has lost touch with reality.
  • One Man Show:
    The software architect does everything alone.
  • Single point of decision:
    The software architect must decide everything, e.g. implementation details.
  • Design by Committee:
    On the other hand, it makes no sense to make every decision in the team, which ends in never-ending discussions. A good mixture is the answer.
  • Architecture by implication:
    Architecture is not documented.
  • Architecture without review:
    The architecture is never reviewed.
  • Meta architecture:
    The software architect remains too abstract.
  • Customers are idiots:
    The software architect is not a god!
  • No Refactoring:
    Management does not allow time for refactoring.
Some people believe that software architects are "kings" and that without them software projects don't work. The anti-patterns mentioned above are found rather often in projects, and architects should keep that in mind. It is not important which title is written on your business card; the content, the attitude towards the project, and how the project team works together are the keys to the success of the project.

Wednesday, November 12, 2008

[Arch] RESTful applications with NetKernel

The architectural style REST has gained some popularity and is often brought up against SOAP for interoperable web services. REST stands for Representational State Transfer and has some characteristics that distinguish it from other architectural styles:
  • Resources such as a person, an order, or a collection of the ten most recent stock quotes are identified by a (not necessarily unique) URL.
  • Requests for a resource return a representation of the resource (e.g. an HTML page describing the person) rather than an object that IS the resource. A resource representation represents the current state of the resource and as such is immutable.
  • Representations typically contain links to other resources, so that the application can be discovered interactively.
  • There is typically a fixed and rather limited set of actions that can be invoked on resources to retrieve or manipulate them. HTTP is the best-known example of a RESTful system and defines, e.g., the GET, PUT, POST, and DELETE actions.
Applications based on REST are typically very extensible, provide good caching support, and can easily be mashed up into bigger applications.
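A toy sketch of the uniform interface in Java (not a real HTTP stack): resources live under URIs, and a small fixed set of verbs retrieves or manipulates them; the returned `String` representations are immutable snapshots of resource state.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates the uniform interface: resources are identified by URIs,
// and one small fixed set of verbs (GET/PUT/DELETE here) manipulates
// every resource, whatever it represents.
public class ResourceStore {
    private final Map<String, String> resources = new HashMap<>();

    // GET: retrieve a representation (an immutable String snapshot).
    public String get(String uri) {
        return resources.get(uri);
    }

    // PUT: create or replace the resource's state.
    public void put(String uri, String representation) {
        resources.put(uri, representation);
    }

    // DELETE: remove the resource.
    public void delete(String uri) {
        resources.remove(uri);
    }

    public static void main(String[] args) {
        ResourceStore store = new ResourceStore();
        store.put("/person/42", "{\"name\": \"Ada\"}");
        System.out.println(store.get("/person/42"));
    }
}
```

The point of the exercise: a client needs to know only the URI and the verb set, never a resource-specific API, which is what makes REST systems so easy to extend and cache.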

NetKernel

Using the RESTful application pattern in non web based applications is currently not very well supported by programming languages and frameworks. NetKernel is an open source framework designed to provide a simple to use environment to program RESTful applications.

Its architecture is rather simple: programmers write modules and register them with the kernel. Each module registers its address space, which states which logical addresses (URIs) the module will handle and which Java class, script (Python, JavaScript, Groovy, ...), or static resource will act upon the request and return a resource representation. A module can also register rewrite rules that translate from one address to another.

Resources within NetKernel are accessed from the outside via Transports. Each module can have Transports that monitor external system events (e.g. JMS events, HTTP requests, cron events, etc.), translate these events into NetKernel requests, and place these requests into NetKernel's infrastructure, which routes each request to the appropriate resource.

NetKernel supports a wide range of scripting languages and uses resource representation caching to speed things up transparently for the developer. The internal request-response dispatching is done asynchronously, so callers can easily state that they do not care for an answer after 10 seconds, are not interested in the response at all, or place several requests first and then wait for the responses to come back. REST is most often associated with HTTP; with NetKernel one can apply the REST architectural style also to applications that do not use HTTP, as it is completely decoupled from the HTTP stack.
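The asynchronous dispatching described here can be illustrated generically (this is not NetKernel's actual API) with Java's `CompletableFuture`: fire several requests first, then collect the responses, bounding how long you are willing to wait for each one.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Generic illustration of asynchronous request-response dispatching:
// the caller issues requests without blocking, then decides later how
// (and how long) to wait for the representations.
public class AsyncDispatch {
    // A stand-in for an asynchronous resource request; a real kernel
    // would resolve the URI to a handler and cache the representation.
    static CompletableFuture<String> request(String uri) {
        return CompletableFuture.supplyAsync(() -> "representation of " + uri);
    }

    public static void main(String[] args) throws Exception {
        // Place several requests first...
        CompletableFuture<String> a = request("/stock/ACME");
        CompletableFuture<String> b = request("/stock/EMCA");

        // ...then wait for the responses, giving up after 10 seconds.
        String first = a.orTimeout(10, TimeUnit.SECONDS).get();
        String second = b.orTimeout(10, TimeUnit.SECONDS).get();
        System.out.println(first + " / " + second);
    }
}
```

Discarding the future instead of calling `get()` corresponds to the "not interested in the response at all" case.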

Compared to other REST frameworks such as Restlet, NetKernel is extremely well documented and several large sample applications can be downloaded from their homepage to get started quickly.

Related Links
  • http://www.1060.org - the homepage of NetKernel.
  • A recent article on TheServerSide.com about resource-oriented computing with NetKernel that provides a more thorough introduction.
Benedikt Eckhard (edited by Alexander Schatten)

Sunday, November 02, 2008

[Arch] Mule 2 and Beyond

For anybody who doesn't know the Open Source ESB Mule, I can recommend the presentation by Ross Mason held at the JavaPolis conference. In this presentation Ross gives an overview of Mule, including the component architecture and the new features in Mule 2. Other relevant topics are:
  • Develop services in Mule
  • Dealing with web services in Mule
  • Exception strategies
  • Transaction Management
  • Scalability
  • Projects around Mule: Mule HQ and Mule IDE
He also points to the projects on MuleForge, which provides Mule users with interesting enhancement modules and connectors, like LDAP or SAP.

Tuesday, October 28, 2008

[Event] ESEM-Conferences

From October 6th-10th the Experimental Software Engineering International Week (ESEIW) took place in Kaiserslautern, Germany. The ESEIW was organized in four joint events:
  • 16th Annual Meeting of International Software Engineering Research Network ISERN, Oct 6th-7th
  • 6th International Advanced School of Empirical Software Engineering. IASESE, Oct 8th
  • 3rd International Doctoral Symposium on Empirical Software Engineering IDoESE, Oct 8th
  • 2nd International Symposium on Empirical Software Engineering and Measurement. ESEM, Oct 9th-Oct 10th
Major topics of the ISERN Meeting were
  • aggregation opportunities from experimental results,
  • application of Empirical Software Engineering to Software Architecture
  • discussion of an Empirical Research roadmap and associated key tasks within a time range up to 2010
  • application opportunities of empirical investigations using simulation.
The ESEM main conference is an established forum for researchers and practitioners to report and discuss recent research results in the area of empirical software engineering and metrics. The conference focuses on topics related to processes, design and structure of empirical studies, and results of specific studies.

Paper Presentation

We gave a short paper presentation in the track "Empirical Evidence and Systematic Review" titled "An Empirical Investigation of Scenarios Gained and Lost in Architecture Evaluation Meetings" (D. Winkler, S. Biffl, M. A. Babar), which reports initial findings on team effects in scenario brainstorming processes for architecture evaluation.
Abstract: Studying the effectiveness of scenario development meetings in the software architecture process is important to improve meeting effectiveness. This paper reports initial findings from analyzing the data collected in a controlled experiment aimed at studying the effectiveness of meetings in terms of gained and lost scenarios of individuals, real and nominal (non-communicating) teams. Our findings question the effectiveness of holding meetings since more important scenarios were lost than gained in these meetings. In the study nominal teams performed better than individuals and real teams.

The slides of our presentation are available for download.

Key Note Talks

Besides presentations and discussions of academic papers, "state of the art" presentations focused on relevant topics for industry and best software engineering practice:
  1. Using empirical methods to improve industrial technology transfer. (Harald Hoenninger, Vice President Corporate Research, Robert Bosch GmbH, Germany).
  2. Empirical Challenges in Ultra Large Scale Systems (Mary Shaw, Institute for Software Research, Carnegie Mellon University, USA).

Abstracts of the key notes are available for download.

Dietmar Winkler (edited by Alexander Schatten)

Sunday, October 26, 2008

[Arch] SEDA Model and scalable enterprise applications

Scalability, performance, and fault tolerance are classical non-functional requirements when building enterprise applications. Setting up the application in a clustered environment is a popular approach that is often used in practice.

When building process-based applications you usually deal with long- and short-running processes. For example, an order process may take one week, whereas a save transaction is a process that takes (or should take) only a few seconds. Short-running processes often implement the backend business logic. In Service Oriented Architectures these short-running processes result in an orchestration of services provided by different systems.

Mihai Lucian describes in his article a simple scenario where different platforms in a backend process are connected using web service endpoints. One of the platforms that is queried has a slow response time. He describes a solution approach to battle the slow response time by using asynchronous IO in the servlet container and the Staged Event Driven Architecture (SEDA) model.

He mentions that current servlet APIs do not provide methods to deliver data to the client asynchronously. AJAX-based frameworks currently use one of these modes: polling, piggyback, and Comet. Apache Tomcat, for example, provides an implementation to handle asynchronous IO by decoupling the request and response from the worker thread, so you can prepare your response later.

The combination with the SEDA model is very interesting, because you can route each request to the right queue. As Mule is based on the SEDA model, Mihai Lucian illustrates how to implement such a scenario in Mule (see image).
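The staging idea behind SEDA can be sketched in plain Java: each stage owns an event queue and a worker thread, and stages are connected only through queues, so a slow stage backs up its own queue instead of blocking the caller's thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A minimal SEDA sketch: stages communicate only via event queues,
// decoupling request acceptance from (possibly slow) processing.
public class SedaSketch {
    static void runStage(String name, BlockingQueue<String> in,
                         BlockingQueue<String> out) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String event = in.take();          // wait for work
                    out.put(name + "(" + event + ")"); // process, pass on
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests  = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> enriched  = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> responses = new ArrayBlockingQueue<>(10);

        runStage("validate", requests, enriched);   // stage 1
        runStage("transform", enriched, responses); // stage 2

        requests.put("req-1");
        System.out.println(responses.take());
    }
}
```

In Mule the queues and workers are provided by the framework; routing a message to "the right queue" amounts to wiring services together exactly like the two stages above.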




From my point of view the example is very interesting based on the following key points:
  • Asynchronous communication in a servlet container
  • The most interesting thing is how to correlate the request with the response through different layers
  • How Mule fits into such an architecture
  • Practical example of JMS
  • Using routers in Mule to route the message to the right queue
  • Using Apache CXF in Mule
A full description of the example is provided on the article homepage. At the end of the article he provides some benchmarks in order to give a clear view of the advantages of using such an architecture.

Friday, October 24, 2008

[Misc] Two new interesting SE books out

Recently two fresh books fell into my hands:

  1. The Productive Programmer, Neal Ford, 978-0596519780
  2. Clean Code, Robert C. Martin, 978-0132350884
The first book is a very ambivalent one.

Pros: It covers a wide range of areas on how to be more productive. You will find topics like ACCELERATION, FOCUS, AUTOMATION, test-driven design, and code analysis, up to very philosophical points. The real strong point of this book is that it has several dozen important points most programmers really forget about in their daily work. Each point might be trivial when analyzed in isolation, but gathering all the productivity points in one place is worthwhile. And even some principles such as SLAP (already covered by Beck) cannot be explained often enough.

Cons: The content leaves a rather mixed impression. You can find the same information in several places in the book (like the "magic" Unix find), along with some uninteresting passages like his words on Reflection, exceptions, and array indices, and some flaming about EJB 2 (really, 2!). It is thus not astonishing that this book gets only two stars on amazon.de.

But nevertheless: even if the book is a mixed work, if you adhere to 80% of its good rules, you will be a massively better programmer.

The second book, Clean Code (subtitle: A Handbook of Agile Software Craftsmanship), is more a real work of craftsmanship itself. The idea is the same as for the book Beautiful Code, which I recently reviewed here: Martin has found smart authors for each chapter. The chapters are:
  • Meaningful Names (! which, in my opinion, are a horror in big companies with developers from different cultural and technical backgrounds...)
  • Functions
  • Comments (you think you can not learn here?)
  • Formatting (sounds boring but is not) and
  • Error Handling, Boundaries, Unit Tests, Classes, Systems, Emergence, Concurrency, Successive Refinement, JUnit Internals, and more...
It closes with a catalog of smells and heuristics, a little like the one in Fowler's refactoring book, but nevertheless of great use.

Robert C. Martin's Clean Code differs from the rest because there is a lot of code in it, and a lot of code that migrates from bad to good. You really feel while reading that the authors have invested in strong code examples. This makes it a really valuable resource to read, so it has my strong recommendation.

To conclude: these two books, combined with all the Fowlers and Becks (not the beer...), led me to create a catalog of all the useful points they have written down. It is still small and in beta state, but you are invited to use, contribute to, or link to this growing list of best practices.

So God bless all (most of?) the software developers!
(who read these books ;-)

Monday, October 20, 2008

[Conf] CEE-SET Conference

Last week I attended the CEE-SET conference in Brno, where I presented a paper written by Robert Thullner, Josef Schiefer, and myself: we analyse the application of Open Source frameworks in implementing enterprise integration patterns. For that matter, a series of scenarios was implemented with (combinations of) different frameworks like Apache ActiveMQ, Apache Camel, Apache ServiceMix, and Mule.

The paper is available for download.

Saturday, October 11, 2008

[Arch] Introduction to REST

REST is an architectural style for letting distributed applications communicate. It is considered an alternative approach to XML-RPC or SOAP web services. I generally like this Google Talks introduction to REST:



I would also recommend additional resources to get a more complete picture of REST; check out this link.

Monday, October 06, 2008

[Arch] OpenSource ESBs

Over the last couple of years, the major Open Source ESBs, including Mule and ServiceMix, have been extended and are used in critical business solutions. Tijs Rademakers and Jos Dirksen offer a book which gives an overview of Open Source ESBs and which combinations of Open Source technologies are used with them. The main solutions covered in this book are Mule and ServiceMix; therefore, most of the examples in the book are based on these two technologies. Other Open Source ESBs that are covered are Apache Synapse, Open ESB, and the new integration framework from Spring, called Spring Integration.

In the TechBrief the authors mention that all Open Source ESBs focus on Enterprise Integration Patterns. If you understand these patterns, it's very easy to understand the implementation and handling of ESBs.

The book is divided into three parts. The first part concentrates on readers who are not yet familiar with an ESB:
  • An overview of ESB functionality and which Open Source ESBs are available on the market
  • A deep look into the Mule and ServiceMix architectures
  • How to install Mule and ServiceMix and how to run them
The second part focuses on the ESB core functionality, which covers some of the Enterprise Integration Patterns. Here the reader gets some connector examples, like JMS, JDBC, POP3, and Web Services.

The third part covers case studies and also illustrates integration scenarios with BPM engines, like jBPM and Apache ODE.

In the tech brief there was also a short comparison between Mule and ServiceMix. When to use which one is hard to say; it depends on your requirements. But in this interview one of the authors said that in a web-service-based architecture the JBI approach is often the better choice, whereas Mule is very often used because you can also transfer plain Java objects, which is often very convenient and faster. They also talk about the integration of legacy systems, which is sometimes easier with Mule, because when you use ServiceMix all messages must be transformed into XML.

You can download chapter 1 and chapter 4 from the book homepage.

Tuesday, September 30, 2008

[Arch] A Comparative Analysis of State-of-the-Art Component Frameworks

Andreas Pieber and Jakob Spoerk wrote a thorough and very good thesis about software components and Java-based component frameworks. The authors introduce component-based software development, derive criteria to compare frameworks and eventually discuss OSGi, Spring, J2EE, SCA, and JBI on an individual basis and in connection with each other as some problems require the combination of several component frameworks.

Download the thesis here.

Thursday, September 18, 2008

[Arch] Pattern Based Development of Business Applications

In a recent series Jos Dirksen and Tijs Rademakers describe "pattern based development" on the basis of Open Source middleware (ESBs). Specifically, their first article describes how to implement and integrate applications using Mule, and the second article gives a good introduction to the Java Business Integration standard (JBI) and its implementation ServiceMix plus a message broker.

Wednesday, September 17, 2008

[Arch] Requirements?!

The Waterfall Requirement

I find this really interesting: in talking with many people who actually make software (not just talk about it *g*), I have noticed over the last years that many have a growing problem with the term "requirement". Martin Fowler sums it up greatly in his recent blog post. He makes a strong point in saying that the understanding many people have of "requirements" is actually still very much driven by a waterfall-like understanding of the software engineering process.

As a matter of fact, requirements seem to be problematic on many levels: on the level of the customer/programmer relationship (do they understand each other?), on the level of abstraction, in how to manage them, and in how to test whether the implementation follows the requirement that we hopefully understood correctly, just to name a few. A story often heard is: "We spent months with the customer building lists and lists, pages and pages of requirements, or bought expensive requirement management software, and in the end the development of the product was very decoupled from these lists; but good to know we have them in the files."

Now my question is this: the trail from requirements to software still seems very natural to us, and it is stunning for many developers and managers that it actually oftentimes does not work. I would like to add: sometimes it works very well, but I will come to that point later. Now, what could be the replacement for requirements (in case we agree that they do not work as intended in their traditional sense)?

Observation comes in...

Well, there are modifications of requirements engineering in agile processes, like storytelling in XP. But Martin Fowler makes another interesting observation: many successful web applications actually work out the requirements as they go by providing some base services, a platform (e.g. for exchanging photos). The very important point in this phase is to build in functionality that allows the management to observe the behaviour of the customers: which functions are they using, what are they annoyed about (forum discussions, email feedback, ...)? Then they add experimental new features and test their acceptance. (Or you can go the next step and let the customers develop the applications, as Yahoo! Pipes shows.)

It's Alive!

The interesting point is that this procedure was very elaborately described in the book "It's Alive: The Coming Convergence of Information, Biology and Business" by Christopher Meyer and Stan Davis, a highly recommended read. They do not describe the process of software engineering so much as the strategies of modern enterprises, and derive the core principles:
  • Seed: bring in a new feature, idea; probably only to a subset of customers, probably in variations for different customers
  • Select: select the successful variations
  • Amplify: eventually amplify the successful ideas and bring more of that sort
Meyer and Davis brought examples from the "old economy", but actually in software engineering (particularly in web applications) we have a very good opportunity to rather easily "seed" new features, observe the behaviour, select, and then amplify the good ideas. The key point here is, as Fowler also mentions, to focus more on the "observation framework" (in a technical, but also in a management sense!) than on trying to get all requirements right from the beginning.

... and back to the Waterfall

Having said that, I want to come back to a point I mentioned earlier: in the "agility euphoria", some evangelists forget to mention that there is a broad variety of different software products and engineering efforts (and I am not even speaking of safety-critical systems here). In many cases a "waterfall-inspired" process actually works pretty well. This can be the case in projects where a technical guy (a developer at best) already has a lot of experience in a particular domain and is either re-writing a legacy application (a case I am observing right now) or developing a new application that follows a series of similar applications for similar customers. This case is quite a regular one in the industry, and we should not forget this scenario. In such cases the requirements of the new application can often be nailed down quite precisely.

Why is that? Well, actually the "seed, select and amplify" process happened inside this technical expert. He or she worked with old applications in that domain, often with a broad range of customers that have experience with several systems of a sort, and so has developed quite a good understanding of (1) the domain, (2) the customer, and (3) the competitors. In such cases the problem often lies more in getting the implementation phase right and not spoiling the project in the last steps, e.g. with less experienced developers (either in the technical sense, or in the domain, or in the worst case in both). Hence, agile principles as suggested by Scrum, for example, can be very helpful in the implementation phase to keep control of the process, even though the requirements are quite stable from the beginning.

Wednesday, September 03, 2008

[Arch] Google AppEngine & Python

Cloud computing is the fashion right now, and Google is positioning its AppEngine against services like Amazon EC2. However similar at first glance, the two approaches are rather different in detail: Amazon's service is more of a virtual server hosting (where you have all the freedom, but are responsible for administration too) plus a set of web services (like the storage services S3 and SimpleDB or the queue service SQS).

AppEngine offers a concrete application development environment in Python plus a simple database that has to be used. So you are limited to Python code and Python frameworks like Django, and you cannot install an arbitrary database; on the other hand you do not have to deal with many administration issues, and Google takes care of the scaling.

Guido van Rossum, the father of Python (who is now employed at Google), gives a very interesting one-hour presentation on YouTube on how to write and configure a Python/Django web application within the Google AppEngine environment.


Tuesday, August 19, 2008

[Arch] Trends in Data-Management (aka Databases)

It is interesting for me to observe: relational databases have been attacked several times in the last decades, e.g. with object-oriented databases (gone) or XML databases (gone). Now a new trend in data management seems to appear: databases, or better data storage/management mechanisms, that follow a much looser paradigm than relational databases, often using a lightweight (often REST- or JSON-based) access strategy. This demand for new data-management strategies seems to have several reasons; some that come to my mind:
  • Performance: in some cases, complex queries are not required (or can be replaced by simple ones): databases that perform very fast with pure primary-key retrieval
  • Complex data structures are not needed
  • ACID is not needed, i.e. mostly simple writes are performed but fast reads are necessary
  • Agile development seems to favor rather ad-hoc data structures vs. carefully planned ones (whether this is a good trend is written on a different page)
  • Distribution is important and distributed relational databases are a hard thing to do
  • Access to rather document-oriented data structures is required
and probably many more. Even an older tool like Apache Lucene (actually designed as a full-text search engine) is used in several projects as a kind of database replacement. This is particularly possible when reading is more important than writing data and no particular ACID requirements are in place. And Lucene provides a nice and rich query language for that matter.

Recently Amazon's EC2 platform made a lot of waves as a distributed deployment platform to be used for applications that have to scale significantly (there is, btw., an Open Source version implementing part of the interfaces, named Eucalyptus). Part of the Amazon toolset are two storage mechanisms: S3 and SimpleDB. For both, APIs are available to be used from applications. S3 is a storage mechanism for storing rather large chunks of data (like files, documents) and is organised in "buckets". SimpleDB, currently in beta, is a storage mechanism for more fine-grained issues. With SimpleDB, chunks of data can be stored using a primary key (item id) and a set of attribute/value pairs per item. To access SimpleDB, a WSDL interface description is available, as well as a sort of REST-style interface.
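To make that data model tangible, here is a toy sketch in plain Java of how SimpleDB organises data: item ids mapping to multi-valued attribute/value pairs. The class and method names are my own invention for illustration, not Amazon's actual API.

```java
import java.util.*;

// A toy in-memory model of SimpleDB's data layout: a domain maps an
// item id (the "primary key") to a set of attributes, and each
// attribute may hold several values. All names here are illustrative.
public class ToySimpleDb {
    private final Map<String, Map<String, List<String>>> domain = new HashMap<>();

    // Add one attribute/value pair to an item, creating the item if needed.
    public void putAttribute(String itemId, String name, String value) {
        domain.computeIfAbsent(itemId, k -> new HashMap<>())
              .computeIfAbsent(name, k -> new ArrayList<>())
              .add(value);
    }

    // Plain primary-key retrieval: fetch all attributes of one item.
    public Map<String, List<String>> getItem(String itemId) {
        return domain.getOrDefault(itemId, Collections.emptyMap());
    }

    public static void main(String[] args) {
        ToySimpleDb db = new ToySimpleDb();
        db.putAttribute("item-1", "title", "Duct Tape");
        db.putAttribute("item-1", "color", "silver");
        db.putAttribute("item-1", "color", "black"); // multi-valued attribute
        System.out.println(db.getItem("item-1").get("color")); // [silver, black]
    }
}
```

Note that there is no schema and no join: exactly the "loose paradigm" discussed above, which is why such stores can stay simple and fast.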

The newest kid on the block (as it appears to me) is Apache's CouchDB, which is currently in the Apache incubator. CouchDB seems to follow a similar strategy to Amazon's SimpleDB but focuses on REST/JSON-style access (here is a nice comparison between SimpleDB and CouchDB). CouchDB is (unfortunately, in my opinion) written in Erlang, which makes installation and usage (at least in the Java environment which most Apache projects share) rather difficult. However, conceptually it seems to be quite interesting and I suppose we will see more projects of that sort soon.

Ah, and speaking of marketing: projects like CouchDB explicitly express that they are not alternatives to relational databases :-) However, the first projects appear that provide RESTful interfaces for relational databases...

Btw.: does anyone know other projects in that domain that I have not seen yet?

Monday, August 18, 2008

[Arch] Mock Objects

I stumbled over this article yesterday: a neat and short description of Mock objects and a motivation of how Mock objects in general, and mocking frameworks in particular, can support (unit) testing, especially with classes that have dependencies. I like this very short introduction because the concept of Mock objects is actually not so difficult to understand, but the need for mocking frameworks is not so easy to grasp.

If the basic idea is understood the documentation of frameworks like JMock can kick in and do the rest ;-)
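To make the basic idea concrete, here is a minimal hand-rolled mock in plain Java, without any framework; all class and method names are my own invention for illustration. A service depends on a mail gateway, and the test substitutes a mock that records calls instead of sending anything:

```java
// The collaborator the class under test depends on.
interface MailGateway {
    void send(String to, String body);
}

// The class under test.
class OrderService {
    private final MailGateway mail;
    OrderService(MailGateway mail) { this.mail = mail; }

    void confirmOrder(String customer) {
        // ... business logic would go here ...
        mail.send(customer, "Your order is confirmed");
    }
}

// A hand-written mock: it records interactions so the test can verify
// them, without ever touching a real mail server.
class MockMailGateway implements MailGateway {
    String lastRecipient;
    int sendCount = 0;

    public void send(String to, String body) {
        lastRecipient = to;
        sendCount++;
    }
}

public class MockDemo {
    public static void main(String[] args) {
        MockMailGateway mock = new MockMailGateway();
        new OrderService(mock).confirmOrder("alice@example.org");
        // The verification phase: did the class under test interact as expected?
        System.out.println(mock.sendCount);     // 1
        System.out.println(mock.lastRecipient); // alice@example.org
    }
}
```

Writing such mocks by hand quickly becomes tedious when interfaces grow, which is exactly the gap frameworks like JMock fill by generating the recording and verification machinery for you.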

Addition: Thanks to the comment of reader Touku, who recommended the article by Martin Fowler: Mocks Aren't Stubs.

Thursday, August 07, 2008

[Misc] Puppet and Puppetmaster

I am back from Indonesia, and what could be a more worthy topic for the first blog post after the trip? Exactly: Puppet. In Indonesia I listened to the IT Conversations talk with Luke Kanies about his project. Puppet is an open source system-administration framework for Unix-based operating systems. I believe that Puppet shows quite some innovations not easily found in other tools and has the potential to be the next step in system administration.

First: the target audience of Puppet are system administrators and/or developers that have to roll out and administrate a potentially large number of server and client (!) systems. Everyone who has to administrate more than two machines knows that doing that manually is for sure not an entertaining business. Now what I believe is Puppet's strongest idea is to define an abstraction layer over system administration:

Puppet allows one to define the behaviour of machines in an abstract way by using a language to describe classes of configurations; as in object-oriented languages, inheritance is possible. The usual tasks of a sysadmin can be written in the Puppet language. More importantly, Puppet tries to abstract from OS details, so it does not matter for ordinary activities like configuring an Apache webserver whether the target OS is Linux, Solaris or BSD. To abstract from concrete implementations, Puppet uses so-called resources: a good example are users. As we know, they can be defined and managed in different ways on different platforms and contexts. Puppet's resources hence deal with concepts like user, file, cron and so on in the same way on different operating systems.
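To give an impression of this abstraction, here is a small hypothetical Puppet manifest sketch; the resource names and parameters are illustrative only (consult the Puppet documentation for the exact syntax of your version):

```puppet
# A hypothetical configuration class: the same declaration should work
# on Linux, Solaris or BSD, because Puppet maps the abstract resources
# (package, service, user) to the platform-specific commands.
class webserver {
    package { "apache2":
        ensure => installed,
    }
    service { "apache2":
        ensure  => running,
        require => Package["apache2"],
    }
    user { "deploy":
        ensure => present,
    }
}
```

The sysadmin declares the desired state ("installed", "running", "present") rather than the commands to reach it; this is what makes the same class reusable across operating systems.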

Essentially Puppet can be seen as the missing next step after virtualisation solutions: a virtualisation describes the hardware requirements of a machine, Puppet describes the operating system and service requirements. So ideally you define the specifications of your machine (needs Apache webserver, mysql... version...) and then execute that on the very machine using Puppet. If you need a second machine with the same configuration, just reuse the configuration from the first (Puppet calls that repeatable configurations).

Puppet is also a tool in the sense that a so-called "puppetmaster" can communicate with Puppet clients. These clients are under the control of the puppetmaster.

Configurations are idempotent: this means you do not need to assume a specific context or status on the machine to run a configuration "script". You can simply start a configuration on a specific machine, and the configuration definition with Puppet brings the machine into the desired state. Actually, Puppet executes these configurations at a regular interval to keep the machine in the desired state.

As far as I understand Puppet so far, it is the next level of system administration (as mentioned above, particularly also in combination with virtualisation), allowing one to manage even complex infrastructures. There are apparently already a number of companies and institutions using Puppet on a larger scale. Luke Kanies mentions in his talk that Google, too, is using Puppet to administrate several thousand machines (apparently partly MacOS), as do many other international companies.

Puppet is written in Ruby and is provided as an Open Source framework. However, one thing that worries me a little bit at the moment is the fact that there is currently no big community behind Puppet. Puppet is the "baby" of Reductive Labs and there essentially of Luke Kanies and, I believe, few further developers. What I have heard from this project so far is really impressive, and I hope that the project attracts more developers soon and that Reductive Labs is open-minded enough to open the development to outsiders.

Tuesday, August 05, 2008

[Pub] JBPM meets ESB

The combination of a process engine and an Enterprise Service Bus (ESB) is one interesting aspect of modern service oriented architectures (SOA). Both an ESB and a process engine provide similar concepts, and software architects often have problems finding the right solution. Therefore Bernd Rücker and I wrote an article about it in the German Java Magazin. As a practical showcase, the integration is shown with a small example using JBoss jBPM and two Open Source ESBs: JBoss ESB and Mule.

The simple showcase implements the following example: some event is generated and saved as a file (this may be an order, some incident, an alert, whatever). This file is picked up by the ESB and a new jBPM process is started. The process contains a human task, where somebody has to review the data of the event and decide whether that event can be ignored (e.g. a false alert) or has to be handled. In the latter case, the event is sent to an existing case management system via Web Service (could be Lotus Notes or something like that). The case management system sends a JMS message as soon as the case is closed. This message is again picked up by the ESB and the right process instance is triggered (called "signaled" in jBPM).

The article covers the following topics:
  • The basic combination of a process engine and an ESB
  • When does it make sense to combine a process engine with an ESB
  • How does JBoss ESB integrate jBPM, and which event handlers can the process designer use to call ESB services
  • How does Mule integrate jBPM, and which event handlers does Mule provide for the process designer
  • Lessons Learned :)
To compare the ESB implementations, the showcase was implemented with JBoss ESB, available here, and Mule, available here. Following the links, you will find a detailed description of the two implementation scenarios.

Wednesday, July 09, 2008

[Misc] (Open Source) Developers and Marketing

In a recent IT Conversations episode, Steve Yegge from Google talks about developers and their attitude towards marketing. I would say that many stories he tells are more or less well known; however, I think he raises an issue: the relation between development and marketing (in the "business world") or, even more problematic, in Open Source projects.

This starts with project naming and includes license issues (which customer or manager understands 60 different OSI licenses...) and selling of the product. And in one thing he is definitely right: even when we are working in an Open Source environment and we are (mostly) technicians, we want our project to be used (why else would we put it out there), plus a healthy project needs a proper community. Maybe we should once in a while put code, tests, architecture discussions and the like aside and try to put on the shoes of our (potential) users. And I am afraid in many Open Source projects we will realise that these shoes do not fit all too well ;-)

This brings me, btw., to another thought: maybe the way the OS process is structured and organised leads in many cases to excellent code, but not necessarily to excellent products (in the sense that the user understands what the software could do for him and how he could use it efficiently). I think OS projects and their tools actually encourage mostly coders to participate in a project. There are hardly any OS projects where some contributors focus only on interaction design, documentation, marketing...

Might be worth a second thought?!

Friday, June 27, 2008

[Arch] Pattern Based Development with Mule

I've found an interesting article about Pattern Based Development with Mule. This article illustrates how you can implement the Enterprise Integration Patterns using Mule elements. All code examples are based on Mule 2.

Thursday, June 26, 2008

[Pub] Open Source ESBs for System Integration

For our German-speaking audience: in the current iX magazin, Markus and I discuss the expectations on an ESB and the current status quo in the Open Source arena, particularly focussing on Mule and the trinity of Apache ActiveMQ, Camel and ServiceMix. As an example we show a support process and analyse how the process execution can be supported by integration middleware.



Then we go into some details of what you could expect from an ESB (MOM, routing, filtering, message transformation, various endpoints, ...). I outline the Java Business Integration (JBI) standard, which is in my opinion an important attempt to define integration concepts and enterprise service components that can be exchanged between ESBs of different vendors.

In brief we also outline concepts of clustering and failover, mostly on the example of ActiveMQ using broker networks, failover protocols and the like. Finally we go into more detail on the current status of ServiceMix, Camel and Mule. (But our avid blog readers will know most of that already from various blog postings anyway.)

Friday, June 13, 2008

[Pub] The tenth book is out!

Well again something to celebrate:
My tenth book is out!
(is it time to stop now?)

It was again a lot of work, but the result looks good and is of high quality. The Proceedings of the First International Conference on Object Databases ICOODB 2008 (ICOODB.org) have been printed and are now being sent around the world.

Please drop me a line if you wish to order a copy!

Any hints for the next ten books are welcome :-)
Best
Stefan

Thursday, June 12, 2008

[Tech] Update on "Maven: The Definitive Guide"

I am happy that the guys from Sonatype are continuously improving their free book on the "de facto standard" Apache Maven build-automation framework: "Maven: The Definitive Guide". The book covers most topics typical Maven users will encounter, including generation of documentation (site) and writing Maven plugins (mojos).

I think this book is very useful for the newbie as well as for more experienced Java developers. The book is frequently updated and available for online reading and as PDF download; in the recent update they put their book under a Creative Commons license.

Wednesday, June 11, 2008

[Event] Software and Systems Essentials Conference 2008

From April 28-30 the 2nd Software and Systems Essentials Conference 2008 took place in Bern, Switzerland.

An important goal of the conference is bringing together people from business, industry, and academia who are working in software engineering and information technology with its various aspects. Discussions and exchange of experiences between users in public and industrial contexts and vendors of software solutions (regarding software development frameworks) were in the main focus of the event. 

Main topics of the conference were software processes and the exchange of experience on the individual application in various contexts (e.g., in the public application domain), project management regarding systematic systems development processes, and software quality. Among several topics in these areas, I recognized a focus on traceability, i.e., requirements tracing from requirements analysis over design to software code (vertical traceability) and vice versa and linking requirements and specifications to software code and test cases on various levels of abstraction (horizontal traceability). 

We gave a presentation in the track "Testing and Quality Assurance" titled "QATAM: ein Szenariobasierter Ansatz zur Evaluierung von Qualitätssicherungsstrategien" (D. Winkler, C. Denger, F. Elberzhager, and S. Biffl). This presentation is a result of an ongoing project of TU Vienna and Fraunhofer IESE in Kaiserslautern, Germany (Institute for Experimental Software Engineering).

Summary of the presentation
 
Efficient development of complex high-quality software systems requires systematic planning activities. The selection of an appropriate software process, e.g., the V-Model XT, is a success-critical activity in software development. Software processes define the sequence of steps within a software development project (e.g., which products are required at which milestone). Additionally, constructive and analytical methods support developers in building a product (constructive methods) and verifying/validating software solutions (analytical methods).

Nevertheless, resources typically are a critical issue in software engineering practice. Thus, optimal resource planning is required with respect to quality assurance (QA) planning, for small and medium enterprises as well as for large companies. The "Quality Assurance Tradeoff Analysis Method" (QATAM) focuses on the definition and evaluation of quality assurance strategies to enable the optimal application of a balanced set of agreed methods along the project life-cycle. The presentation includes the basic concept of QATAM and illustrates its application with respect to better planning of method selection and application for more efficient project execution.

The slides of our presentation (in German) are available for download.

The slides of all presentations will be available to the conference participants via the conference website.

Keynotes

Besides presentations and discussions of academic and industry papers, "state of the art" presentations focused on relevant topics for industry and best software engineering practice:
  1. E-Government Programm Schweiz – Ein komplexes Programm in einem komplexen System (Peter Fischer, EFD) 
  2. From informal process sketches to enactable process: How to represent your development process with SPEM 2.0, Rational Method Composer, and Team Concert  (Peter Haumer, IBM) 
  3. Agiles Projektmanagement für große Projekte (Bernd Österreich, OOSE)
Keynote presentations are available on the conference website.

Dietmar Winkler (published and edited by Alexander Schatten)

Saturday, June 07, 2008

[Arch] Apache CXF and "Code First" Webservices

Dana Gardner has an interesting interview in his Briefings Direct podcast (transcript) talking about the Apache CXF project. Apache CXF is one of the leading Open Source service frameworks, supporting a series of service protocols like SOAP, XML/HTTP, RESTful HTTP, or CORBA. Additionally, CXF plays in concert with Apache Camel, ServiceMix and ActiveMQ.

I believe that CXF is a great project and, as I said, a leading webservice framework. I also perfectly agree with their statement that Open Source frameworks in the middleware/SOA field will more or less take over the market (I would probably not buy shares of Oracle/BEA *g*). But I feel quite a bit uncomfortable about two fundamental concepts that were stressed several times in the interview:
  1. Code First approach and slightly connected:
  2. Abstracting too much from Webservice protocols like WSDL: "So, a lot of these more junior level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of these more technical details.", Kulp
I personally believe that this is not the right way to go. I think we have to take the criticism of the REST folks (among others) seriously here, and the question we have to ask ourselves is: why in the first place do we want to use SOAP web services?

Because it is an easy remoting approach that a junior-level engineer can click together with a wizard thanks to excellent frameworks and UI components? Surely not! First: a remote method call is something fundamentally different from a local method call; it probably should be treated as such. But even more important: services on the basis of SOAP have some underlying assumptions:
  • platform independence is needed
  • services of a rather coarse granularity are exposed
  • interoperability over system/company borders is imperative, i.e., the service interface is in the center and should be considered properly and not change every minute
  • remote service calls are rather the exception (coarse granularity, aggregated functionality)
  • strong formalisation is needed (XML Schema, service description, security...)
  • i.e. performance losses due to XML (un-)marshalling are acceptable considering the advantages gained by this "neutral" approach
and probably some more. The point however is that SOAP services are not just an ordinary remoting approach; they are meant to expose services under specific conditions. (For other remoting problems, other technologies like RMI or REST are probably better suited.) In the cases where SOAP is a good architectural choice, I would suggest that the platform-neutral service description, i.e. WSDL, should stand in the center of attention. The idea is that different parties can express a neutral and platform-independent service description, plus a data description in W3C XML Schema, as a foundation for cooperation.
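To illustrate what "contract first" means in practice, here is a minimal, hand-written WSDL skeleton; the service and message names are invented for illustration, and binding details are elided:

```xml
<!-- A minimal contract-first sketch: the neutral interface comes
     first, any concrete implementation comes later. -->
<definitions name="CustomerService"
             targetNamespace="http://example.org/customer"
             xmlns:tns="http://example.org/customer"
             xmlns="http://schemas.xmlsoap.org/wsdl/">
  <types>
    <!-- data structures defined in platform-neutral XML Schema -->
  </types>
  <message name="GetCustomerRequest"/>
  <message name="GetCustomerResponse"/>
  <portType name="CustomerPort">
    <operation name="getCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <!-- binding and service elements would map this port type to SOAP/HTTP -->
</definitions>
```

Both a Java and a .NET team can generate their stubs from such a document, which is exactly the point of putting the WSDL, not the code, at the center.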

The next step, not the first step, is to implement the service. So my feeling about code-first approaches is that they can lead developers and architects into a dangerously wrong direction (just two annotations and the webservice is done, so we are made to believe). What we really need are not code-first webservice frameworks, but easy-to-use WSDL editors/modeling tools.

Additionally, a webservice infrastructure is by definition a complex beast. Trying to abstract all underlying protocols away from the developers easily gives them a wrong idea about the actual complexity of their undertaking. When (e.g. interoperability or security) problems occur, they probably have no idea about the reason and the means to fix them. So give us good service modeling tools, but no code-first approaches. These lead us into the wrong direction.

Just my two cents.

Thursday, May 29, 2008

[Pub] What's new in Mule 2

Mule from MuleSource is one of the most used Open Source ESBs in actual integration projects. On March 31, MuleSource announced the final Mule 2 release. This major release comes with some new features and architectural improvements. For this reason I've written an article for the German audience interested in Mule, for the JAX Center. The article is available online and covers the following topics:
  • The new schema based configuration approach
  • API changes
  • New concepts such as Mule Context and Registry
  • The role of Spring in Mule 2
  • The changes of Transports and Transformers
  • Migration from Mule 1.x to Mule 2

We've upgraded our prototype application to Mule 2, illustrating the basic concepts of an ESB. The prototype is available here.

Thursday, May 15, 2008

[Misc] Martin Fowlers live DSL book

Today I would like to point your interest to the project that Martin Fowler is currently working on. If you look at his homepage, you can see that he is actually working on a book about Domain Specific Languages (DSLs).

The interesting thing is that he is writing in public, and you can even trace his work with an RSS feed he is providing. His motivation was partially that there is much hype around this topic and lots of specific talks and papers, but no holistic approach. So whoever is interested in DSLs (and software engineers should be!) should read this to gain a broad view of the topic.

And Martin Fowler would not be Martin Fowler if he didn't use simple examples to illustrate his views: it reads like an adventure in a castle where you walk around, turn the light on three times, open doors and panels, take a picture off the wall, and much more to obtain a treasure.

And this example illustrates that you have rules and input data, and thus you might have something like a state machine that helps you to solve a task. So one core item in this book is to show the variety of ways these state machines can be built, using:
  • Method Chaining
  • Pushing Parameters into objects or nesting
  • Literal collections (like being used in Ruby or Rails)
  • Closures (please let them arrive in Java...)
  • Parsing XML or other notations
and even macros or annotations. In doing so, he carefully outlines and distinguishes between internal and external approaches. If you as a software engineer had to solve such a problem, you would normally remember only a few of these approaches. Using Martin Fowler's book, your senses will be quite sharpened to find the best solution for your problem.
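To give a flavour of the first of these techniques, here is a tiny method-chaining sketch in Java. The castle domain is borrowed loosely from Fowler's example, but the code and all names are my own illustrative approximation, not taken from the book:

```java
import java.util.*;

// A tiny internal DSL in the method-chaining style: each call returns
// the builder itself, so configuring a rule reads almost like a sentence.
class RuleBuilder {
    private final List<String> actions = new ArrayList<>();

    RuleBuilder turnLightOn() { actions.add("light-on");     return this; }
    RuleBuilder openDrawer()  { actions.add("open-drawer");  return this; }
    RuleBuilder takePicture() { actions.add("take-picture"); return this; }

    List<String> build() { return actions; }
}

public class DslDemo {
    public static void main(String[] args) {
        // The chained calls accumulate the sequence of actions that a
        // state machine could later execute when triggered.
        List<String> rule = new RuleBuilder()
                .turnLightOn()
                .openDrawer()
                .takePicture()
                .build();
        System.out.println(rule); // [light-on, open-drawer, take-picture]
    }
}
```

The same rule could instead be expressed with nested function calls, literal collections, or an external notation that is parsed: exactly the spectrum of options the book catalogues.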

But he has already written more chapters, on expression builders, networks, lots of parser issues, symbol tables and much more.

To conclude: it's not only worthwhile to see a respected author working nearly live on a book. The current DSL text fragments will also help a lot to find the right hammer for lots of software engineering problems.

Tuesday, May 13, 2008

[Arch] Programmatic Dependency Injection

Dependency Injection (DI) is a very cool approach to structure and decouple your software components. Dependency Injection can be seen as a design pattern, and therefore several different implementations of this pattern exist. You can also write your own DI implementation, but in many cases software developers use existing frameworks like Spring or HiveMind, which do all the hard work. I've found an interesting article about programming dependency injection with an abstract factory. The popular factory pattern is a very common approach to abstract the complex initialisation of service components, and some ideas of the factory pattern can be found in DI frameworks. The article illustrates an abstract factory pattern with the following two key differences to the traditional factory:
  • An optional factory interface replaces the abstract factory class
  • Every factory method is responsible for creating an object and injecting its dependencies
Based on a simple example with two components, he illustrates the following scenarios:
  • Lazy instantiation of service components
  • Non-singleton scope (if a new instance of an object must always be created)
  • Wiring up objects dynamically
  • Creation of local stateful objects with dynamic parameters for singletons
To summarize the article:
  • Using factories, developers have to write more code to get started
  • Factory implementation code changes significantly if code switches between lazy initialization and eager initialization or from singletons to non-singletons
  • The abstract factory design pattern supports creating local stateful objects from dynamic parameters, handling checked exceptions thrown during object creation, and wiring up objects dynamically
  • Better performance, because it uses straightforward Java code and hardwiring
For those who are not familiar with Dependency Injection, look at this simple introductory article, illustrating Dependency Injection with a real-life scenario. In this article, the author also presents a simple implementation of a DI framework, loading object implementations from a properties file.

Whether you use a DI framework or implement your own DI, the DI approach brings a clean structure to your software components/systems, and unit testing becomes easier by using mock objects. DI frameworks such as Spring provide a wide range of additional features, like AOP, transaction management, API templates, and some other stuff, which can also be used.
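To make the factory-style DI idea more tangible, here is a condensed sketch in plain Java; all names are invented for illustration, and this is not the referenced article's actual code:

```java
// A hand-rolled dependency-injection factory: the factory is the single
// place where objects are created and wired together, so client code
// never constructs its own dependencies.
interface Repository {
    String load(int id);
}

class InMemoryRepository implements Repository {
    public String load(int id) { return "record-" + id; }
}

class ReportService {
    private final Repository repo;
    ReportService(Repository repo) { this.repo = repo; } // constructor injection
    String report(int id) { return "Report for " + repo.load(id); }
}

class AppFactory {
    private Repository repository; // lazily created singleton

    // Lazy instantiation: the repository is built on first use only.
    synchronized Repository repository() {
        if (repository == null) {
            repository = new InMemoryRepository();
        }
        return repository;
    }

    // Non-singleton scope: a fresh service per call, with its
    // dependency injected by the factory.
    ReportService reportService() {
        return new ReportService(repository());
    }
}

public class DiDemo {
    public static void main(String[] args) {
        AppFactory factory = new AppFactory();
        System.out.println(factory.reportService().report(7)); // Report for record-7
    }
}
```

Swapping `InMemoryRepository` for a mock in tests only requires a different factory (or factory subclass), which is the decoupling benefit discussed above.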

Wednesday, May 07, 2008

[Tech] Persistence Layer Generation

More than a year ago, I wrote a blog entry discussing Apache iBatis (my favorite "O/R" mapping framework) and wrote mostly about the not so well known sub-project Abator. Today I realised that there were some significant updates on the iBatis project recently, and also a new Abator version, renamed to iBator, was released.

iBator is a code-generation tool that performs a kind of introspection of a relational database schema and, with a supporting XML configuration, generates basic iBatis SQLMaps, Java classes, and DAO classes (Spring). iBator can be used via an Ant task, a standalone command or an Eclipse plugin. It seems that this subproject got more momentum recently and I hope for further updates soon.

So iBator allows one to generate a significant part of the persistence code of an application, which also helps "iBatis rookies" understand how iBatis works. The problem of the original Abator was that it did not allow roundtrip engineering: typically, the code generation is only the first step in the development of a persistence layer. One will in many cases want to modify the generated DAOs to better fit the project's needs. It seems that the recent iBator release allows merging newly generated Java classes with changes made in the old ones by using the Eclipse plugin. However, I have not verified this feature yet.

I believe that iBator makes the already very easy and straightforward iBatis project even more accessible by providing good boilerplate code to start from, yet I would be curious about actual "roundtrip experiences"...

Tuesday, May 06, 2008

[Arch] JBI Misses the Mark

One of the main differences between Apache ServiceMix and Mule ESB is the JBI implementation. Apache ServiceMix is a full implementation of the JBI standard. However, you can plug Mule into a JBI container by using the JBI transport from Mule. I personally love Mule because it's extremely lightweight, and Mule 2 fits ideally with Spring applications, especially since Mule 2 is based on Spring 2.x. Up to now I didn't have a deeper look at Apache ServiceMix. But based on Alex's information, the combination of Apache ActiveMQ, Camel and ServiceMix is a solid basis provided by Apache.

Over the last couple of years, Ross Mason, the founder of the Mule project, has often discussed why he decided not to adopt JBI for Mule. In his blog post "JBI misses the mark" he mentions the basic idea of Mule:

"[...] Mule was designed around the philosophy of 'Adaptive Integration'. What this means for Mule users is that they can build best-of-bread integration solutions because they can choose which technologies to plug together with Mule. [...]"

About JBI, he points out the following assumptions and their consequences:
  • XML messages will be used for moving data around
  • Data Transformation is always XML-based
  • Service contract will be WSDL
  • No need for message streaming
  • You need to implement a pretty heavy API to implement a service
  • It’s not actually that clear what a service engine is in JBI
Another interesting point is:
"[...] JBI seems to be a 'standard' written by middleware vendors for middleware vendors. This 'vendor view' of the world is one of the main reasons Open Source has done so well. Traditionally, Open Source has been written by developers much closer to the problem being tackled. These developers can deliver a better way of solving the problem using their domain knowledge, experience and the need for something better. This was the ultimate goal Mule and given the success of the project I believe that goal has been realized with the caveat that things can always be improved (which we continue to do). [...]"
I'm waiting for some comments (especially from Alex) on what he thinks about Ross's statements :)

Monday, May 05, 2008

[Misc] Late April Joke: OLPC XO and Windows...

Ok. I do not tend to use strong words in this blog, but even considering putting Windows on the OLPC XO is probably the most questionable idea I have heard in the last years in the IT world. The whole idea of the OLPC was (with much effort!) to create an open system, from hardware to software, that everyone can modify. To create a device you can learn with and learn from. The hardware and software are linked together, co-evolved for high performance on rather low-performance hardware to achieve a highly efficient and powerful device. Just considering putting Windows on that device is such a weird idea that I believed it was an April joke when I read the first article. Why would anyone want to destroy the OLPC idea, which was definitely more than a cheap laptop, by making just that: a cheap, crappy Windows laptop? What is the rationale here? If Mr. Negroponte believes he cannot drive the project any longer, he should leave or stop it, but not ruin everything that was built up with a lot of effort by many spirited developers worldwide.

I wrote this article to sympathise with all OLPC members who feel betrayed, if only by the circulation of such an idea. It is like inviting the club of vegetarians to a summer party, promising the best vegetarian food, and then serving bloody steak and spare ribs. Sorry, guys, for this bad news associated with a great and inspired project. Carry on, and leave this nonsense behind you!

Friday, May 02, 2008

[Pub] Service Composition

In my recent (German) Infoweek article I discuss service composition using the SCA and SDO standards and the open source runtime Apache Tuscany. The article is freely available on the Infoweek site. I personally think that both standards are very interesting and not well enough known at the moment. The Apache Tuscany project is still in the incubator but seems to have good momentum. Recently a new version was announced that also provides OSGi integration, as the runtime works with Apache Felix. Javalobby has an article about this new release and the Tuscany/OSGi integration.

However, what I am still waiting for is an integration of SCA/SDO in Apache Tuscany with JBI, i.e., the ability to deploy the runtime as a JBI component, e.g. in ServiceMix. I have no objections against OSGi; however, my feeling is that it would fit even better into the integration context of a JBI enterprise service bus.
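To give an impression of what SCA composition looks like, here is a minimal composite descriptor. This is an illustrative sketch only: the component and class names are made up, and the exact namespace depends on the SCA specification version your Tuscany release implements.

```xml
<!-- Minimal SCA composite: two wired components, one exposed as a service -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="HelloComposite">

  <!-- Promote a component's service to the outside world -->
  <service name="HelloService" promote="HelloComponent"/>

  <component name="HelloComponent">
    <!-- Implementation as a plain Java class (illustrative name) -->
    <implementation.java class="demo.HelloImpl"/>
    <!-- Wire a dependency to another component in the same composite -->
    <reference name="greeter" target="GreeterComponent"/>
  </component>

  <component name="GreeterComponent">
    <implementation.java class="demo.GreeterImpl"/>
  </component>
</composite>
```

The appeal of the model is that the wiring between components lives entirely in the composite file, so the Java implementations stay free of lookup or transport code.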

Wednesday, April 30, 2008

[Misc] Strong Opinions on Freedom

Today I listened to the IT Conversations podcast "Eben Moglen on Licensing in the Web 2.0 Era". I must say that this talk made me think, and it certainly left me with mixed emotions. Eben was discussing with Tim O'Reilly, and I definitely disliked the fact that he attacked O'Reilly on a quite personal and, in my opinion, unprofessional level.

Still, there is a point he makes. Eben Moglen is a professor of law at Columbia Law School and founder of the Software Freedom Law Center (working with Richard Stallman). He was recently busy with the GPLv3, and this was part of the discussion. The main point Eben raised again and again, however, was that we have wasted at least 10 years discussing "Open Source" instead of discussing freedom. We should talk about patent laws and other regulations in the first place.

I think there is something to it. In discussions and publications I have also tried to focus less on Open Source itself and more on the conditions that make Open Source the right way to go, as Open Source itself is not the issue. I always felt that, for example, protocols are a more important field of discussion and policy than software actually is. And here is the point where legal policies could play an important role for freedom (of choice) and lay a proper foundation for non-monopolistic commerce: if we had, for example, proper regulation of office protocols, such as a rule that only openly specified document formats (like those used in OpenOffice) are allowed in public services, this would put pressure on software vendors, lead to proper competition on the market, and finally also foster a rich Open Source scene. Then we would have the freedom of choice and the freedom to use our data and information in any way we want, plus it would have positive effects on the Open Source scene, and not the other way round.

Ok, these are my two cents on the topic. To conclude, I think that Eben is definitely making a valid point, even if he chooses an aggressive tone in his speech that is, in my opinion, not appropriate. Listen and decide for yourself.

Tuesday, April 29, 2008

[Pub] Implementing Enterprise Integration Patterns using Open Source Frameworks

Robert Thullner recently finished his excellent master's thesis, titled "Implementing Enterprise Integration Patterns using Open Source Frameworks". In his thesis he refers to the EI patterns described by Hohpe and Woolf. Robert analyses a set of leading open source frameworks in this domain (Mule, Apache ActiveMQ, ServiceMix and Camel), examining how they implement the patterns and how they support developers in implementing EI scenarios. (Camel in particular impressed us, both on the conceptual level and in its ease of use.)



He defines a set of scenarios using specific patterns (the figure above shows one of them), which are implemented with various technologies and combinations thereof to evaluate and demonstrate the capabilities of each technology or mix of technologies. Finally he categorises the frameworks and gives hints on implementation best practices.

I don't want to go into details here, but anyone interested in Enterprise Integration Patterns and Open Source frameworks may want to download the full thesis here. The sources of his examples can be downloaded as well.
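To give a flavour of the patterns the thesis deals with, here is the core idea of one of them, the Content-Based Router, sketched in plain Java. This is a minimal illustration of the pattern itself, not the API of Mule, Camel or any of the other frameworks analysed in the thesis; all names are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// A minimal Content-Based Router: each route pairs a predicate
// (the routing condition) with a consumer (the target channel).
public class ContentBasedRouter<M> {
    private final List<Predicate<M>> conditions = new ArrayList<>();
    private final List<Consumer<M>> targets = new ArrayList<>();

    // Register a route; returns this so routes can be chained fluently.
    public ContentBasedRouter<M> when(Predicate<M> condition, Consumer<M> target) {
        conditions.add(condition);
        targets.add(target);
        return this;
    }

    // Deliver the message to the first target whose condition matches;
    // unmatched messages are silently dropped in this sketch.
    public void route(M message) {
        for (int i = 0; i < conditions.size(); i++) {
            if (conditions.get(i).test(message)) {
                targets.get(i).accept(message);
                return;
            }
        }
    }
}
```

The frameworks analysed in the thesis provide exactly this kind of routing declaratively, plus transports, transformers and error handling around it.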

[Tech] Database Migration

I found an interesting project: migrate4j, which was introduced at Javalobby today. The idea behind this tool is to alleviate the issues that come up when applications are developed against a relational database and the database schema changes between versions. I.e., databases at customer sites or used by other developers have to be adapted to the needs of the new version of the software. Possibly you even want to downgrade again.

The main page of the project already gives a good insight into the functionality of this tool. The idea is to describe "up" and "down" grading steps in Java classes that can be executed within the build automation cycle. Up and down are relative to the current version of the database. So it should be possible to up- and downgrade the database to the desired level automatically when needed.
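The up/down idea can be sketched in a few lines of plain Java. Note that this is a hypothetical sketch of the concept, not migrate4j's actual API (which I have not verified in detail); the schema is simplified to a list of table names.

```java
import java.util.List;

// Hypothetical sketch of versioned schema migrations (not migrate4j's real API):
// each migration knows how to apply ("up") and revert ("down") one schema change.
interface Migration {
    void up(List<String> schema);    // e.g. create a table
    void down(List<String> schema);  // revert that change
}

public class Migrator {
    private final List<Migration> migrations;  // ordered: index i brings schema to version i+1
    private int version = 0;                   // current schema version

    public Migrator(List<Migration> migrations) {
        this.migrations = migrations;
    }

    // Apply or revert migrations until the schema is at targetVersion.
    public void migrateTo(int targetVersion, List<String> schema) {
        while (version < targetVersion) {
            migrations.get(version).up(schema);
            version++;
        }
        while (version > targetVersion) {
            version--;
            migrations.get(version).down(schema);
        }
    }

    public int currentVersion() { return version; }
}
```

Hooked into the build cycle, such a migrator can bring any customer database to the schema version the deployed code expects.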

Very interesting idea; however, I wonder why there are not more tools like that around, as everyone developing database applications is fighting with such issues, I suppose. Have I overlooked such tools? Any recommendations?

Wednesday, April 23, 2008

[Conf] IDC SOA Conference

Today I was invited to give the keynote speech at IDC's SOA conference in Vienna. I was talking about building agile business processes from "decoupled" services. I discussed three different views on aggregation:
  • A "formal" top-down approach using Standards like BPEL or SCA; this approach typically is process driven
  • Event-driven architectures, which is a rather bottom-up approach (and a very agile and flexible one!)
  • And finally a data-driven approach using "syndication" features and standards, typically following the REST principles (which reminds me that I have wanted to write something about REST for an eternity...)
If you are interested in my presentation download it from my website.

Friday, April 18, 2008

[Tech] Tech Brief on Mule 2

A few weeks ago the new major release of the Open Source ESB Mule was published. On TheServerSide Ross Mason, the founder of Mule and CTO of MuleSource, gives some statements about the new version. In this tech brief he points out the following issues:
  • Major API changes and improvements
  • Architecture improvements
  • Transports, transformers and connectors have a consistent look and feel
  • Schema-Based Spring XML configuration
  • A REST pack was released with 2.0 hosted on MuleForge
  • Future support for SCA
My impression of the new version is very positive. It's much cleaner, and the new schema-based configuration makes configuring Mule an easy task, not least through the XSD support of my XML editor. There is now much work to do to migrate some support modules and extensions available on MuleForge.
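To illustrate what the schema-based style looks like, here is a rough sketch of a Mule 2 configuration. Treat it as illustrative only: the schema locations must match your exact Mule version, and the component class name is made up.

```xml
<!-- Sketch of a Mule 2 schema-based (Spring XML) configuration -->
<mule xmlns="http://www.mulesource.org/schema/mule/core/2.0"
      xmlns:vm="http://www.mulesource.org/schema/mule/vm/2.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesource.org/schema/mule/core/2.0 http://www.mulesource.org/schema/mule/core/2.0/mule.xsd
        http://www.mulesource.org/schema/mule/vm/2.0 http://www.mulesource.org/schema/mule/vm/2.0/mule-vm.xsd">

  <model name="sample">
    <service name="EchoService">
      <inbound>
        <vm:inbound-endpoint path="echo.in"/>
      </inbound>
      <!-- The component class is illustrative -->
      <component class="demo.EchoComponent"/>
      <outbound>
        <pass-through-router>
          <vm:outbound-endpoint path="echo.out"/>
        </pass-through-router>
      </outbound>
    </service>
  </model>
</mule>
```

Because each transport contributes its own namespace and XSD, the editor can validate endpoints and offer completion, which is exactly where the new configuration style pays off.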

Wednesday, April 16, 2008

[Conf] Object Database Conference Review


On the 13th and 14th of March, Berlin hosted ICOODB 2008, the First International Conference on Object Databases (some pictures here).

Despite several transportation strikes, more than 150 registered attendees were able to listen to the 25 talks at TFH-Berlin. The conference started with the science day on Thursday the 13th. Among the first speakers were:

  • Christof Wittig (CEO db4objects Inc.) with a cool Web 2.0 keynote
  • Mike Card from the OMG about database standardization
  • Prof. Subieta presenting his stack based approach to object databases
After all these talks there was a heated discussion about future directions of object databases. One main question was whether new developments should be user driven or standards driven.

The second day (14th) was the application day with talks from:
  • Robert Greene (Vice President Versant Inc.)
  • Leon Gudzenda (CTO Objectivity)
  • Ralf Westphal with two invited talks about Transactional Memory and AmazonDB
  • Carl Rosenberger
  • Chris Beams (SpringSource)
and many more ...

So as you can see, the program didn't just cover object databases: we had talks about LINQ, JPOX, SQL 2003, and even Oracle contributed a lot of interesting material!

In the next month we will finish the proceedings covering the science papers. Those who are interested can order them (by contacting me).

Furthermore, we will put some interesting slides on the webpages in a few weeks.

To conclude: due to the great visitor response, ICOODB is likely to be continued in 2010 and then every two years at a different location. We are currently negotiating to provide the best place in the world for ICOODB 2010. So stay tuned!

[Pub] Enterprise Service Bus - Concepts

For our German speaking audience: Markus wrote a very good article about Enterprise Service Bus concepts for jaxenter. This article gives a good introduction on ESB concepts, Integration Patterns, Binding, Transformation... the whole program :-)

It is freely available, so...

Wednesday, April 09, 2008

[Misc] Google App Engine

Today I came across a post about the Google App Engine.

"[...]Google App Engine is designed for developers who want to run their entire application stack, soup to nuts, on Google resources.[...]"
From a service-oriented architecture perspective this approach can be interesting, because companies can host their services on Google and service consumers have a common way to access these services. By using the Google platform developers can do the following:
  • Write code once and deploy
    Developers write the code, and Google App Engine takes care of the rest
  • Absorb spikes in traffic
    Automatic replication and load balancing with Google App Engine
  • Easily integrate with other Google services
    Using built-in components provided by Google
The service is now launching in beta and has a number of limitations. The first 10,000 developers to sign up will get access for development.
"The service is completely free during the beta period, but there are ceilings on usage. Applications cannot use more than 500 MB of total storage, 200 million megacycles/day CPU time, and 10 GB bandwidth (both ways) per day. We’re told this equates to about 5M pageviews/mo for the typical web app. After the beta period, those ceilings will be removed, but developers will need to pay for any overage. Google has not yet set pricing for the service."
At present, applications must be written in Python, because Google's infrastructure is based on it.

Update: Christoph wrote us a comment and referred to the Google App Engine Blog as a good resource!

Monday, April 07, 2008

[Event] Sustainability and IT

Today, Monday the 7th, we invite you to an event at the Austrian Computer Society on the topic "Sustainability and IT". The event will be held in German:

I would like to invite all members of the OCG's Open Source working group, as well as all other interested parties and late deciders (!), to the following event:

Today, Monday 7 April (!) 2008, 16:30

Zemanek Hall of the Austrian Computer Society (OCG)

Wollzeile 1-3, 1010 Wien

"Sustainability and IT: A Reorientation?"

The following programme points are planned:
  • Alexander Schatten: Introduction to the event and the idea of reorienting the working group
  • DI Friedrich Schmoll (Umweltbundesamt): "Green IT: Just a Marketing Buzzword?"
  • DI Georg Meixner (IBM): "IT Sustainability and Costs"
  • Discussion with the speakers
  • Discussion about future activities and the direction of the OCG working group
I am of course open to further suggestions; please either discuss them here or contact me personally by email. Further details can also be found in the new blog: Forum Nachhaltigkeit.