Saturday, December 22, 2007

[Arch] Book Review: Kent Beck "Implementation Patterns"

As promised on 10th Dec, here is the second book review. Recently we discussed "Beautiful Code", and now we have: Kent Beck, "Implementation Patterns", Addison-Wesley, ISBN-13: 978-0321413093

Of course Kent Beck is well known in the software-engineering and development space. He wrote books about
  • Better Smalltalk
  • Patterns
  • Eclipse
  • JUnit Pocket Guide
  • Refactoring
  • Test Driven Development,
and of course he might be best known for XP and his XP books.

Thus it's more than important to listen when he brings out a new book. And in fact this book is a wonderful and easy read - in contrast to "Beautiful Code" - and a good reference for every ambitious developer.

Let's first have a look at the book's chapters:
2 - Patterns
3 - A Theory of Programming
4 - Motivation
5 - Class
6 - State
7 - Behavior
8 - Methods
9 - Collections
10 - Evolving Frameworks
App A - Performance Measurement

So the first impression is that something like state or behavior sounds like the classic pattern categorization. But it is approached from a different perspective: the developer's perspective. For example, the last three chapters - and even more so performance measurement - can rarely be found in developer books. Unfortunately a topic as important as performance is just a small appendix. Most examples are written in Java, which matters especially for the collections chapter, which also shows lots of performance comparisons.

The interesting surprise in this book is that Beck writes about topics like "classes" from scratch. Many of us might think we know everything about classes and cannot be surprised. But Beck's writing always forces you to look from another perspective. And this is the book's strength. When Beck writes about code "communication" and e.g. the naming of classes or variables, he pulls you up from a developer to an advanced developer who sees the complete context.

In fact Beck's book is a great reference that covers lots of topics. Some examples: Classes, Superclasses, Interfaces, Abstract Classes, Versioned Classes, Value Objects, Specialization, Subclasses, Implementors, Conditionals, Delegation, Pluggable Selector, (Anonymous) Inner Classes, all kinds of access strategies, Variables & Fields, Parameters, Constants, Names, initialization modes, and so on in the Behavior and Methods chapters.

So what might be criticized?

In fact some things can be found elsewhere, in other books like Bruce Tate's "Better, Faster, Lighter Java" or Bruce Eckel's "Thinking in Java". And Kent Beck even reuses parts from his older books. Sections such as the "Pluggable Selector" can be found both in his TDD book and in Implementation Patterns, even if Kent has invented completely new code examples for this pattern. And occasionally an example is a little elusive (e.g. when missing code symmetry looks more like missing intention/implementation definitions from the start). But don't get me wrong: it's well worth reading and should lie next to nearly every developer!
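For readers who don't know the Pluggable Selector: the idea is to store the name of the method to invoke in a field and dispatch to it via reflection, instead of creating one subclass per behavior. A minimal Java sketch of the idea (my own illustration, not one of Beck's examples):

```java
import java.lang.reflect.Method;

// Pluggable Selector sketch: the object keeps the name of the method to
// call and dispatches to it via reflection at runtime.
public class Report {
    private final String printMethodName; // e.g. "printHtml" or "printText"

    public Report(String printMethodName) {
        this.printMethodName = printMethodName;
    }

    public void print() throws Exception {
        Method method = getClass().getMethod(printMethodName);
        method.invoke(this);
    }

    public void printHtml() { System.out.println("<h1>Report</h1>"); }
    public void printText() { System.out.println("REPORT"); }
}
```

Used as `new Report("printHtml").print();` - which is exactly why Beck recommends it only sparingly: the flexibility comes at the price of losing compile-time checking and tool support.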

To summarize

If you are an advanced developer guru who has reflected on your own code for years, you will not find the book a completely new revelation. But for developers who still love to reflect about code and like to have a developer's reference at hand, this book is a wonderful meta-best-practices extract covering lots of topics that Tate's and Eckel's books are missing.

Need a post-Christmas gift?
Happy Christmas!

Friday, December 21, 2007

[Tech] Tim O'Reilly: From Web 2.0 to Robotics and Genomics?

Tim O'Reilly speaks in an interview at the Web 2.0 Expo in Berlin about the "second coming of the web", which they named Web 2.0 at the time ("a stupid name"), and he shares some thoughts about current "social networks" and "social graphs" as implemented in Facebook, and why he is not really happy with many of the current developments. He again stresses the importance of data over applications and sees several "undiscovered databases".

He also refers to OpenSocial, an interesting initiative from Google to provide a common API for various social applications, which could give software developers a basis for creating larger clusters of social applications. Furthermore he discusses privacy aspects and the change in people's attitude towards privacy.

However, he is apparently not happy with current developments, which do not live up to the possibilities that new devices (like the iPhone) and "Web 2.0" in general would allow. He also compares innovation cycles: the PC was hot in the 80s and got very boring in the 90s, also due to the dominance of companies like Microsoft. Similar developments could be seen with the phone companies, which overslept most of the new developments - and the same might happen to today's giants like Google.

So the underlying question is: Do companies and particularly developers understand the potential of "Web 2.0"?

[Misc] Merry Christmas

Merry Christmas to all our readers and a thank you to all authors who contributed this year!

I think this was quite a good year for us. We now have a good and continuously increasing number of readers and feed subscribers; I think the quality of the articles was at least fair, and by the end of the year we even got a new author in Stefan Edlich.

I hope we are going to do fine in 2008 and we are again going to be able to welcome many new readers and more interesting discussions.

Have relaxing Christmas holidays - I think you all deserve it - and stay tuned in the new year!


cheers


Alex

Tuesday, December 11, 2007

[Event] The One Laptop per Child Austria Initiative: OCG

This blog article announces an event in Vienna:

For everyone who missed the really interesting presentation by OLPC Austria at the TU, I am happy to announce the following event on behalf of Mag. Stockinger and the OCG. OLPC is not only an important project for developing countries, it also offers programmers a challenge that is worth taking on! For everyone interested in the project or in development work in this area:

The One Laptop per Child Austria Initiative
Treffpunkt Kulturinformatik, 13 December 2007

http://www.ocg.at/kultur/kulforms-aktuell.html

One Laptop per Child (OLPC) is the most ambitious education project in the world. The XO laptop is a teaching tool developed specifically for children. It is extremely robust, has a display that is readable in full sunlight, and can be used for days without a direct power connection.

OLPC Austria is the first European association that supports the development of the XO laptop. Its goal is to develop software for the laptop and to promote the project in Austria and Central Europe. OLPC Austria operates with the official support of the OLPC project at the MIT Media Lab.

The presenters will also bring along several XO laptops, which participants can try out hands-on.

Please register and attend!

Monday, December 10, 2007

[About] Welcome Stefan!

A warm welcome to our new blog author Stefan Edlich! Stefan is currently a professor at TFH Berlin. We got to know each other at a persistence event that I organised in Vienna, where Stefan gave a great talk about db4o. Stefan is not only an expert in object-oriented databases and involved in the organisation of the ICOODB conference, but also has a broad view and vision of Software Engineering (also, maybe particularly, in the Open Source domain), as can be seen from the seven (!) books he has published on topics ranging from Apache Ant and Commons to db4o. Enough said from me, check out his homepage for details!

I am happy to have you on board, looking forward to more articles, and hope for an interesting exchange of news, ideas and concepts on this platform!

cheers


Alex

[Arch] Two important new books: two reviews

Recently two interesting books have been published:

1. Andy Oram & Greg Wilson, "Beautiful Code",

O'Reilly, ISBN-13: 978-0596510046

2. Kent Beck, "Implementation Patterns",

Addison-Wesley, ISBN-13: 978-0321413093

Let's have a look at the first one. The second one will be looked at in the next review posting here.

There were a lot of rumors before "Beautiful Code" was published. Perhaps because there are not really that many books that cover something like "the art" of coding. So whenever I ask around, all the gurus I know seem to be reading that book. So it must be important...?!

What primarily stands out is that the book is edited by Oram / Wilson. What they have done is: they have called for chapters. In this way they were able to attract 33 chapters from famous coders. This means that the book has no golden thread. The chapters do not build upon one another. They simply cover the "beautiful code" view of their respective author. This in turn means that the chapters differ entirely and are not connected. So you have chapters that sound more like beautiful design or beautiful architecture, or chapters that discuss what beautiful code might look like or what beautiful language design might be.

Let's have some random examples:
  • Brian Kernighan shows us a regular expression and its power, using C code stuffed with lots of pointers.
  • Karl Fogel shows the delta editor that is the basis of subversion. This includes C Code with lots of comments.
  • Jon Bentley is the first to use 'higher level' languages; he compares some quicksorts and even shows a little mathematical background.
  • Alberto Savoia introduces JUnit and shows how beautiful tests can cover a binary search algorithm.
  • Image processing code by Charles Petzold.
  • Bryan Cantrill discusses threads in connection with beautiful code.
  • Jeffrey Dean and Sanjay Ghemawat show you evolving code leading to the map-reduce solution.
  • Andreas Zeller writes about beautiful debugging. He doesn't show too much code, but offers insights that can rarely be found elsewhere.
  • Yukihiro Matsumoto (the brilliant author of Ruby) only shows 13 lines of code but gets into a short philosophical discussion of what a programming language should look like in order to 'offer' the ability to write even more beautiful code.

There are a lot more topics like:

finding code, XML verifiers, beautiful frameworks, operator precedence, secure communication, DNA sequencing and gene sorting, beautiful code at CERN or in Linux, a reliable enterprise system for NASA's Mars Rover mission, software transactional memory, audio & Emacs, and even logging and REST are covered.

Most readers might think at first glance that some areas are not of interest: Linux code, bioinformatics or the architecture of an enterprise resource planning system might not be everyone's first choice to read.

But nevertheless: in my opinion, nearly every one of the 33 chapters in this book is a wonderful insight into beautiful XXX from a different viewpoint, where XXX is code, software engineering, design, problem solving or whatever.

But one should not get the weight of this book wrong. It is not possible to read the 550 pages like a novel and finish them in a few evenings. The book challenges you to invest hard work in each of the 33 chapters. The problem / challenge is to understand each piece of code and each problem. In my opinion this needs about an hour per chapter and some sheets of white paper to gain the complete insight.

So to summarize, there might be two kinds of readers for this great book:

A) Just read the pages and you will at least have heard about 'map-reduce' or cool testing and debugging. This works faster.

B) You would like to understand each area covered in the book. This might cost you 33 hours but you will improve your IT knowledge dramatically.

So what type of reader are you?

Wednesday, December 05, 2007

[Arch] Threads are evil

I just stumbled over the article by Edward A. Lee, professor at UC Berkeley, with the title "The Problem with Threads" (PDF):

In this technical report he raises some interesting points. Actually we all know how difficult it is to program "clean" multi-threaded applications, particularly when data has to be shared. He analyses the problems in detail and illustrates that even quite simple algorithms/patterns like the observer pattern are very tricky to implement in a truly thread-safe way.
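To see why, consider a minimal Java sketch (my own, not taken from Lee's paper) of a naively synchronized observer:

```java
import java.util.ArrayList;
import java.util.List;

// Naive "thread-safe" observer: every method is synchronized, yet the
// combination of the two methods is still fragile.
public class ObservableValue {
    private final List<Runnable> listeners = new ArrayList<Runnable>();
    private int value;

    public synchronized void addListener(Runnable listener) {
        listeners.add(listener);
    }

    public synchronized void setValue(int newValue) {
        value = newValue;
        // Listeners are notified while the lock is held: a listener that
        // blocks on another lock (held by a thread waiting for this one)
        // deadlocks, and a listener that calls addListener() during the
        // iteration triggers a ConcurrentModificationException.
        for (Runnable listener : listeners) {
            listener.run();
        }
    }
}
```

Copying the listener list before notifying, or notifying outside the lock, fixes one problem but introduces others (e.g. a listener may still be notified after it has been removed) - exactly the kind of subtle trade-off Lee analyses.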

What is particularly dangerous is the fact that a programmer might think he or she has done a thread-safe implementation but has actually overlooked issues that could lead to deadlocks or concurrency problems. He discusses the inherent complexity of concurrent programming, and I would add the resulting complexity of testing these multi-threaded programs. They tend to behave in a non-deterministic way, with strong dependencies on operating-system or virtual-machine implementations (like different thread scheduling on different platforms) and hardware differences (single-processor/single-core vs. multi-processor/multi-core machines). Tests could run fine on one machine and (occasionally) fail on others.

Summing it up, many bugs in such programs will not be found, and might emerge with new (different) hardware or usage scenarios, as Lee puts it:
"I conjecture that most multi-threaded general-purpose applications are, in fact, so full of concurrency bugs that as multi-core architectures become commonplace, these bugs will begin to show up as system failures. This scenario is bleak for computer vendors: their next generation of machines will become widely known as the ones on which many programs crash.

These same computer vendors are advocating more multi-threaded programming, so that there is concurrency that can exploit the parallelism they would like to sell us. Intel, for example, has embarked on an active campaign to get leading computer science academic programs to put more emphasis on multi-threaded programming. If they are successful, and the next generation of programmers makes more intensive use of multithreading, then the next generation of computers will become nearly unusable."
He further analyses ways to "prune" nondeterminism through software engineering processes, specific test procedures, or explicit language support for threads as in Java, and the problems associated with them. He also suggests alternatives to threads and ways to implement concurrent programs, like coordination languages. He concludes:
"Concurrent programming models can be constructed that are much more predictable and understandable than threads. They are based on a very simple principle: deterministic ends should be accomplished with deterministic means. Nondeterminism should be judiciously and carefully introduced where needed, and should be explicit in programs. This principle seems obvious, yet it is not accomplished by threads. Threads must be relegated to the engine room of computing, to be suffered only by expert technology providers."
Stimulating article, I must say. Any comments, own experiences?

Tuesday, December 04, 2007

[Tech] The Androids are coming?

Actually I am a little bit embarrassed to write about Google Android, as I thought that everyone already knows about the newest Google feat. However, talking to some colleagues and students, it is apparently not yet as well known as I thought.

Now what is Android?

Google recently released, in cooperation with the Open Handset Alliance (including companies like DoCoMo, LG, Motorola, Samsung, T-Mobile, but also a series of software companies like Google and eBay, and general hardware companies like Intel, Qualcomm, Texas Instruments...), an Open Source operating system for mobile phones. The API and development language is Java. And, no, so far there are no mobile phones available that already use Android as a basis. And this is for sure the critical point. Rumors say that next year the first mobile phones based on Android might appear.

However, I think that this approach is very interesting. Let's face it: the iPhone is a cool gizmo, but too closed; Symbian and the like are a pain in the a... and developing in Java Micro Edition is anything but fun. I also believe that an open cell-phone platform could bring a significant incentive for new innovative mobile services.

Google provides an SDK for download at Google Code that includes a phone emulator, plus a set of nice videos on YouTube. Some of them give a general introduction, some lead through the first development steps in writing an Android application, given by Dan Morrill.
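For a first impression, this is roughly what a minimal Android activity looks like (a sketch of mine, not taken from the videos):

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// Minimal "hello world" activity: an Activity is the Android equivalent
// of a screen; onCreate() builds the UI when the screen is shown.
public class HelloAndroid extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView text = new TextView(this);
        text.setText("Hello, Android");
        setContentView(text);
    }
}
```

The rest (manifest entry, packaging, deployment to the emulator) is handled by the SDK tooling shown in the videos.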

I actually like what I have seen so far and would really love to see an opening of the mobile-phone sector.

Update: What I completely forgot to mention is the Developer Challenge. Google provides 10 million US$ that will be distributed between two calls:
  • Submissions until March 3
  • and submissions after the first handhelds are launched, probably in late 2008
Check out the website for more details.

Thursday, November 29, 2007

[Misc] The war of BPM solution suites

John Raynolds' blog impressed me and motivated me to write this article about BPM solution suites. Since SOA "was born", the BPM topic has become a new "silver bullet" in the software industry. Major companies like IBM, BEA, Lombardi and others invest enormous capital to develop BPM solution suites. These suites should help developers build software systems in reasonable time. Open Source providers like JBoss also develop their own SOA stack platform to cover all areas of SOA. Is this the right way?

I agree with Raynolds' statement about the benefit of using BPM tools:
"BPM suites allow you to create multi-party process-oriented applications from scratch, deploy them to production, and maintain them over the long haul. You begin by diagramming the business process using a graphical notation such as the Business Process Modeling Notation (BPMN). The resultant Business Process Diagram (BPD) captures the highest level requirements in an executable process definition...
This is one of the great things about BPM, the mapping between the requirement and the implementation couldn't be more straight-forward. The BPD can be stepped through with the business folks ad-infinitum to make sure that it's really correct, and together you can add in all of the (business) exception flows and caveats until everyone is happy."
All products try to abstract the underlying technologies by providing nice GUI wizards and modeling tools. Additionally they introduce proprietary scripting languages, which are often only a subset of Java or JavaScript. Especially in this area the BPM tools harm the creativity of software developers. Software development is creative work! The tools often don't use the full strength of a language such as Java.

The InfoQ article summarizes some statements about Java developers and why they hate BPM. I think the BPM area is too big for a single software system to cover all issues. The real challenge is to combine things. At present I have not seen a solution suite that closes the gap between business process modeling and execution in an engine. Products like WebSphere Business Modeler are great tools to model and document your processes, but they end with simulation. What about execution? The real benefit comes with a good combination of both: modeling/documentation and execution in an engine (e.g. jBPM).
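To make the execution side concrete, here is a minimal sketch in the style of the jBPM 3 hello-world tutorial (the process XML is made up; in a real project the definition would of course come from the modeling side):

```java
import org.jbpm.graph.def.ProcessDefinition;
import org.jbpm.graph.exe.ProcessInstance;
import org.jbpm.graph.exe.Token;

// Parsing and actually executing a tiny process definition.
public class HelloProcess {
    public static void main(String[] args) {
        ProcessDefinition definition = ProcessDefinition.parseXmlString(
            "<process-definition name='hello'>" +
            "  <start-state><transition to='review'/></start-state>" +
            "  <state name='review'><transition to='done'/></state>" +
            "  <end-state name='done'/>" +
            "</process-definition>");

        ProcessInstance instance = new ProcessInstance(definition);
        Token token = instance.getRootToken();

        token.signal();  // leave the start state
        System.out.println(token.getNode().getName()); // prints "review"

        token.signal();  // move on to the end state
        System.out.println(instance.hasEnded());       // prints "true"
    }
}
```

The point is exactly the one made above: the diagram the business people sign off on and the definition the engine executes should be one and the same artifact.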

Tuesday, November 20, 2007

[Pub] Central and East European Conference on Software Engineering Techniques

The 2nd IFIP Central and East European Conference on Software Engineering Techniques (CEE-SET 2007) was held in Poznan, Poland, from 10 to 12 October 2007. The conference aims at exchanging ideas and experiences concerning software engineering techniques, and this year's special topic was "balancing agility with discipline". As Barry Boehm and Richard Turner (2003) have noted, both agile methods and plan-driven (disciplined) methods provide value for successful software development in a changing world; however, the following question arises: "how much formalism is enough in order to keep a (complex) system responsive to changes?"

Derived from needs and experiences at Siemens PSE, we presented two technical papers concerning balancing agility with discipline in the context of requirements tracing and coordination support in global software development:

Dindin Wahyudin, Matthias Heindl, Benedikt Eckhard, Alexander Schatten and Stefan Biffl, In-time role-specific notification as formal means to balance agile practices in global software development settings:
In global software development (GSD) projects, distributed teams collaborate to deliver high-quality software. Project managers need to control these development projects, which increasingly adopt agile practices. However, in a distributed project a major challenge is to keep all team members aware of recent changes of requirements and project status without providing too little or too much information for each role. In this paper we introduce a framework to define notification for development team members that allows a) measurement of notification effectiveness, efficiency, and cost; b) formalizing key communication in an agile environment; and c) providing method and tool support to implement communication support. We illustrate an example scenario from an industry background to explain the concept and report results from an initial empirical evaluation. Main results are that the concept allows determining and increasing the effectiveness and efficiency of key communication in a global software development project in a sufficiently formal way without compromising the use of agile practices. (Paper, Presentation)
Matthias Heindl and Stefan Biffl, A Framework to Balance Tracing Agility and Formalism:
Software customers want both sufficient product quality and agile response to requirements changes. Formal software requirements tracing helps to systematically determine the impact of changes and keep track of development artifacts that need to be re-tested when requirements change. However, full tracing of all requirements on the most detailed level can be very expensive and time consuming. In this paper we introduce an initial “tracing activity model”, a framework that allows measuring the expected cost and benefit of tracing approaches. We apply a subset of the activities in the model in a study to compare 3 tracing strategies, ranging from agile “just in time” tracing to fully formal tracing, in the context of re-testing in an industry project at a large financial service provider. In the study a) the model was found useful to capture costs and benefits of the tracing activities to compare the different strategies; b) a combination of upfront tracing on a coarse level of detail and focused just-in-time detailed tracing can help balancing tracing agility (for use in practice) in a formal tracing framework (for research and process improvement). Presentation

All accepted papers were included in the proceedings published as Springer Lecture Notes in Computer Science. Besides presentations and discussions of technical papers, keynote presentations focused on the need for empirical evaluation of "state of the art" methods and tools, such as agile development, in different contexts. Dieter Rombach (head of the Fraunhofer Institute for Experimental Software Engineering) suggested that many researchers in SE have provided tools and methods that seem useful, however most of them were never or only partially evaluated with external validation such as industry implementation.

Bertrand Meyer, from ETH Zürich, delivered the second keynote with an emphasis on software engineering principles related to agility during the development process. One interesting point in his talk was his skepticism toward model-driven development (MDD): he noticed that current MDD approaches focus on developing ever more user-oriented models while paying little attention to code and programming as part of the model. He insisted that code or programs can be considered a model of the real system, with a high degree of responsiveness to changes.

In conjunction with the conference, several sessions called Software Engineering in Progress (SEP) were held on current issues in ongoing software engineering research, which in my opinion were very good for new researchers to promote their current achievements and to receive expert feedback at an early stage of their work.

Overall, the conference was well organized and attracted more than 60 participants and presenters from academic and industrial backgrounds. I discussed the overall quality of the conference with colleagues from Linz, Dresden, and Bozen, and we agreed that the papers were in general of good quality and technically sound; many of the presentations sparked lively discussions during the sessions.

Dindin Wahyudin (Edited by Alexander Schatten)

[Pub] Modeling Complex Systems with UML (ICEIS)

The development of complex systems calls for appropriate tools; however, at the beginning of a project, powerful tools that enforce too much formalism early on may hinder designers more than they provide support. For sketching UML draft models the Open Source tool UMLet has become quite popular and warrants an evaluation of the tool's usability compared to industry-standard tools.

Ludwig Meyer presented the paper "Explorative UML Modeling: Comparing the Usability of UML Tools" at the 2007 ICEIS in Funchal/Madeira.

The paper argues that there are three main ways UML tools are used in large scale software engineering:
  1. to exploratively sketch key system components during initial project stages
  2. to manage large software systems by keeping design and implementation synchronized
  3. to extensively document a system after implementation
Professional tools cover (3) to some extent and attempt to cover (2), but the vast number of programming languages, frameworks and deployment procedures makes those tasks all but impossible. By aiming at these two goals, tools must enforce formal UML language constructs more rigorously and thus become more complicated. They can become unsuitable for (1).

The paper discusses explorative UML modeling and compares the industry standard Rational Rose and the open-source UML sketching tool UMLet (available at http://www.umlet.com). It defines usability measures and assesses both tools' performance using 16 representative use cases that are typical of the creation and modification of UML diagrams.

Dindin Wahyudin (Edited by Alexander Schatten)

Monday, November 19, 2007

[Tech] News from Spring

First of all, today I got the news about the name change of Interface21, the company behind the Spring Framework: Interface21 becomes SpringSource. At present I am spending a lot of time with Mule. Do you think there are similarities between MuleSource and SpringSource? :)

Ok, back to my topic!

Developing enterprise applications becomes very structured using Spring. But what about testing? Especially integration tests, database issues and the like. The Spring Mock module provides useful helper classes and abstract test cases for writing powerful tests that deal with transaction management and so on. This article provides a sample application in which the fundamental steps of testing a Spring application are described.
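As an illustration, a minimal sketch of such an integration test using the Spring 2.x test support (the context file name and the CUSTOMER table are assumptions on my part):

```java
import org.springframework.test.AbstractTransactionalDataSourceSpringContextTests;

// Each test method runs inside a transaction that is rolled back at the
// end, so the test database stays clean without manual cleanup code.
public class CustomerPersistenceTest
        extends AbstractTransactionalDataSourceSpringContextTests {

    // Test application context containing a DataSource bean (file name assumed).
    protected String[] getConfigLocations() {
        return new String[] { "classpath:test-applicationContext.xml" };
    }

    public void testInsertIsRolledBackAutomatically() {
        int before = countRowsInTable("CUSTOMER");
        // jdbcTemplate comes from the base class, bound to the DataSource
        // defined in the test context.
        jdbcTemplate.update("insert into CUSTOMER (name) values (?)",
                new Object[] { "Alice" });
        assertEquals(before + 1, countRowsInTable("CUSTOMER"));
        // No cleanup necessary: the surrounding transaction is rolled back.
    }
}
```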

Additionally I've found the first part of a three-part article series, dealing with the new configuration features in Spring:
  • Spring annotations for configuration
  • Auto detection of spring components
If the second and third parts are online, I'll update the site. See you!

Friday, November 09, 2007

[Arch] Domain Driven Design - Presentation

In this presentation Eric Evans (specialist in domain-driven design and author of the book Domain-Driven Design) steps through a cargo example, where he shows how to identify objects and put them in relation to each other. When dealing with domain-driven design you will realize that the complexity is in the domain, and this complexity will also be found in the model.

Eric points to a very important thing in domain-driven design: the language and naming of objects. The complexity of the implementation can be reduced through good domain design. This is illustrated in his example by using leg-based and stop-based models; the chosen model has a major impact on the implementation details. While designing the domain model it is necessary to validate it by playing through different scenarios (use cases) with the model. The language of the model depends on the context in which the model is used.

The presentation gives a really good introduction to Domain-Driven Design (DDD).

Wednesday, November 07, 2007

[Misc] Software Engineering and Outsourcing in a "Flat World"

I am just reading the book by Thomas Friedman, "The World is Flat", definitely a recommended read. Friedman explains (with a quotation from Carly Fiorina, the former HP CEO):
"The dot-com boom and bust were just 'the end of the beginning'. The last 25 years in technology, said Fiorina, have been just the 'warm-up act'. Now we are going into the main event, she said, 'and by the main event, I mean an era in which technology will literally transform every aspect of business, every aspect of life and every aspect of society.' "
I am actually willing to believe that; however, he also brings loads of examples of companies, from HP over Microsoft to Walmart, that do extensive outsourcing, particularly in China and India.

O.k., I also knew that, but I was not aware of the actual amount of work that has been moved from the US to Asia and former Eastern Bloc countries (e.g., Boeing is outsourcing to Russia). Outsourcing ranges from research activities to tax-counselor work to journalism (e.g. Reuters) to supply chains that span the whole world and produce on direct customer demand.

First I wonder if this is also true for European companies, and more importantly: have I overslept that development, particularly related to Software Engineering? (Btw., in that context an American manager expressed the fear that if this continues as it started, the only thing left for the American workforce will be to sell hamburgers to each other - if they still have the money to buy them, I would like to add.)

Of course I have heard about outsourcing to India... but the typical impression I got was that these outsourcing projects were a rather mixed experience. Thomas Friedman paints this in much brighter colours than I have observed it myself (as a distant observer). I wonder: do any of the readers here have actual experience of the "flat world" when it comes to software projects? Does it really work to do SE projects mainly by using electronic communication? I am not speaking of open source projects, where every participant is more or less following his own agenda; I am speaking of a specific piece of software that needs to be designed for a specific customer with specific needs in a specific time-frame. Does this work for non-trivial undertakings?

Any actual experiences, comments?

Wednesday, October 24, 2007

[Pub] ICEBE Conference: Agile Business Process Management with S&R

Currently the International Conference on E-Business Engineering (ICEBE) is taking place in Hong Kong. I am presenting a paper dealing with agile business process management, a joint work with Josef Schiefer.

The core consideration is that business strategies that were successful in the 80s and 90s are not necessarily successful in today's fast-changing and connected economy. We seem to be moving from traditional over dynamic/virtual enterprises to a general structure of agile enterprises. Haeckel et al. suggest a move from a "make-and-sell" towards a "sense-and-respond" strategy.

Speaking of agility: what does agility mean? It is, generally speaking, the capacity of a system to react to unforeseen changes in the system's environment. We all know that the software industry faced and still faces this issue, as requirements nowadays often change even during the engineering process, so that it is often not clearly known in the beginning which product is needed in the end. The consequence is clear: software developers have to "embrace change", meaning that they have to develop their software in a way that change requests during the process can be handled.

Business process management will, this is our thesis, follow the same route in the next years. Top-down planned processes will not support changes in the business infrastructure and will become legacy. Future business IT will have to cope with ever-changing processes. Adaptiveness will take precedence over short-term efficiency considerations and plan-driven operations. In other words, the faster competitor will win, not the one who is (in theory) more efficient.

This apparently also poses significant challenges on software engineers who have to deal with such infrastructures in the future.

In our paper we go into more depth and introduce the architecture and implementation of sense-and-respond systems that allow agile reaction to real-world events. To sum it up:
  • We need real-time business information with minimal latency
  • Automatic discovery of situations and exceptions and generation of appropriate reactions
  • Generating more accurate forecasts in near-realtime using "live" and historic data
  • Integration of internal and external data sources
  • Not only "backend infrastructure": Focus on tool support for various target groups and problem domains
For more information, check out our paper and also download the presentation to get the figures.

Tuesday, October 23, 2007

[Pub] Enterprise Integration Patterns with Apache Camel

In the recent Infoweek magazine I wrote about Enterprise Integration Patterns, following the excellent book by Gregor Hohpe, and about the Apache Camel project that intends to support the implementation of these patterns. Hohpe describes in his book a set of patterns that often occur in enterprise integration projects, such as message construction patterns, routing, transformation (format conversion...), filtering, splitting and aggregation of messages, and so on.

Apache Camel is a project that implements a domain-specific language (and alternatively an XML-based one) that should simplify the implementation of typical patterns. Several examples can be found on the Camel website, e.g. for the content-based router or the content enricher.
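To give an impression of the Java DSL, a minimal content-based router sketch (the endpoint URIs and the payload structure are made up):

```java
import org.apache.camel.builder.RouteBuilder;

// Content-based router: messages from one queue are forwarded to
// different queues depending on an attribute of the XML payload.
public class OrderRouteBuilder extends RouteBuilder {
    public void configure() {
        from("activemq:queue:orders")
            .choice()
                .when(xpath("/order/@type = 'gold'"))
                    .to("activemq:queue:orders.gold")
                .otherwise()
                    .to("activemq:queue:orders.standard");
    }
}
```

The route reads almost like the pattern description in Hohpe's book, which is exactly the appeal of the DSL.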

Btw.: I just noticed that James Strachan added some screencasts demonstrating Apache Camel!

Particularly interesting about Camel is the fact that it can be deployed as a component following the Java Business Integration (JBI) standard. JBI is a standard created via the Java Specification Request mechanism and describes how ESB components and the required ESB bus infrastructure cooperate.

As a JBI component it can be used, e.g., in the enterprise service bus Apache ServiceMix and provide important ESB features like advanced routing or filtering.

Friday, October 19, 2007

[Tech] Introduction to Spring 2.5

Spring has become the de facto standard for enterprise Java development and is also a focal point in our best practice examples. Rod Johnson, the founder of the Spring Framework, has written a nice introductory article on Spring 2.5. The new version of Spring has many interesting enhancements compared to earlier versions. In this article Rod Johnson explains the major new features, including:
  • Custom XML configuration and how to deal with it
  • Spring annotations can now be used to configure applications
  • Some enhancements to the JDBC abstraction
  • Simpler usage of transaction management, by using the new XML configuration
  • Better integration with AspectJ to do AOP
Beyond the above-mentioned points, the article also gives a bird's-eye view of Spring and what Spring is.
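To make the annotation point concrete, roughly what annotation-based configuration looks like (class names are made up; a `<context:component-scan base-package="..."/>` element in the XML switches on the auto-detection):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

// Hypothetical beans: with component scanning enabled, Spring 2.5 detects
// them automatically and wires the dependency via @Autowired - no explicit
// <bean> entries are needed anymore.
interface OrderRepository {
    void store(String order);
}

@Repository
class JdbcOrderRepository implements OrderRepository {
    public void store(String order) {
        // persistence logic would go here
    }
}

@Service
class OrderService {
    private final OrderRepository repository;

    @Autowired
    public OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    public void placeOrder(String order) {
        repository.store(order);
    }
}
```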

Monday, October 15, 2007

[Event] OLPC Presentation, Podcast

Last week Aaron Kaplan and Christoph Derndorfer from OLPC Austria presented the OLPC (one laptop per child project) at Vienna University of Technology. The presentation was vivid, also because several laptops were presented and the interaction was demonstrated (including a drop from approx one meter height).

Presented topics included the idea of the project but also several details on the technical implementation and Software Engineering aspects. To provide this information to a larger number of people I recorded a (German) podcast with Aaron and Christoph today. It is an enhanced podcast, so please do not miss the pictures, screenshots and links.

For the English speaking audience I recommend the "main" information source of the OLPC projects:

http://wiki.laptop.org/

Friday, October 05, 2007

[Event] "One Laptop per Child" Presentation: a SE challenge

On Monday, Oct. 8, OLPC Austria presents the One Laptop per Child project (initiated by the MIT Media Lab). The laptop is a fascinating piece of technology (not "just" a cheap laptop, as some might assume) and the project is proceeding, but there is still a lack of special software developed for that device.

The goal of this presentation is on the one hand to introduce the project and show some actual scenarios and the hardware, but on the other hand also to get people (e.g. from our faculty) involved in developing software for that device.

The presentation will be held in German and takes place on:

Oct. 8 2007, 16:00- 18:00
Vienna University of Technology
Lecture room: HS 13 "Ernst Melan"

The presenter is Aaron Kaplan from OLPC Austria; I will introduce the event.

[Event] Software and Systems Essentials Conference 2007

From June 4-6 the 1st Software and Systems Essentials Conference 2007 took place in Munich, Germany. An important goal of the conference is bringing together people from business, industry, and academia who are working in software engineering and information technology with its various aspects. Discussions and exchange of experiences between users in public and industrial contexts and vendors of software solutions (regarding software development frameworks) were the main focus of the event.

Main topics of the conference were software processes and the exchange of experience on the individual application in various contexts (e.g., in the public application domain), project management regarding systematic systems development processes, and software quality.

We gave a presentation in the track "Company-wide Software Processes" titled "Methoden-Tailoring zur Produkt- und Prozessverbesserung: eine Erweiterung des V-Modell XT" (D. Winkler, S. Biffl):
Software processes support the construction of high-quality software products. A software process defines the sequence of steps along the project course. In industry a wide range of different software processes exists, focusing on individual project requirements (e.g., application domain, project type, size). Nevertheless, these common software processes must be adjusted to individual project needs in order to be applicable (customization & tailoring). Processes define what has to be done and when. A missing link is the support of suitable methods that help engineers in constructing the product (how the product should be developed).
The talk introduces a concept for a method-tailoring approach which enables tailoring of a common software process (based on the V-Modell XT) and the selection of an appropriate method set for project application.
The slides of our presentation are available for download (the slides and the other material from this conference are in German).

Besides presentations and discussions of academic and industry papers, "state of the art" presentations focused on relevant topics for industry and best software engineering practice:
  1. Professionelle Softwareentwicklung sichert Standortvorteile (Georg Schmid, Bayrisches Staatsministerium des Inneren)
  2. On Models and Ontologies – or what you always wanted to know about Model-Driven Engineering (Gerti Kappel, TU Vienna)
  3. Erfolg reproduzierbar machen (Reinhold E. Achatz, Siemens)
The keynote slides are available to the conference participants via the conference website.

Dietmar Winkler (edited by Alexander Schatten)

Wednesday, October 03, 2007

[Event] Euromicro/SEAA Conference Publications

In this recent posting I wrote about the Euromicro 2007 conference. We actually presented two technical papers and one "work-in-progress" paper at Euromicro SEAA 2007:

Early Software Product Improvement with Sequential Inspection Sessions: An Empirical Investigation of Inspector Capability and Learning Effects (D. Winkler, B. Thurnher, S. Biffl)
Early detection and removal of defects helps increase software quality and decrease rework effort and cost. Software inspection - a static verification and validation approach - focuses on defect detection in early development phases (e.g., in requirements documents and design specifications). Furthermore, inspection promises to be a vehicle to support learning, even for less-qualified inspectors. Main findings reported in the paper are (a) the inspection technique UBR (usage-based reading) better supported the performance of inspectors with lower experience in sequential inspection cycles (learning effect), and (b) when inspecting objects of similar complexity, significant improvements of defect detection performance could be measured.

Download Presentation

Aspects of Software Quality Assurance in Open Source Software Projects: Two Case Studies from Apache Projects (D. Wahyudin, A. Schatten, D. Winkler, S. Biffl)
Nowadays, open source software (OSS) solutions provide mission-critical services to industry and government organizations. Nevertheless, empirical studies on OSS development practices raise concerns about risky practices such as unclear requirements elicitation, ad hoc development processes, little attention to quality assurance (QA) and documentation, and poor project management. This paper introduces a QA framework for open source projects and presents some preliminary results of two case studies from OSS projects regarding a couple of variables, e.g., defect detection frequency, defect collection effectiveness, defect closure time, and ratio of verified solutions.

Download Presentation
A Quality Assurance Strategy Tradeoff Analysis Method (S. Biffl, C. Denger, F. Elberzhager, D. Winkler)
The third paper introduces a concept for balancing existing quality assurance approaches for software process and product improvement along the process life cycle. The selection of a suitable set of methods depends strongly on the project context and is based on measurable quality attributes. Decision makers need to assess and compare the overall effects of QA method combinations, weigh the tradeoffs between the involved QA strategies, and identify tradeoffs of individual methods. The proposed QATAM assessment method for different strategies is originally based on the Architecture Tradeoff Analysis Method (ATAM).

Download Presentation

Dietmar Winkler (edited by Alexander Schatten)

Thursday, September 27, 2007

[Event] VLDB Interviews 2: E. Brewer, M. Stonebraker, M. Brodie

As already explained in the previous posting, we had the great opportunity to record interviews with the keynote speakers of VLDB 2007. In the second podcast episode we got three more interviews with Eric Brewer, Michael Stonebraker and Michael Brodie (from left to right):

Eric Brewer talks about technologies that can help emerging countries in building up IT and communications infrastructure; Michael Stonebraker (who also writes a blog: "The Database Column") and Michael Brodie talk about trends in database technology, enterprise data management and the limits of current technologies and products.

All three are asked about research and about the time-to-market issue for new ideas that Werner Vogels brought up in the last interview. Additionally, innovation at "big" companies vs. startups is discussed, and how venture capital can be used to get your innovative ideas out.

Again to mention: the podcast series in general is in German, the interviews are in English. Thanks a lot to Dr. Ross King from Vienna University, who again asked the questions in the interview.

Wednesday, September 26, 2007

[Pub] Agile development with jMatter

In the current Java magazine (also available online) I've written a German article about agile development with jMatter. This technology implements the Naked Objects architectural pattern by using Hibernate, Swing and Java Web Start. The main focus in Naked Objects and jMatter is the domain object, which will be wrapped with different aspects, like:
  • Persistence
  • Logging
  • GUI
  • Validation
  • Searching and some other aspects
Such aspects are supported by jMatter and are generated in a generic way. All these things can be customized in order to fit your requirements. jMatter enables really fast prototyping. Check out the article and let yourself be impressed by this technology.

Tuesday, September 25, 2007

[Event] "Does Amazon do research?" Amazon CTO Vogels in Interview

Today I had the pleasure to record the interview with Werner Vogels, the CTO of Amazon.com, who held the keynote speech of the Very Large Databases (VLDB) conference currently taking place in Vienna.

The interviewer is Dr. Ross King from Vienna University, and the interview is part of our "Woche der Informatik" podcast. This podcast is actually in German; however, for the English-speaking audience the VLDB keynote interviews (starting with this one) are accessible as well. Just ignore the German introduction and jump right into the talk with Dr. Vogels (chapter marks...).



Actually the keynote (and the interview) was very interesting; some points Dr. Vogels discussed were:
  • The problem of state management ("state management is a dominant factor in scaling")
  • Amazon as a company: "Amazon is a technology company that accidentally works as a retailer"; he also showed a series of other e-commerce sites like Marks and Spencer, Mothercare, SmugMug and others that are actually built on top of Amazon technology.
  • Amazon apparently goes the way (similar like EBay): from a retailer (auction house) to an e-commerce technology provider/platform.
  • A dominant issue in the talk was scalability. COTS products typically do not scale the way Amazon needs it ("we tried out mainframes - for one year"). Vogels refers to the stability and self-organisation features of biological systems and names particularly "apoptosis" (cell death): although 50-70 billion cells die every day, the biological system is stable, i.e. the human stays alive.
  • Thus Amazon services are built highly redundant. The loss of a complete datacenter would not harm the customer experience. He additionally shares two experiences that might contradict certain academic ideas:
    • "Everything fails, all the time"
    • Systems do not fail by stopping, they might actually do all sorts of weird things in between.
  • Vogels claims, that Amazon did SOA before it became a buzzword.
  • So eventually his bottom line is "Architecture for change".
The presentation was very vivid, and I think some of the ideas were also captured in the interview (and btw., Dr. Vogels has his own blog: All Things Distributed). However, parts of the ideas he expressed reminded me strongly of a very good book I like to recommend: Stan Davis and Christopher Meyer, It's Alive: The Coming Convergence of Information, Biology and Business. Texere Publishing, 2003.

Check out the podcast website!
Or go directly to the feed page.

So, to eventually answer the question in the title, a last quote I personally liked: "Does Amazon do research? We call it production."

Monday, September 24, 2007

[Event] Euromicro Conference on Software Engineering and Advanced Applications (SEAA)

From August 28-31 the 33rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2007 took place in Lübeck, Germany. An important goal of the conference is bringing together people from business, industry, research, and academia who are working in software engineering and information technology with its various aspects.

The different topics were represented in three main conference tracks:
  • Component-Based Software Engineering (CBSE)
  • Multimedia and Telecommunications (MMTC)
  • Software Process and Product Improvement (SPPI)
The conference also includes special sessions that reflect particular research and development topics in areas related to the tracks or in new emerging areas.
  • Next Generation of Web Computing
  • Service Orientation
  • Software Management
  • Work in Progress
All accepted papers were included in the proceedings, which were published by the IEEE/CS. The proceedings are also listed in DBLP. Besides presentations and discussions of academic papers, "state of the art" presentations focused on relevant topics for industry and best software engineering practice.
  1. Software Components and Software Architecture: Software Design on its Road to an Engineering Discipline (Prof. Dr. Ralf Reussner)
  2. How good is a process: Evaluating Engineering Processes' Efficiency (Tom Gilb)
  3. Grid Computing: Operating Large Distributed Infrastructures for Advanced Applications (Christian Grimm)
Dietmar Winkler (Edited by Alexander Schatten)

Thursday, September 20, 2007

[Tech] Introduction to JPA

Regarding my previous post about JPA and DAO, it is useful to look at the following article, which illustrates a small example showing the basic usage of JPA. The example includes:
  • Annotate your object model with Java Persistence Annotations, including relationships and inheritance
  • How to work with lazy initialization
  • Named Queries
  • How to use the Entity Manager
The example should clarify the discussion about JPA and DAO. Have fun!
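As a quick taste of what the article walks through, a stripped-down sketch (the persistence unit name "demo" and the Customer entity are made up and would have to match your persistence.xml):

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

// A POJO becomes persistent just by adding annotations.
@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    protected Customer() { }                       // required by JPA
    public Customer(String name) { this.name = name; }
}

// The EntityManager does the actual persistence work.
class JpaDemo {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        em.persist(new Customer("Alice"));
        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}
```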

Wednesday, September 19, 2007

[Arch] Has JPA killed the DAO

I've found an interesting post discussing whether "JPA killed the DAO". The DAO (Data Access Object) pattern is one of the fundamental patterns used in building software systems. This object abstracts and encapsulates all access to the data (database, file, XML...) and provides a common interface used by the business layer.

The new Java Persistence API defines an interface to persist normal Java objects (POJOs) by annotating the objects with persistence metadata. All the "magic" is done by the EntityManager, which provides generic data access functionality. Writing DAOs for each business object where only simple CRUD operations take place (and only against a database) is a boring task. However, JPA is only for databases. What if you access files, LDAP or other systems? In these cases a DAO makes sense, because the DAO abstracts all sorts of data access (database, file, LDAP or whatever) behind a common interface. Still, JPA is a major enhancement for database handling.
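To illustrate the argument, a rough sketch (all names made up): the interface is what the business layer sees, and the generic JPA-backed implementation shows how little per-entity code is left once the EntityManager does the CRUD work:

```java
import java.util.List;
import javax.persistence.EntityManager;

// The business layer depends only on this interface, no matter whether
// the data lives in a database, a file or an LDAP directory.
interface Dao<T, ID> {
    T findById(ID id);
    List<T> findAll();
    void save(T entity);
    void remove(T entity);
}

// Generic JPA-backed implementation: assumes the default entity name
// (the simple class name) is used in queries.
class JpaDao<T, ID> implements Dao<T, ID> {
    private final EntityManager em;
    private final Class<T> entityClass;

    JpaDao(EntityManager em, Class<T> entityClass) {
        this.em = em;
        this.entityClass = entityClass;
    }

    public T findById(ID id) {
        return em.find(entityClass, id);
    }

    @SuppressWarnings("unchecked")
    public List<T> findAll() {
        return em.createQuery("select e from " + entityClass.getSimpleName() + " e")
                 .getResultList();
    }

    public void save(T entity) { em.persist(entity); }

    public void remove(T entity) { em.remove(entity); }
}
```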

Tuesday, September 18, 2007

[Event] European Software Engineering Conference (ESEC)

From August 5 to 7 the European Software Engineering Conference (ESEC) 2007 took place in Dubrovnik, Croatia. An important feature of the conference, besides the presentation and discussion of academic papers, were "state of the art" presentations on research topics that are particularly relevant for industry and best software engineering practice (slides are online). In this blog article I review some interesting presentations from the conference:

1. Software Engineering Research on Test Prioritization (Elaine Weyuker, AT&T Labs Research, USA)

This talk provided a case study on research on software testing prioritization (prediction of location of faults in the next release of large industrial software systems) from problem inception to algorithm definition and small proof-of-concept studies, large empirical studies in several industry contexts, and finally tool building to automate the process and make it easily accessible to practitioners.

Particularly interesting aspects were:
  1. how to get industry to take part in research activities;
  2. how to package research results in a way that is useful to practitioners, and
  3. how to foster academic discussions.
2. On Marrying Ontology and Software Technology (Steffen Staab, U. Koblenz, Germany)

Software engineering models for purposes such as software design, software configuration or software validation can be augmented with Ontologies, which constitute domain models formalized using expressive logic languages for class definitions and rules.

This talk gave an outline of current ontology technologies and described avenues of research for joining ontology and software technology, i.e.
  1. by increasing the expressiveness of software design models through ontologies,
  2. by improving the accessibility and maintainability of software configurations, and,
  3. by validating software design models using ontology reasoning.

3. Quantitative Verification: Models, Techniques and Tools (Marta Kwiatkowska; Oxford U., England)

The talk addressed modeling and simulation of the usage of critical resources in model-driven engineering for complex software systems. While software modelling and analysis techniques such as testing, static analysis, model checking, and run-time monitoring are used routinely in the software engineering practice, quantitative verification techniques are needed to establish properties such as "the chance of battery power dropping below minimum is less than 0.01" and "the worst-case time to receive a response from a sensor is 5ms".

The talk gave an overview on state-of-the-art methods and tool support for probabilistic model checking for quantitative verification of systems which exhibit probabilistic behaviour.

4. Free/Open Source Software Development: Recent Research Results and Opportunities (Walt Scacchi; U. Irvine; USA)

The talk reviewed what is known about free and open source software development (FOSSD) work practices, development processes, project and community dynamics, and other socio-technical relationships. It explored how FOSS is developed and evolved based on an extensive review of a set of empirical studies of FOSSD projects:
  1. why individuals participate;
  2. resources and capabilities supporting development activities;
  3. how cooperation, coordination, and control are realized in projects;
  4. alliance formation and inter-project social networking;
  5. FOSS as a multi-project software ecosystem, and
  6. FOSS as a social movement.
Identifying emerging opportunities for future FOSSD studies gave rise to the development of new software engineering tools or techniques, as well as to new empirical studies of software development.

Stefan Biffl (edited by Alexander Schatten)

Sunday, September 16, 2007

[Tech] XFire and Celtix merge

For several months I developed an application dealing with web services. For this project I discovered XFire, the new-generation SOAP framework. Last year the XFire and Celtix communities considered merging XFire with Celtix, which eventually became real! XFire is now Apache CXF (Incubator), focusing on providing an easy-to-use service framework. Services can deal with a wide range of protocols, such as SOAP, XML/HTTP, RESTful HTTP and some others. The XFire team sees this important step as a huge enhancement for current XFire users:
  • JAX-WS Specification compliance
  • Improved HTTP and JMS Transports
  • Spring 2.0 XML support
  • RESTful services support
  • Great WS-* support: WS-Addressing, WS-Policy, WS-ReliableMessaging, and WS-Security are all supported
  • Support for JSON
  • SOAP w/ Attachments support
  • Improved APIs and extension points
  • A larger community, which means faster development, and better support
It is recommended to use CXF in future projects, because feature development will take place in CXF and not in XFire itself. Existing XFire projects can be migrated to CXF by using this guide.
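To give a feel for the JAX-WS-style programming model that CXF promotes, a minimal sketch (service class and address are made up):

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A plain Java class becomes a SOAP service via the JAX-WS annotation.
@WebService
public class HelloService {
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}

// Publishes the service at a local HTTP endpoint for quick testing.
class ServerStarter {
    public static void main(String[] args) {
        Endpoint.publish("http://localhost:9000/hello", new HelloService());
        System.out.println("Service running at http://localhost:9000/hello?wsdl");
    }
}
```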

[Tech] Briefings Direct Podcast Series

Today I found a potentially very interesting Podcast for developers and architects interested in, let's say B2B scenarios: Briefings Direct Podcast. This is a Podcast series from the Interarbor Solutions Analyst Dana Gardner.

This podcast series already has nearly 100 episodes with titles like:
  • SOA Insights Analysts on SOA Appliances, BPEL4People and GPL v3
  • Open Source Projects Empower SOA Infrastructure Definition and Development
  • SaaS Providers Increasingly Require 'Ecology' Solutions from Infrastructure Vendors
  • Apache Camel Addresses Need for Discrete Infrastructure for Services Mediation and Routing
I personally was most interested in the podcast covering Apache Camel, which interviews one of the core developers, James Strachan. Camel interests me because it implements (many of) the EI patterns suggested by Gregor Hohpe. Camel helps developers who use a broad variety of middleware technologies and strategies (JBI ESB, ActiveMQ, SOA with web services, ...) to implement these patterns.

Speaking of which (ok, I am a little jumpy today), whoever might not know it by now: Gregor Hohpe will give a speech at Vienna University of Technology this Friday.

Update: Please read the comment Dana Gardner posted to this blog entry: they actually provide a transcript for each podcast episode (whoever is doing this heroic job of transcribing interviews: it is very helpful for digging into details or searching for quotations). So check this out too!

Friday, September 07, 2007

[Event] Informatics-Week

From September 19 to September 28 the "Informatics-Week" takes place in Austria. This series of events is organised by the Austrian Computer Society. During this week a number of high-profile IT conferences are held in Austria; from the software engineering point of view, the most prominent and probably most important one is the Very Large Databases (VLDB) conference.

The Informatics Week additionally launches a set of events ("day of media", "day of economy", "day of research" and so on); a detailed program can be found here.

I am running a podcast that started reporting this week about the preparations for the events and gives insight into upcoming events. For SE people I will also provide coverage of VLDB with the support of the general chair of VLDB, Prof. Klas. The first VLDB coverage will be "on the (podcast) air" by next week.

So if you are interested, check out and subscribe to the podcast, or directly subscribe to this URL, e.g. in iTunes (check the advanced/"erweitert" menu):

http://feeds.feedburner.com/woche-der-informatik

This is an enhanced podcast (i.e., it contains images and URLs). If you are not experienced with listening to podcasts, please check out the brief description I wrote for the Best-Practice-Software-Engineering podcast here (but of course use the URL above, unless you also want to subscribe to the SE podcast...).

Monday, September 03, 2007

[Arch] Designing a Good API is Hard

It takes a long time to learn an API, especially in the Java world, where a lot of open source frameworks are available. Besides the documentation, the API itself is an important factor in a framework's success or failure, and how easy an API is to handle heavily depends on the developer's experience. Are there any best practices for designing good APIs? There are many obvious starting points, e.g. good naming or keeping APIs small, but many other factors influence really good APIs as well. ACM Queue published an article called API: Design Matters, analyzing aspects of good and bad APIs.

There is also a very good presentation, How to design a good API and why it matters, given by Joshua Bloch. In this presentation he explains why good APIs are important, e.g.:
  • Many people work with APIs
  • People invest heavily: buying, writing, learning
  • Successful public APIs capture customers
  • and as I mentioned before it contributes to the success of a product/technology
The characteristics of a good API are:
  • Easy to learn
  • Easy to use, even without documentation
  • Hard to misuse
  • Easy to read and maintain code that uses it
  • Sufficiently powerful to satisfy requirements
  • Easy to extend
  • Appropriate to audience
In addition to the characteristics mentioned above, an API should also be well documented, in the case of Java by using Javadoc. Why Javadoc? The Javadoc evolves hand in hand with the development of an interface and is easy to update; it only depends on the developer's discipline. The presentation provides a really good starting point for learning how to write good APIs, and Bloch always provides short code snippets as a basis for discussion. Check out the article and the presentation, and learn how to write good APIs.
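To make one of these points concrete, here is a small, invented Java example in the spirit of Bloch's advice (the class is purely illustrative): minimal, immutable, hard to misuse, and readable at the call site.

// A small value type: immutable, with names that make client code read naturally
public final class TemperatureRange {
    private final double min;
    private final double max;

    public TemperatureRange(double min, double max) {
        // The invariant is checked exactly once, at construction time
        if (min > max) {
            throw new IllegalArgumentException("min must not exceed max");
        }
        this.min = min;
        this.max = max;
    }

    public double min() { return min; }
    public double max() { return max; }

    // Hard to misuse: no setters, no nulls, no hidden state
    public boolean contains(double value) {
        return value >= min && value <= max;
    }
}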

Sunday, September 02, 2007

[About] Happy Birthday :-)

As time goes by...

Actually our Best-Practice SE blog is now one year old! We started this blog at the end of August 2006.

I must say that I am quite happy with the impact so far. We have regular writers and readers, and the quality of the blog entries is good from my point of view. In 2006 we had 44 articles, and in 2007 we have 42 so far, which is on average more than 5 articles per month, so it seems that writing has become a rather steady process.

Also the number of readers is slightly but continuously rising. About feed subscriptions I do not have proper information yet; I only recently started using the Feedburner service.

However, I want to thank all authors for their articles and hope that everyone is motivated to participate even more in the next year.

I also (last but not least) want to thank our readers and ask them for critical as well as positive feedback! Please use the comment function!

Wednesday, August 29, 2007

[Arch] What is an ESB?

Well, good question. Dealing with a lot of middleware technologies in the last months, I was trying to get a clear description of what an ESB actually is. What is, e.g., the conceptual difference between Mule and ServiceMix (not to mention commercial products like Tibco and Sonic)?

You can actually get an impression when you read what the two projects write about each other, e.g.:
We even get some nice side-blows like: "so if you already have an investment in some Mule configuration or code, you can reuse it inside ServiceMix along with any other JBI components". Good to know that I can operate Mule within ServiceMix, whatever the reason might be that I would want to do that. If we still do not know what an ESB is and what we would use it for, I recommend the talk by Mark Richards:

Mark Richards poses the question: is it a
  • Pattern?
  • Product?
  • Architecture Component?
  • Hardware component?
Well, maybe all of that; check out his nice video presentation. In this talk he gives some general ideas and also talks about the two open source competitors mentioned above, SOA, decoupling and all the other nice buzzwords.

Monday, August 27, 2007

[Tech] Combine Hibernate and Wicket

In the previous post, Alex gave an overview of Wicket and the advantages and disadvantages of this component-based framework. The majority of web applications are very data intensive, where simple CRUD (Create, Read, Update, Delete) operations are needed. Apart from this, many software projects use the popular ORM (Object Relational Mapping) tool Hibernate in association with annotated domain objects. Such a tool closes (or better, tries to close) the gap between the object-oriented world and the relational world. Combining these two different worlds is not a trivial task; consider object-oriented approaches like inheritance, polymorphism and so on.

Back to the topic of this post: Wicket is a component-oriented web framework reminiscent of Swing. On the back end you often have Hibernate, typically used to implement the DAOs of an application. I recently finished a data-driven web application using Databinder, a simple bridge from Wicket to Hibernate. Like a classical MVC framework, Wicket uses models in order to populate forms with data and vice versa. The most common use cases of data-driven solutions are:
  • Show a list of data
  • Filter a list of data
  • Show details of data
  • CRUD operations of objects
  • and some other features
Databinder is a library providing different types of Wicket models and view components that enable an easy Hibernate integration. Take, for example, a data form containing a number of input components and a submit button: submitting the form entails a model update in the database out of the box. Another use case is to set up a query (Hibernate Query) to receive a list of objects; the list is wrapped in a HibernateListModel which can be displayed easily. In order to illustrate the main functionality of Databinder, consider the following code snippets, taken from the Databinder examples.
import java.io.Serializable;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Simple annotated entity used by the Databinder examples
@Entity
public class Graffito implements Serializable {
    private Integer id;
    private String text;

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }
}

Each Databinder application must provide an implementation of DataApplication, which extends the WebApplication class from Wicket. Databinder must know about the managed annotated data objects, so all annotated classes are added to the application:
public class GraffitiApplication extends DataApplication {
    @Override
    public Class getHomePage() {
        return TheWall.class;
    }

    @Override
    protected void configureHibernate(AnnotationConfiguration config) {
        super.configureHibernate(config);
        // Register the annotated entity with Hibernate
        config.addAnnotatedClass(Graffito.class);
    }
}

Now Databinder is ready to use and manages your domain objects in components like DataForm, DataPanels, tables and the like. The following code snippet provides a data form to display and edit the details of a domain object.
public class MyForm extends DataForm {
    public MyForm(String id) {
        super(id, Graffito.class);
        add(new TextField("text"));
    }

    @Override
    protected void onSubmit() {
        super.onSubmit();
        // Detach the persistent object and move on after saving
        clearPersistentObject();
        setResponsePage(OrderList.class);
    }
}


Databinder also provides a HibernateListModel that enables the developer to create list models using the powerful Criteria API of Hibernate or the Hibernate Query Language. The resulting list model can then be displayed, for instance with a PropertyListView:
IModel lastFive = new HibernateListModel(Graffito.class, new ICriteriaBuilder() {
    public void build(Criteria criteria) {
        // Load the five most recent Graffito objects
        criteria.addOrder(Order.desc("id")).setMaxResults(5);
    }
});

// show previous scrawls in list
add(new PropertyListView("graffiti", lastFive) {
    @Override
    protected void populateItem(ListItem item) {
        // the font and color objects come from the original example code
        item.add(new RenderedLabel("text", true)
                .setFont(spidershank)
                .setColor(orange));
    }
});
The aim of this post was not to give a full introduction to Databinder, but rather a short overview of an interesting integration library between Wicket and Hibernate.

Tuesday, August 14, 2007

[Tech] Wicked Wicket

In the last years, the "market" of web frameworks was probably the most active one. I hardly know any other domain where we can find such a number of different frameworks and tools. There are probably hundreds of different web frameworks out there; the Apache Software Foundation alone has Struts, Tapestry, Cocoon, Wicket, Turbine (I never quite knew what that one is good for) and MyFaces. So about six, if I have not forgotten anything, plus a wide range of subprojects dealing with specific problems or providing taglibs or components.

So what to choose?

Actually it seems that there are three streams of framework concepts:
  • The "traditional" ones based on JSP (or PHP and the like) and HTML, i.e., frameworks more or less extending JSP, like Struts, or building upon template engines like Velocity
  • Special-purpose frameworks like Cocoon, that have a stronger focus on XML processing or content management
  • Application-oriented frameworks like Google Web Toolkit or Apache Wicket.
I recently had a look at the "new kid on the block", Wicket, which has just graduated from the Apache Incubator:

The general idea of Wicket is to provide a very clean separation between the HTML/CSS presentation layer and the application logic. As a consequence, each "web page" is a duo of one HTML page and one Java class. The Java class looks a little bit like a Swing class, in the sense that visual components are initialised and assembled like in a Swing project; one could create a text box and put it into a form component, for example.

However, the concrete visual appearance is controlled by an HTML document alongside, in which HTML code is used with Wicket id attributes that reference the components in the Java code. The consequence is that the HTML code is pretty clean, there is practically no logic in it, and the complete logic is in Java classes. Wicket also abstracts from the servlet session, so that the developer never needs to touch the "naked" Servlet API.

Additionally Wicket has a lot of extra features like a validation framework, support for authorisation and authentication, and the like. Wicket is also very Java-oriented, meaning that there is hardly any XML-based configuration required to get a Wicket application up and running.

That said, the documentation is currently problematic and rather "example-driven". The examples themselves are not particularly well documented and partly difficult to understand, and the website and wiki are rather confusing from my point of view.

To get started I suggest to check out:
  • First check the homepage. Unfortunately the information there is rather brief, and the examples are only a small extract of the actually available ones
  • Download the recent Wicket distribution or get it from the SVN. In the distribution you also find a large number of (hardly documented) examples
  • The Framework documentation is an index about documentation on the Wiki.
  • The Wicket Tutorial (Platinum Solutions)
  • Wicket Javadoc
  • Additionally I highly recommend the Maven 2 archetype from the Jetty group to get started. I explained that in a recent post.
  • Additionally Wicket in Action is on the way for the patient ones.
My first impression of Wicket is quite a good one; however, there are issues that make me a little bit doubtful. One thing is the fact that Wicket keeps a quite "fat" session, which might cause problems with scalability.

On the conceptual level, however, I am not sure if I like the HTML/Java duo for each page. Somehow I feel it is odd when I write something like this (along the lines of the Wicket documentation examples):

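(Only a sketch with invented component ids, assuming the org.apache.wicket 1.3 packages:)

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.TextField;
import org.apache.wicket.model.Model;

// The Java side: the page declares a form and a text field
public class MessagePage extends WebPage {
    public MessagePage() {
        Form form = new Form("messageForm");
        form.add(new TextField("message", new Model("")));
        add(form);
    }
}
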
Why do I then additionally have to write an HTML document, in which these two components appear again, like so:

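(Again only a sketch; the wicket:id values simply have to match the ids used in the Java class:)

<form wicket:id="messageForm">
    <input type="text" wicket:id="message" />
</form>
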
Quite redundant, isn't it? In the Java class I already made clear that I want a form and a text field, so just render it!

I see that this approach gives a lot of freedom in how to render web pages. I am not sure, though, whether the better strategy in many cases might be to stick to the Swing concept of "layout managers". I.e., to refer to the example before: a layout manager (which I can select and customise) decides how the components are arranged on the page. As far as I can see, this is the strategy of the Google Web Toolkit. This approach avoids the aforementioned redundancy while also providing a clean separation between logic and presentation layer.
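For comparison, this is roughly what the GWT style looks like (a sketch using the standard com.google.gwt.user.client.ui widgets; the module class and widget choice are invented, and the usual module descriptor is omitted):

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.TextBox;
import com.google.gwt.user.client.ui.VerticalPanel;

public class MessageModule implements EntryPoint {
    public void onModuleLoad() {
        // The panel acts as a simple layout manager: it stacks its children vertically
        VerticalPanel panel = new VerticalPanel();
        panel.add(new TextBox());
        panel.add(new Button("Send"));
        RootPanel.get().add(panel);
    }
}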

Comments anyone?

:-)

Tuesday, July 31, 2007

[Pub] IEEE CEC/EEE Conference: Event Mining Paper

This week I had the opportunity to attend the IEEE CEC 2007 / EEE 2007 conference in Tokyo to present our paper Event Cloud - Searching for correlated business events. This event includes the 9th IEEE Conference on E-Commerce Technology (CEC '07) and the 4th IEEE Conference on Enterprise Computing, E-Commerce and E-Services (EEE '07).
The joint conference focuses on new technologies and methods that enable business processes to extend smoothly into a cross-enterprise environment, including solutions that facilitate business coalitions in a flexible and dynamic manner over the coming Next Generation Internet, which provides ubiquitous, multimedia, and secure communication services (Conference Link).
At this point I am happy to announce that our paper has been chosen for the "Best Paper Award" of EEE '07.

Event Cloud was first introduced in my diploma thesis Efficient Indexing and searching in correlated business event streams, which covered a proof of concept for managing and searching correlated events by applying an indexing approach to events. This work included several architectural iterations, with the result that an indexing approach was the most promising concept for such applications. Later on, Roland Vecera introduced a full-blown prototype of Event Cloud, including several key features, in his diploma thesis Efficient Indexing, Search and Analysis of Event Streams.
Event Cloud is basically a solution that allows domain experts to search for business events and patterns of business events within a repository of historical events. We consider this repository as a cloud of events, which is used for searching and analysis purposes. Event Cloud processes events, thereby creating an index for events and correlations between events in order to enable an effective event search. It provides a historic view of events with drill-down capabilities to explore and discover different aspects of business processes based on event correlations. Event Cloud allows users to investigate events, for example by picking out single events, displaying their content, and discovering related events or event patterns.
For further details and related paper downloads please visit Senactive Competence Center.

[Tech] Webapplication "Quickstart"

Creating (Java, what else? *g*) web applications with a new framework is not always an easy task: where to put which config file, which config options to set, how to set up the build process correctly, how to start the application in the servlet container, how to create an Eclipse project, and the like. Yesterday (thank you, Reinhard) I was pointed to a really great resource:

Webtide, the company behind the Jetty servlet-server project, provides a set of archetypes for Maven to create "hello world" web applications for a broad range of different web frameworks (Wicket, Struts, Tapestry, Spring, webapp with ActiveMQ, ...).

What does that mean?

(1) You download the artefact you are interested in, e.g. Wicket, unzip it and call mvn install (to install it in your local repository).

(2) you call for example:

mvn archetype:create -DarchetypeGroupId=com.webtide -DarchetypeArtifactId=maven-archetype-Wicket -DarchetypeVersion=1.0-SNAPSHOT -DgroupId=info.schatten -DartifactId=my-wicket-app

and the archetype creates a simple Wicket (or whatever archetype you selected) web application with Maven build settings, including the Jetty plugin. So type:

(3) mvn jetty:run and the application starts.

This is really a helpful set of artifacts to start from!

Tuesday, July 24, 2007

[Tech] Template Engine Stagnation?

Frank Sommers discusses several new attempts in template engine design and implementation, like StringTemplate, the Rails approach, Velocity and the like. Gert Bevin introduces this article on TheServerSide.com with the words:

"Template engines seem to be one of the most stagnant technologies in Java".

Actually this question is quite interesting for me, as I have made the same observation over the last years. Maybe the answer is very simple, though: the reason might be that template engines are actually not used that often, and hence there is not much demand.

For XML-related processing, XSLT is a proven and powerful technology; web development apparently moves away from template-based approaches, as new frameworks like Wicket or the Google Web Toolkit show. Then there are some minor application scenarios where template engines are used in the "backend", like the generation of Java code by O/R mappers or the generation of SQL statements and the like. But most developers use strategies one abstraction layer above, meaning: you do not put your SQL statements together with Velocity, you might use Hibernate or Cayenne.
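For readers who have never touched a template engine, this is roughly the Velocity programming model (a minimal sketch using the org.apache.velocity API; the template string and context values are invented):

import java.io.StringWriter;

import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.Velocity;

public class VelocityDemo {
    public static void main(String[] args) throws Exception {
        Velocity.init();

        // The context carries the data the template can refer to
        VelocityContext context = new VelocityContext();
        context.put("customer", "ACME Corp.");

        StringWriter writer = new StringWriter();
        Velocity.evaluate(context, writer, "demo", "Hello $customer!");

        System.out.println(writer.toString()); // prints: Hello ACME Corp.!
    }
}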

[Pub] End of Hibernation

I wrote an article in the current iX magazine (German) about O/R mapping strategies. I hope some of you will read the article and might want to discuss some aspects here.

In short, I discuss the significant differences between "the world of relations" and "the world of objects" and the strategies O/R frameworks like Hibernate and Castor use to try to overcome them. What seems simple at first glance turns out to be very complex in many details. If you do not believe me, check out the size of the (very good) Hibernate documentation. This is the meanwhile often-discussed problem that using Hibernate & Co. requires excellent and detailed knowledge of the framework, and the complexity of the undertaking is often underestimated by development teams.

The specific problem here, however, is that the lack of knowledge is not evident immediately. People think they have the framework under control and know what is happening, and the mapping might initially work fine, but during the runtime of the project it often turns out that things are not as smooth as initially thought. Unfortunately, severe problems and issues then bubble up at the worst of times: very bad performance, severe memory problems, mapping issues, session problems and the like.

In the article I suggest having a closer look at alternatives like the Spring JDBC templates or Apache iBATIS, or even at OO databases like db4o.
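To illustrate the kind of alternative I mean, here is a minimal sketch of the Spring JdbcTemplate style (the customer table and the DataSource setup are assumed to exist elsewhere):

import java.util.List;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class CustomerQueries {
    private final JdbcTemplate jdbcTemplate;

    public CustomerQueries(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Plain SQL stays visible and under the developer's control
    public int countCustomers() {
        return jdbcTemplate.queryForInt("SELECT COUNT(*) FROM customer");
    }

    public List findCustomerNames() {
        return jdbcTemplate.queryForList("SELECT name FROM customer", String.class);
    }
}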

The bottom line is: even if some articles might suggest otherwise, there is no single best or "default" solution. A good knowledge of the alternatives is required to choose the best framework for a specific problem.

P.S.: there are several articles in this blog dealing with related aspects; please check out the technology section or search for Hibernate, iBATIS, Spring, ... I am honestly too lazy to link all of them here ;-)