Saturday, April 24, 2010

[Process] The Human Factor

In one of his recent blog posts Martin Fowler explains why he is not interested in participating in the Software Engineering Method and Theory initiative (SEMAT) by Jacobson, Meyer, and Soley. Now, this is not really big news, however I found the argument interesting. He refers to Alistair Cockburn saying: 
" [...] that since people are the central element in software development, and people are inherently non-linear and unpredictable - such an effort is fundamentally doomed"
This is, in my opinion, surprisingly thin reasoning. Don't get me wrong: I am myself not particularly interested in heavyweight processes, yet the idea to found software engineering on proven concepts, not on "fads", "fashion" and, I would like to add, "guru speak" sounds good to me. Software engineering, I believe, is still strongly driven by opinion and authority, not (scientific) evidence. I don't know much about the particular approach though.

However, the "people as a central element" argument is in my opinion flawed to the extreme. People are involved, indeed central, in most technical enterprises: from flying to the moon to medical treatment to installing a bathroom. And yet most of these activities have standards, a scientific foundation and well-established best practices. This is not to say that every doctor is good, far from it, nor that every plumber does proper installations. Yet we would not excuse bad medical treatment or a leaking toilet with "well, there are humans in the center...". Come on.

There is a second misunderstanding at the core of the argument: it is true that individuals are often non-linear and unpredictable. At least this is what we all want to believe. Who does not want to be unique? We are all artists, coding gurus, geeks; everyone is brilliant and irreplaceable. And yet psychology shows us pretty convincingly that we are far more predictable than we would like to believe, even in our irrationality, particularly in larger numbers and over longer periods of time. Now, every project is somewhat unique, yet there is a lot we can learn about human nature and human/technology interaction. I claim that it is simply not true that software projects are inherently unpredictable. At least not due to the human factor.

We simply have not learned our lessons yet. Agile processes, for example, have shown significant improvements where applied properly and where management and customers participate. There might still be a (long) way to go, but human nature surely does not serve as a proper excuse.

Wednesday, April 21, 2010

[Arch] Build scalable systems that handle failure without losing data

I found a very interesting article on the MSDN Architecture Center that illustrates a real-life use case of a scalable system. Designing and building scalable systems is one of the major challenges for software engineers. A lot of best practices and patterns illustrating the problem exist on the web, but the specific design and implementation differ between projects. This article tells a real-life example of such a system and the essential steps that were taken in order to build a scalable system that also handles failure without losing data. The following topics are covered:

  • HTTP and Message Loss
  • Durable Messaging
  • Systems Consistency
  • Transactional Messaging
  • Transient Conditions
  • Deserialization Errors
  • Messages in the Error Queue
  • Time and Message Loss
  • TimetoBeReceived
  • Call Stack Problems
  • Large Messages
  • Small Messages from Large
  • Idempotent Messaging
  • Long-Running Processes
  • Learning from Mistakes

Tuesday, March 16, 2010

[Misc] Programming from Scratch - A Rareness

When you learn a programming language you usually develop small applications to analyze the pros and cons of the language. Another important part is to check the range of available libraries, frameworks and, last but not least, the tool support.

Programming from scratch is the unusual way to develop a software system these days. In Mike Taylor's blog there is a very interesting discussion titled "Whatever happened to programming?". In former times we wrote compilers, operating systems and cool stuff like that. Today, the programming world is busy looking for libraries for any kind of problem and pasting them together.
Taylor puts it like this:
"We did all those courses on LR grammars and concurrent software and referentially transparent functional languages. We messed about with Prolog, Lisp and APL. We studied invariants and formal preconditions and operating system theory. Now how much of that do we use? A huge part of my job these days seems to be impedance-matching between big opaque chunks of library software that sort of do most of what my program is meant to achieve, but don’t quite work right together so I have to, I don’t know, translate USMARC records into Dublin Core or something. Is that programming? Really?"
Read more about this topic in his blog. I think that, especially in the Java and open source area, frameworks help you to concentrate on the business case. The adoption of such frameworks decreases the initial development time of new applications. But it also depends on your knowledge of the technologies and libraries used. Sometimes it will be better to write parsers and utility classes from scratch instead of using an open source library.

On this topic, the JaxCenter provides an online quick vote in German.

Sunday, February 28, 2010

[Misc] Future IT Trends

Just a quick post:

I (and DZone readers) found this link quite interesting. It shows technology trends, and the interesting figures are not the absolute values but the technologies with the fastest growth.

Have a look: IT job trends - Which technologies you should learn next

Is the horse you bet on in the list?

Thursday, February 18, 2010

[Tech] GIT:Mercurial = Assembler:Java

I have been using Mercurial pretty regularly for about half a year now, and I have also been (forced) to use GIT recently. And I must say that I am not pleased with the GIT experience at all. An initial statement first, though: I am not arguing about features here; there is no doubt that GIT is an extremely powerful and reliable source code management system. But the user experience is, in my opinion, questionable at least. 

The first time I got into contact with distributed SCM systems was through Mercurial. There are some new concepts that have to be understood (coming from Subversion), but generally speaking it is pretty easy to start with. Simple things are simple, complex things are mostly reasonable to understand. The basic set of commands and switches is kept small and hence easy to learn and understand. One is not flooded with commands and options in the beginning; specific functions (e.g. patch queues, rebasing, ...) can be switched on later by enabling the corresponding extension. About 35 extensions are part of the Mercurial distribution and can be enabled by adding one line to the config file. Other extensions can be installed when needed.

In my opinion, this is a very clever way to hide unnecessary complexity in the beginning and to provide the new user with a clean and simple set of commands. Later on, enabling specific extensions allows a "fine-tuned" feature set. Along comes a concise but pretty good help documentation. One example: "hg help log" explains the log command in about one screen page.
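To illustrate: enabling a bundled extension really is a one-line affair in the Mercurial config file (here with the graphlog extension, which provides "hg glog", and mq for patch queues):

```ini
# ~/.hgrc (Mercurial.ini on Windows)
[extensions]
# ships with Mercurial; adds "hg glog", an ASCII graph of the history
graphlog =
# patch queues, off by default until you need them
mq =
```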

The first encounter with GIT, on the other hand, is in my opinion rather terrible. Try "git help log" and you get around 20 (sic!) screens of Unix-style documentation (and this is not meant as a compliment) for the log command. There is a lot of documentation on the GIT homepage though. But there are a lot of other "goodies" in the man pages as well. One more example? 

Well: "hg glog" shows me a representation of the history of my repository including branches. "git log" only shows the current branch. OK, so there should be an option to control that. Well, first of all: good luck with 20 man pages... Then I searched for "/branch" in the man page and found the "--branches" switch (which apparently does the job). Yet what do I find as the explanation of this switch?
--branches: Pretend as if all the refs in $GIT_DIR/refs/heads are listed on the command line as &lt;commit&gt;.
I am a very peaceful character, but honestly, when I read such a statement in a piece of documentation, I feel an urgent need rising to beat the developer who wrote that line.
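For what it's worth, the combination that finally gave me something resembling "hg glog" output is roughly the following (the throwaway repository is only there to make the sketch self-contained; the flags should work with any reasonably recent Git):

```shell
#!/bin/sh
set -e

# Throwaway repository, just so the log command has something to show.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second commit"

# The rough equivalent of "hg glog": all branches as an ASCII graph,
# one line per commit.
git log --graph --oneline --branches
```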

The Git community book, which is (according to the website) "meant to help you to learn how to use GIT as quickly and easily as possible", starts in the first chapter with a detailed explanation of the internal GIT data model instead of explaining fundamental principles of DSCM and Git. WTF? To be fair, there is a set of other documentation artefacts on the website that appear to be significantly better for starters.

The main issue, all considered, is from my point of view the lack of encapsulation/layering of functionality. What is done great in Mercurial (e.g. with extensions) is done very badly in GIT. My feeling with GIT is that there is at least one level of abstraction too few in the design. The user experience reminds me of the less good student projects I have seen over the years: in many of these projects there is no need to take a look at e.g. the ER diagram or database schema; a glance at the GUI is sufficient. Each database table is represented in the GUI, probably separated by "tabs". "Great" design: all internals laid out to the user, who is usually not interested in technical details but in solving a specific (higher-level) problem.

I got a similar feeling with GIT: no doubt it is technically on a very (!) different level than the mentioned student projects, but the usability feels pretty much the same. With nearly every interaction I do not have the impression of solving my SCM problem, but of interacting with technical details I am not actually interested in. It feels like cars in the 1920s (or Vespas in the 90s): you spend more time under the hood fixing some crap than driving to your destination.

I really hope that the GIT team is going to improve the user interface in the future, allowing low-level "assembler commands" for specific purposes and experts, plus a significantly better abstracted set of commands for day-to-day operations. Consequently, at the moment, I would definitely recommend Mercurial over GIT, simply because of the much better user experience and layering/abstraction of functionality.

P.S.: Five minutes after I wrote this article, I noticed that Martin Fowler wrote an article yesterday about VCS.

Wednesday, February 17, 2010

[Tech] Balsamiq Mockups

If you develop client or web applications providing a user interface, you end up with questions like:

  • Which GUIs do we provide?
  • Which elements (input fields, buttons, etc.) should the GUI contain?
  • What is the structure of the GUI?
  • and many other questions
Usually you have several workshops with the end users who will work with the final software system. Balsamiq Mockups is a great tool to create mockup GUIs very quickly. I always use this tool in workshops with the customer. With this smart software product you can create and tweak UI designs in real time during the meeting, and the user sees the result immediately. The tool provides predefined mockup elements, like buttons, tables, fields, tabs and many other common GUI elements. There are also elements available for iPhone applications. A design of a GUI can look like this:
A more complex GUI:

Several designs can be exported to PNG images, so you can create some variants and walk through use cases with the GUIs. Another very important point for GUI developers is to identify common GUI components that are used in different modules of your application.

Don't draw the GUIs by hand; the time is too valuable to waste on it.

Monday, February 15, 2010

[Arch] Event Based Programming

In his blog / Twitter feed (worth following) Ralf Westphal writes this:

> A "must read" for everyone interested in Software Architecture:
> About the damn being in Software Development - Coupling:
The expanded link is here.

The link references a very good Apress book! (Some pages between 100 and 300 are missing.)

The only thing I miss in this book is a note that there are many tools out there to supervise and reduce coupling. So event-based programming is not the only solution to complexity/coupling.

Tuesday, February 02, 2010

[Pub] Eclipse Plugin for Mule and Mule Data Mapper

The main topic of the current Eclipse Magazin is called "Plugin Parade", where I published a short article about the new Mule IDE and the Mule Data Integrator, two plugins for Eclipse. The Mule IDE provides an integrated Mule server for Eclipse, which makes testing Mule environments in Eclipse very comfortable and easy. As data transformation is a significant part of an ESB, graphical support such as the Mule Data Integrator provides a powerful tool for integration developers. The Mule Data Integrator is an end-to-end solution for complex data integration and transformation, simplifying the development, maintenance, and deployment of data maps. One of the major advantages of the Data Integrator is the integrated test suite.

The combination of the Mule IDE and Mule Data Integrator provides a really good environment for your integration development.

Tuesday, January 26, 2010

[Conf] Cloud Computing at OOP 2010

Today I attended the session "Cloud Computing ohne Buzzwords - und wie sieht die Zukunft aus" ("Cloud computing without buzzwords - and what does the future look like") at OOP 2010, which provided an overview of cloud computing and its effects on present software architectures. Until this day I had no experience with cloud computing, and this session gave me the opportunity to hear some basic information about cloud computing and the benefits and risks that such a paradigm comes with. Good candidates are Amazon or Google; both provide a wide range of services around cloud computing.

A very interesting point is that RDBMS do not work very well in the cloud, and alternative systems such as "NoSQL" databases become more popular through cloud computing:

Next Generation Databases mostly address some of the points: being non-relational, distributed, open-source and horizontal scalable. The movement began early 2009 and is growing rapidly. Often more characteristics apply as: schema-free, replication support, easy API, eventually consistency, and more. So the misleading term "nosql" (the community now translates it with "not only sql") should be seen as an alias to something like the definition above [Source:]

From my point of view, new kinds of software systems, such as Facebook, LinkedIn and Twitter, make use of these new mechanisms to persist their data. There are already interesting frameworks, such as CouchDB, Amazon SimpleDB and many others. A list of NoSQL candidates can be found here.

The session also provided some other interesting information:
  • Characteristics of cloud computing
  • Public and private clouds
  • Grids vs. clouds
At present, cloud computing is the new hype, and software engineers must track this topic!

Friday, January 15, 2010

[Misc] Software Carpentry

In "Interviews with Innovators", Jon Udell recently talked with Greg Wilson. Greg Wilson is well known for his "Software Carpentry" courses. These courses do not focus on computer science students but mainly on students from other scientific disciplines like chemistry, physics or engineering in general. His goal in the "carpentry" courses is, as I understand it, to teach scientists who need software for their work the essential tools and practices. I have the feeling that this is pretty similar to our "best practice" efforts like our websites, this blog and the book.

However, the motivation is very interesting. One nice example he gave comes to my mind: he noticed that scientists (apparently using Matlab and the like) use functions mainly for code that is not used any more. Pretty counterintuitive? Well: they tend to write their code in "spaghetti" script style (as is possible in many scripting languages, e.g. also in Python); as soon as they make major updates they put unused code (which they still want to keep for some purpose, because they are also not aware of versioning tools) into a function, because code in a function is not executed automatically. Nice approach ;-)
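A minimal sketch of this "versioning by function" anti-pattern (in shell here, though the anecdote was about Matlab and Python; the mechanism is the same, and the function name is of course made up): top-level code runs on every execution, while code parked in a never-called function is only parsed, never run.

```shell
#!/bin/sh

# Top-level "spaghetti" code: executed every time the script runs.
echo "current analysis: running"

# Obsolete code, parked in a function that is never called.
# It is parsed but not executed -- the poor man's version control.
old_analysis_v1() {
    echo "old analysis: runs only if called explicitly"
}
```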

However, aside from these extreme examples there is, in my opinion, a lot to learn also for education in "regular" computer science classes. As Wilson puts it in an example:
"The research money and PBS Nova programs focus on artificial hearts when in fact all the increase in longevity comes from clean water, anti-smoking campaigns, better nutrition, vaccination and the like. These routine public health measures are no longer exciting, so they are actually losing ground."
The same is probably also true in software engineering education: 95% of engineering students, and probably also of IT students, will not develop the next "Google" but some web forms that connect to a database and produce some reports. But this should be done in an efficient and maintainable way.

I believe this is an interesting thought: we develop and focus on the latest "cutting edge" technologies, probably leaving the majority of developers behind and, probably even worse, confusing them more every day with ever new approaches and tiny improvements, instead of focusing on the problems the majority of developers and companies have.

Listen to the interview and share your opinion!