Thursday, February 21, 2008

[Misc] Yes, there is life outside Eclipse land :-)

A colleague sent me an interesting email that reminded me of what I recently wrote about the Hibernate/Cayenne issue. Eclipse is a great project, but yes, there are other very good IDEs available that should not be overlooked, as they might be more productive for certain scenarios or programmers. NetBeans has a somewhat shaky history, but has been part of Sun Microsystems for some years now and has become a fierce competitor to Eclipse. My feeling is that the foundation of Eclipse (OSGi, ...) is better, yet NetBeans definitely deserves a second look.

Now the NetBeans project is offering $1 million for projects in the NetBeans environment, namely 10 grants for larger projects ($11,500 each) and 10 grants for smaller projects ($2,000 each), plus some additional special prizes.

Submission started on February 1 and ends on March 3, so get moving ;-)

For details, check out the NetBeans grant site.

Wednesday, February 20, 2008

[Arch] Lightweight UML

UMLet

OK, this time I have a guest article from Martin Auer, who is an affiliate of my institute; he and his team developed a small UML tool called UMLet. This tool can be of great use in many use cases, and Martin gives us an overview here:

Goals and Design Rationale

UMLet is an open source UML tool developed by affiliates of our institute. This post outlines the tool's goals and design.

UML has become the standard modeling language in software design. It is often used to create early, exploratory sketches of a system, in industrial and academic environments alike. Yet this is difficult with many commercial tools: they are bloated with seldom-used features; they require tedious procedures and pop-up windows to change even small parts of a model; and they restrict the user by strictly conforming to the official language specification.

UMLet's main goal is to let people create UML diagrams like they would on paper - fast, intuitive and free of restrictions. It uses three main approaches to achieve this. First, UML elements are not represented by tiny, obscure icons, but by full-fledged template diagrams. That way, even first-time users can readily create diagrams in a template-based way. Even experts benefit by being reminded of the syntax of some of the more arcane element types.

Second, elements are modified not via extensive dialogs, but with a text-based approach and a simple markup language. This, e.g., greatly speeds up the creation of attributes and methods for classes. UMLet also provides fast, text-based ways to create sequence and activity diagrams - avoiding the heavy mouse lifting required by other tools.
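
To give an idea of that markup (a small sketch from memory - the exact syntax may differ between UMLet versions): the text panel of a class element simply lists the name, attributes and methods line by line, with "--" starting a new compartment:

    MyClass
    --
    -name: String
    -size: int
    --
    +getName(): String
    +resize(factor: double): void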

Finally, UMLet provides an integrated Java engine to create new graphical element types on the fly, without restarting the tool. This provides maximum flexibility in light of an ever-changing UML standard.

UMLet gets a favorable nod in one of Matt Stephens' blog posts titled "Penguin-powered UML modeling". Matt also comments on the design principles of UML tools in general in his post "Tools vendors stuck on UML and agility".

Conclusion
In one of my next postings I would like to talk a little bit more about UML tools, as I did some "research" on that topic recently. As Martin points out, often there is really no need for a bloated and expensive "big" UML tool, e.g. when you just need some UML sketches for a lecture or for documentation. In these cases I myself ended up using UMLet, which of course has its limitations, but turns out to be great for such scenarios.

Tuesday, February 19, 2008

[Arch] Lazy O/R Mapping

As you might know already, I am not the biggest fan of O/R mapping tools, but I understand that they can be of great help in many projects. However, a nice article discusses the dominant position of Hibernate, which is really a pity: many developers these days apparently believe Hibernate is the only option out there. It is not. As the article points out, there are other, probably better solutions like Apache Cayenne (and of course my favorite, iBatis, which is not really O/R mapping anyway).

The main reason for this blog entry, though, is a reference from that article pointing to a Javalobby article that gives a nice introduction to the lazy loading problem (highly recommended for all O/R rookies) and the problems associated with it in Hibernate.
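
To make the trap concrete for the rookies (a minimal sketch in Hibernate style; the Order entity and its lazily mapped items collection are made up for illustration): a lazy collection is only a proxy, and touching it after the session is closed fails at runtime.

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    // Sketch of the classic lazy loading trap.
    public class LazyLoadingTrap {

        public static void show(SessionFactory factory) {
            Session session = factory.openSession();
            Order order = (Order) session.get(Order.class, 42L);
            session.close(); // the persistence context is gone from here on

            // The items collection was never touched inside the session;
            // accessing the proxy now throws a LazyInitializationException.
            System.out.println("Order has " + order.getItems().size() + " items");
        }
    }

    // Hypothetical entity, mapped with a lazy one-to-many "items" collection.
    class Order {
        private List items;
        public List getItems() { return items; }
    }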

Monday, February 11, 2008

[Misc] Global Warming and the SE Power of Ten

OK, I confess, I did not find a better title for this blog entry. Now the question is: where is the connection between global warming and software engineering? Actually, this is exactly one of the things I am trying to find out these days.

We all face a severe challenge. Global warming is real, and so is the fact that resources, particularly energy resources, are limited. IT is often seen as a "cleaner" way to do things; supposedly, doing things in the virtual world is less resource-intensive than doing them in the real world. E.g., if four people have a Skype meeting, this should be less resource-intensive than having three of them travel by plane to the meeting location.

Yet the IT industry, and now I come to the point, particularly the software industry, has not been very interested in efficiency over the last decade. We develop software that somehow runs on current hardware, because with the next generation of hardware it will be fine. This is actually embarrassing. Consider software engineering practice: there is a lot of talk about clustering, about putting more iron into the backend if the application is slow, but who is really skilled in analysing an application, figuring out where the hot spots are, and optimizing those? Don't worry, just start up another server, that will do it.

Now the consequence is that power consumption by IT (servers) has increased dramatically over the last years. Ars Technica writes that US servers meanwhile consume more power nationwide than color TVs, and that this energy consumption is doubling every 5 years (!). Meanwhile, even companies like Google have realized these facts and are initiating research in the field of renewable energies.

Some other examples struck me recently. I compared three recent game consoles: the PlayStation 3 and the Xbox 360 consume approx. 200 watts during play (some sites even quote numbers up to 300), the Nintendo Wii approximately 20 watts. So that is, roughly calculated, a factor of 1:10. The PlayStation 2 takes approx. 50 watts, the GameCube approx. 20 watts.

Second example: the XO laptop from the OLPC project consumes about 2 W during regular work, a conventional laptop about 10-45 W; again we have a factor of roughly 1:10, maybe more.

Now, it is clear that the Nintendo Wii is not as powerful as a PlayStation 3, and the XO laptop is not as powerful as a MacBook Pro. Yet, is the difference 1:10? That is the question. I was recently playing a rather new PlayStation 2 game and was astonished how much the quality of the graphics has still improved compared with games from 5 years ago. The same observation holds for the old C64: compare the quality of games from the early 80s with the later ones. Clearly, developers learn how to operate a device and, over time, get the best out of the box; and it can be astonishing what is in these devices.

The point I want to make is this: up to now, energy consumption has hardly been an issue for us software engineers, as it seems. The result is that inefficient programming wastes a lot of hardware capacity, because we just do not care. The OLPC project was very important, as it showed us what can be done with a laptop using 2 W! Now, I do not care whether the XO is the next big thing or the Asus Eee (Christoph wrote a good article about this issue), but the point is that we as software engineers should also start thinking about resources much more than we have done so far. It is just embarrassing when a PS3 consumes 10 times the energy of a Wii (which is probably also not as optimised as it could be), or when a laptop consumes 10 times the amount of an XO while the user is just typing plain text.

Now what can we do? How can we incorporate this issue into teaching and training young engineers? One thing that comes to my mind immediately is the use of profiling tools. Listen, for example, to the presentation by Rasmus Lerdorf (yes, PHP, I know *g*, still...) on IT Conversations. I was not so interested in PHP itself, but what he said about profiling and optimizing web applications was very interesting. It seems that in many, if not most, web applications there is easily the potential for a factor-of-10 efficiency gain. What does that mean? Not only is our application faster; during regular operation it consumes less energy and needs fewer servers, hence consuming fewer hardware resources for the same work as before.
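
Even before reaching for a full-blown profiler, students can start with a poor man's version (a Java sketch; findCustomers() is a hypothetical placeholder for a suspected hot spot): measure, optimize, measure again.

    // Poor man's profiling: time a suspected hot spot by hand.
    public class PoorMansProfiler {
        public static void main(String[] args) {
            long start = System.nanoTime();
            findCustomers(); // the suspected hot spot
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println("findCustomers() took " + elapsedMs + " ms");
        }

        private static void findCustomers() {
            // placeholder: e.g. a database query or a rendering step
        }
    }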

And the latter in particular is important: these days we tend to think of operational costs, i.e. CO2 emissions and energy use during operation. However, much of the energy is consumed before the device or server even goes into operation, namely by manufacturing it. So whenever we can avoid buying a new server, we should!

Maybe there are other things to be done? For example, consider a typical server on which several applications run in parallel. As in energy consumption and power plant/grid planning, when all apps want to do resource-intensive things at the same time, we have to provide a server that can handle the peak load. Could applications communicate with, e.g., the task scheduler about the current "computing cost" on the machine, and, if it is too high, postpone or "nice" the currently planned activity? A rough sketch of the idea follows below.
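
With standard Java means something in this direction can at least be approximated (all names and the threshold here are made up for illustration): a background job asks the JVM for the current system load and postpones itself while the machine is busy.

    import java.lang.management.ManagementFactory;

    // Sketch: a load-aware batch job that backs off while the machine is busy.
    public class NiceBatchJob {
        private static final double MAX_LOAD = 2.0; // assumed threshold

        public static void main(String[] args) throws InterruptedException {
            // 1-minute load average; returns -1 if the platform does not support it
            while (ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage() > MAX_LOAD) {
                System.out.println("Machine busy, postponing the batch run...");
                Thread.sleep(60 * 1000); // back off for a minute
            }
            System.out.println("Running the resource-intensive work now.");
        }
    }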

I must confess that I do not have more ideas at the moment, but I wanted to get this issue out there, hoping for some interesting ideas and reactions from the readers.

Monday, February 04, 2008

[Tech] About Google Android: Old Wine in New Skins?

To give the quick answer immediately: I don't think so.

I have been dealing a little bit with Android over the last weeks, and I must say that I like the concept a lot. There are several differences from existing systems that are noteworthy:

Liberté

Android is published under the liberal Apache license and defines the complete stack, from the Linux kernel up to the application API. This whole stack is Open Source, i.e. handset developers are free to implement it without license fees. Let's hope now that the members of the Open Handset Alliance see the opportunity as well.

The concept of Android, in my opinion, directly targets the typical mobile phone business, where network carriers pretty much define the functionality of the handset. This was and is IT in handcuffs, and it is one reason why hardly any useful mobile applications and user interfaces existed. The new systems (not only Android; there are others, like OpenMoko) could change the game. A mass of developers could finally discover the mobile platform and develop innovative applications.

I personally think that this openness is also a chance for developing countries, where mobile phone penetration is typically much higher than PC usage. Cheap, locally produced Android handsets could provide richer access to information resources than conventional mobile handsets do.

Egalité

First, as opposed to e.g. the iPhone, all Android "end-user" applications are equal. I think this is a very important conceptual difference from other platforms. It means that even "core" applications like the address book or the phone manager could be exchanged. This is quite a different concept compared to typical Java ME applications, which run as "Java applications" started from a special folder on the phone. A home-brewed Android application runs on the same level as the default applications provided with the phone.

Interestingly, this seems to be a concept feared by Apple (iPhone) and other companies; however, IT history has shown that locked-up systems were hardly capable of bringing forth innovative solutions. Let's wait and see whether this concept works out; I am personally optimistic.

Fraternité

Communication (between applications) is a core part of the concept. I particularly like three aspects:
  • Activities and Intents: An Activity is, e.g., a screen that interacts with the user. The interaction itself is expressed by an Intent. Now the nice thing is that these Intents can use late binding across applications. That means: if the user clicks, e.g., on a telephone number in the address book, a "dial phone" Intent is initiated; which application actually "processes" this Intent can be decided in the runtime configuration (see the sketch after this list).
  • Rich persistence and communication APIs, including a relational database, a key/property store, network access, XMPP (Jabber) libraries and so on.
  • Content Providers define interaction and data access between applications.
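
To illustrate the late binding of Intents (a small sketch; the helper class is made up, and exact API names may still shift between SDK milestones): the code only states the intention to dial, and the platform decides at runtime which activity handles it.

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;

    // Sketch: fire a "dial phone" Intent; which application processes
    // ACTION_DIAL is resolved at runtime, not at compile time.
    public class DialHelper {
        public static void dial(Activity context, String number) {
            Intent dial = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:" + number));
            context.startActivity(dial);
        }
    }
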
Finally, Development in Java

Applications have to be written in Java, which is good news for the large Java community; however, there is one issue to be taken into consideration: the Android platform uses the so-called Dalvik virtual machine, which is a Google-internal development and has the drawback (?) that it is not binary compatible with other Java VMs.

Specifically, this means that it is not possible to use .class or .jar libraries directly on Android. This does not work for two reasons: (1) as mentioned, Dalvik cannot use .class files; these have to be converted to .dex files, e.g. with the dx tool from Google. (2) The Dalvik VM implements only a subset of the Java 2 SE library.

However, this problem should not be too significant. One month after the Android launch, the first projects published special Android packages, e.g. the excellent db4o object-oriented database, which could become quite popular on Android.

From the developer's point of view, Google provides quite good documentation, command line tools and Eclipse plugins, including hardware emulators for different types of mobile phones. I also suggest checking out the Android Developer Challenge!

Friday, February 01, 2008

[Tech] Maven as standard build tool?

Three years ago I had my first contact with Maven (I started with version 1), and I came to appreciate its dependency management. Okay, there was a lot of configuration to do and the performance in bigger projects was unsatisfying, but I think my build processes became clearer through the use of Maven, especially in the deployment area. Maven 2 was a complete redesign of Maven 1 and introduced major new features, like transitive dependencies, Mojos and other nice things. My first projects based on Maven 2 ended up with a lot of unused jars in my lib directory, because there were problems with the transitive dependencies, especially with their scopes. Unfortunately, Maven 2 does not work very well when developing Eclipse-based applications (RCP, plugins).
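
To illustrate the dependency and scope mechanism (a minimal POM fragment; the group/artifact ids and versions are just examples): a compile-scoped dependency brings its own dependencies along transitively, while a test-scoped one stays out of the shipped artifact.

    <dependencies>
      <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate</artifactId>
        <version>3.2.5.ga</version>
        <!-- compile scope (the default): pulled in transitively -->
      </dependency>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.4</version>
        <scope>test</scope>
        <!-- needed for tests only, not shipped with the artifact -->
      </dependency>
    </dependencies>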

We still use Maven 2 as the standard build system in our company and profit from it in many areas:
  • Company based settings
  • Company repository, including 3rd-party libs and in-house components --> no more jars in our SVN
  • The repository was set up with Artifactory
  • Dependency management
  • Documentation
  • Integration tests and their documentation
  • All projects have the same structure
  • Continuous integration
  • Major open source projects are based on Maven, and many others will migrate to Maven in the near future
  • There are a lot of plugins, and you can write your own
  • Archetypes, providing a basic structure for your projects or component development.
The idea for this blog entry came from a discussion on the InfoQ site about "Maven the right tool for build". Our best-practice sample is also based on Maven 2.