Friday, January 19, 2007

[Event] Software Engineering for Everyday Business - Review of the Last Event

Today we had the last event of the series "Software Engineering for Everyday Business" in cooperation with the chamber of commerce (Vienna) and the Austrian computer society OCG:

Dietmar Winkler opened the day by defining the term quality and giving an introduction to quality factors, quality management, the quantification of quality and measurement issues. He continued by describing quality assurance strategies using the examples of the V-Model (XT), the Rational Unified Process and Scrum. Then he described the process enhancement cycle following PDCA (plan, do, check, act).

In the next presentation, Dietmar explained strategies to review and audit software artifacts. He covered different review strategies and reading techniques (explaining typical checklist-based approaches) and the planning of software reviews. Following studies performed at our institute, he recommended usage-based reading (UBR), which prioritizes inspection according to business value, as a best-practice inspection technique.

The second speaker, Denis Frast, introduced the term testing and discussed the cost and efficiency of testing. Validation and verification were discussed in the context of the V-model. Fundamental test strategies were described:
  • "Private" tests
  • Module test
  • Integration test strategies (increment, Big-Bang...)
  • System-test
Integration tests can be leveraged by modern component-based software engineering strategies using containers like Spring, as these technologies allow flexibly changing the concrete binding between components and objects. This makes it possible to rewire systems, e.g., to use mock/stub components for testing and to integrate the system in various test or production environments.
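To sketch the idea in plain Java (all names are hypothetical; in practice the binding would live in a Spring configuration rather than be wired by hand): the component under test depends only on an interface, so a stub can be swapped in for testing without touching the component's code.

```java
// A collaborating component, known to the rest of the system only
// through this interface.
interface PriceService {
    double priceFor(String article);
}

// Production implementation (would normally call a remote system,
// which is unavailable in a test environment).
class RemotePriceService implements PriceService {
    public double priceFor(String article) {
        throw new IllegalStateException("no connection in test environment");
    }
}

// Stub implementation used during integration testing.
class StubPriceService implements PriceService {
    public double priceFor(String article) {
        return 9.99; // fixed, predictable test value
    }
}

// The component under test: the concrete binding is injected from
// outside - by hand here, by the container in a Spring setup.
class OrderProcessor {
    private final PriceService prices;

    OrderProcessor(PriceService prices) {
        this.prices = prices;
    }

    double totalFor(String article, int quantity) {
        return prices.priceFor(article) * quantity;
    }
}
```

Rewiring for a test is then a one-line change: construct the `OrderProcessor` with a `StubPriceService` instead of the remote implementation.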

The psychology of testing is also important to acknowledge. It suggests establishing dedicated test teams that focus only on testing and are not the developers of the system (module) under test. Developers themselves sometimes unconsciously try to prove their program right (show that it works) and do not necessarily try to push it to or over its limits. This is understandable, as a programmer is successful when he makes no mistakes, whereas a tester is successful when he finds errors.

Generally, two strategies to derive test cases can be identified: black-box and white-box methods. Black-box tests are derived from specifications, whereas white-box tests are based on the code structure.
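A small, hypothetical example may make the distinction concrete (function and specification are invented for illustration):

```java
// Function under test. Its (assumed) specification: shipping is free
// for order values above 100, otherwise a flat 5; negative values
// are rejected.
class Shipping {
    static int cost(int orderValue) {
        if (orderValue < 0) {
            throw new IllegalArgumentException("negative order value");
        }
        return orderValue > 100 ? 0 : 5;
    }
}

// Black-box: test cases come from the specification alone, e.g. the
// boundary values 0 and 100 (still costs 5) and 101 (free) - chosen
// without looking at the code.
//
// White-box: test cases are chosen by inspecting the code structure,
// so that every branch is executed - including the exception branch,
// e.g. with the input -1.
```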

Finally, Alexander Schatten gave a brief overview of modern tools to automate testing in projects:
  • Unit Tests
  • UI Tests
  • Codestyle Checks
  • Profiling
He particularly focused on the capabilities of tools like Checkstyle. This tool offers a great variety of options to check source code, ranging from coding conventions over metrics to code duplication. It will probably be described in a dedicated blog article soon. Finally, he outlined the features that modern profiling tools like Eclipse TPTP offer. They help solve difficult issues in complex software systems, such as tracking down performance and memory problems or detecting other runtime issues.

Please download all presentations from the Event-Website (partly in German).

Sunday, January 14, 2007

[Tech] EAExpression - Building a New DSL

Today everybody speaks about DSLs (Domain-Specific Languages) and how they can help solve problems in a specific domain better and more easily than general-purpose languages.

Senactive is developing a Sense and Respond System (InTime) which uses runtime objects - so-called events - to send information from one component to another. For more information on the system see our website. The problem we had was how to describe criteria like filters or rules on runtime objects at design time, in a way that can easily be written by users and evaluated within the runtime. Our first approach was using XPath, because C# has a very flexible XPath engine where you can implement the navigator on your own objects (maybe I will post about this another time, or Rupert will, as he did it :-)). We implemented the navigator some time ago, and the events can be navigated using XPath expressions. The power the language gave us was really great (functions, addressing, ...), but the tradeoff was that the language is not really intuitive for business users.

So we decided to implement a new language called EAExpression (Event Access Expression) Language. The language should be easier to understand for business users but should also have power similar to what XPath gave us. So we decided to implement a DSL, and after some brainstorming we came up with the following key points our language must provide:
  • addressing events - this includes events themselves and their attributes, which can be primitive types, collections, dictionaries, other events, ...
    we use a "." notation for this, e.g. Event1.Attr1.Attr2 can be used to address the attribute "Attr1" of Event1, which in this case is again an event whose attribute "Attr2" is evaluated. For collections we use the syntax Attr1[1] and for dictionaries Attr1["Test"]
  • Constant values (e.g. Strings "Test", integer 12, float 12.5f, boolean true|false, ...)
  • calculation - at least we need to calculate +, -, *, / and % (modulo)
  • boolean expressions AND, OR, XOR, NOT
  • comparison expressions (=, <, >, <=, >=), if possible chained
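The dot/bracket addressing described above could be resolved roughly as in the following Java sketch (the actual implementation is in C# and certainly differs; all names are hypothetical, and string dictionary keys are omitted for brevity):

```java
import java.util.List;
import java.util.Map;

// Sketch: resolve paths like "Attr1.Attr2" or "Items[1]" against an
// event represented as nested maps (events/dictionaries) and lists
// (collections).
class PathResolver {
    static Object resolve(Map<String, Object> event, String path) {
        Object current = event;
        for (String part : path.split("\\.")) {
            String name = part;
            Integer index = null;
            int bracket = part.indexOf('[');
            if (bracket >= 0) {
                // split "Items[1]" into name "Items" and index 1
                name = part.substring(0, bracket);
                index = Integer.parseInt(
                        part.substring(bracket + 1, part.length() - 1));
            }
            current = ((Map<?, ?>) current).get(name);
            if (index != null) {
                current = ((List<?>) current).get(index);
            }
        }
        return current;
    }
}
```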
After we knew what we wanted to support, we were looking for a flexible and easy-to-use lexer and parser generator and came up with ANTLR. It is really a very powerful tool with a great amount of helpful documentation, which you need if you develop a new language for the first time. ANTLR is a Java tool which can generate the code for the lexer, parser and tree parser in Java, C#, C++ and Python.
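For illustration, a grammar covering the operators listed above might look roughly like this in ANTLR-style notation (this is an invented fragment, not the actual EAExpression grammar):

```
expression  : boolTerm (('OR' | 'XOR') boolTerm)* ;
boolTerm    : boolFactor ('AND' boolFactor)* ;
boolFactor  : 'NOT'? comparison ;
comparison  : sum (('=' | '<' | '>' | '<=' | '>=') sum)? ;
sum         : product (('+' | '-') product)* ;
product     : atom (('*' | '/' | '%') atom)* ;
atom        : constant | attributePath | '(' expression ')' ;
```

The rule nesting encodes operator precedence: `%` and `*` bind tighter than `+`, which binds tighter than comparisons, which bind tighter than the boolean operators.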

Starting to play with it felt like sitting in a compiler course at university :-) . I really didn't believe I would need this stuff once again - I always said "compilers are just for geeks" - but I changed my mind as I got deeper into the material again. It took us about one and a half weeks to learn what was needed and build up the lexer and parser. But if I had to do it again, I would budget a maximum of two days for a similar grammar.

After the expression is parsed, there is an AST (Abstract Syntax Tree) of your language code, which can be easily navigated with ANTLR by implementing a TreeParser. The TreeParser can then be used to generate the code that should be executed in the runtime. This was the hardest part of building our language, because of missing support in the C# API for evaluating the numeric operations for the datatypes in a generic way. We were looking for an expression library that could support us, but in the end we had to do it on our own. We used wrapper classes for the datatypes in order to evaluate the calculations type-safely; another solution would be to use reflection.
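The wrapper-class idea can be sketched as follows - shown here in Java rather than C#, with invented names, and only for "+" on two types, so this is an illustration of the approach rather than the actual implementation:

```java
// Each numeric type gets a wrapper that knows how to apply an
// operation to itself and another value, so the tree parser can
// evaluate "+" without type-dispatch logic at every use site.
interface NumericValue {
    NumericValue add(NumericValue other);
    double asDouble();
}

class IntValue implements NumericValue {
    final int value;

    IntValue(int value) { this.value = value; }

    public NumericValue add(NumericValue other) {
        if (other instanceof IntValue) {
            return new IntValue(value + ((IntValue) other).value);
        }
        // mixed int + floating point: promote to the wider type
        return new DoubleValue(value + other.asDouble());
    }

    public double asDouble() { return value; }
}

class DoubleValue implements NumericValue {
    final double value;

    DoubleValue(double value) { this.value = value; }

    public NumericValue add(NumericValue other) {
        return new DoubleValue(value + other.asDouble());
    }

    public double asDouble() { return value; }
}
```

The same pattern extends to the other arithmetic and comparison operators; the alternative mentioned above, reflection, trades this boilerplate for runtime cost.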

In the end we built a wrapper class for the whole language, called EAExpression, similar to the XPathExpression object in C#.
Now you can do things like this:

Event ev1 = new Event();
ev1["Attr1"] = 12;
ev1["Attr2"] = 15;
ev1["Attr3"] = 1;

EAExpression expr = EAExpression.Compile("Attr1 < Attr2 + Attr3");
bool val = (bool)expr.Evaluate(ev1);


This is just a very simple example, where the first four lines show how an event is created.

We have learned several things while building the language:
  • don't be afraid to create a language for a specific purpose - sometimes it is really useful to do it
  • it was much less work than I expected, because there are several tools out there that can help you
  • the language you build should be as simple and easy as possible. Don't try to do fancy stuff or allow several ways to do the same thing; this will be more confusing than helpful to users.
  • in the end we needed to introduce predefined functions, e.g. Now() for DateTime.Now in C#, which was a little bit tricky
Things we don't have until now:
  • Autocompletion and syntax highlighting for user input within the GUI - I will post as soon as we have it, and how we solved it, because I think it is a very important part of languages for business folks.

[Event] Software Engineering for Everyday Business

On Thursday, Jan 18, 2007, the final event of the lecture series "Software Engineering for Everyday Business" in cooperation with the chamber of commerce Vienna and the Austrian computer society takes place at the Museumsquartier, Architekturzentrum.

In this final installment Dietmar Winkler, Denis Frast (and I) will discuss methods of quality assurance in software projects. Topics will be (among others):
  • Enhancement of software products and processes
  • Reviews and inspections
  • Methods of software testing
The lecture will be held in German, the slides will be mainly in English and will be available at the lecture page as usual.

[Tech] Generate the Persistence Layer for iBatis from the Database Schema

As I am writing some articles about iBatis these days, I came across the iBatis "subproject", or "tool", Abator.

I think it was obvious from my recent posts that I have quite some sympathy for the iBatis project. However, writing the persistence part with iBatis (and also with other persistence frameworks) still means a lot of typing. Typically you have to create the following artifacts:
  1. Create database schema and build a database from it
  2. Write the SQLMaps for each table, including all SQL statements, i.e. typically inserts, updates, selects and deletes
  3. Write the domain models (transfer objects)
  4. Write the data access objects following the DAO pattern typically using the Spring framework's iBatis templates.
  5. Write the unit tests for each data access object
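To give an impression of artifacts 3 and 4 from the list above, here is a minimal sketch (all names hypothetical; a real DAO implementation would delegate to iBatis, typically via Spring's iBatis support, rather than to the in-memory stand-in used here to keep the sketch self-contained):

```java
import java.util.HashMap;
import java.util.Map;

// Artifact 3: the domain model / transfer object for one table.
class Account {
    Integer id;
    String name;
}

// Artifact 4: the DAO pattern - callers see only this interface,
// never the persistence technology behind it.
interface AccountDao {
    Account findById(int id);
    void insert(Account account);
}

// Stand-in implementation; the iBatis-backed version would execute
// the SQL statements defined in the SQLMap (artifact 2).
class InMemoryAccountDao implements AccountDao {
    private final Map<Integer, Account> rows =
            new HashMap<Integer, Account>();

    public Account findById(int id) {
        return rows.get(id);
    }

    public void insert(Account account) {
        rows.put(account.id, account);
    }
}
```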
So this means you have to write four artifacts (assuming the database already exists) for each table you want to access. This is obviously tedious. So, how can Abator help you? Abator works like this:
  1. You create an XML config file (rather simple) containing the information about the database (URL, JDBC driver and the like)
  2. Fine tune the code generation setup, e.g., what type of DAOs should be generated: iBatis style (deprecated) or Spring DAOs.
  3. Add the tables you want to access into the config file
  4. Make some optional configuration steps (like: should the POJO be named differently from the table, should some fields in a table be omitted, should they be named differently in the POJO, ...)
  5. Define the target directories
  6. Start Abator
Then Abator creates the SQLMaps, the model beans and the DAOs in the desired technology (so you get artifacts 2-4 of the list above)! Now the question arises whether this makes sense, as you also have to write the Abator definition XML and learn to use the tool. Moreover, this is a "one time" thing: the generation step will be done only once, and then you will most probably modify the generated classes and SQLMaps to better fit your needs.
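A config file as described in steps 1-5 might look roughly like this (element names follow the Abator documentation as I remember it; all connection details, packages and table names are of course made up, so check the documentation before copying):

```xml
<abatorConfiguration>
  <abatorContext>
    <jdbcConnection driverClass="com.mysql.jdbc.Driver"
        connectionURL="jdbc:mysql://localhost/mydb"
        userId="user" password="secret"/>
    <!-- target directories/packages for the generated artifacts -->
    <javaModelGenerator targetPackage="com.example.model"
        targetProject="src"/>
    <sqlMapGenerator targetPackage="com.example.sqlmap"
        targetProject="src"/>
    <daoGenerator type="SPRING" targetPackage="com.example.dao"
        targetProject="src"/>
    <!-- one entry per table to generate artifacts for -->
    <table tableName="account" domainObjectName="Account"/>
  </abatorContext>
</abatorConfiguration>
```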

I personally think it will make sense in many cases, particularly when the database consists of many tables. Writing the Abator config is rather simple, and it generates all the artifacts for n classes in one step. These generated artifacts are often a solid starting point, much faster than writing everything from scratch.

Additionally, Abator is very easy to understand and allows "newbies" to get a set of SQLMaps, objects and DAOs, plus one example class for each table showing how to use the DAOs. This helps to get a quick and easy introduction to the iBatis concepts.
Remark: Please read the design philosophy part of the Abator documentation. It clearly states that this is a database-model-driven strategy. If that fits the project, it seems to be great; if the project or developer focus is more object-model-driven (which is seldom the case in enterprise projects), this approach might not fit.

Monday, January 08, 2007

[Tech] Best-Practices for Spring Configurations

I've found an interesting article about best practices for Spring configuration. More and more Java applications are based on the Spring framework. Over time, software engineers develop their own best practices, but hardly anyone publishes them. Here are some suggestions described in the article:
  • Avoid using autowiring
  • Use naming conventions
  • Use shortcut forms
  • Prefer type over index for constructor argument matching
  • Reuse bean definitions, if possible
  • Prefer assembling bean definitions through ApplicationContext over imports
  • Use ids as bean identifiers
  • Use dependency-check at the development phase
  • Add a header comment to each configuration file
  • Communicate with team members for changes
  • Prefer setter injection over constructor injection
  • Do not abuse dependency injection
Certainly some software companies and developers have their own best practices, but the ones mentioned above are general and should be applied in every Spring-based project.
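To illustrate a few of the points (ids as bean identifiers, setter injection, shortcut forms, a header comment), a bean definition following these practices might look like this (class and property names are invented for the example):

```xml
<!-- Order module wiring: DAO and service beans -->
<beans>
  <bean id="orderDao" class="com.example.dao.JdbcOrderDao">
    <!-- setter injection using the shortcut "ref" attribute form -->
    <property name="dataSource" ref="dataSource"/>
  </bean>

  <bean id="orderService" class="com.example.service.OrderServiceImpl">
    <property name="orderDao" ref="orderDao"/>
    <property name="maxRetries" value="3"/>
  </bean>
</beans>
```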

Maybe our readers want to contribute their practices?