Monday, June 29, 2009

[Misc] Hot deployment with Mule 3 M1

Some interesting news from the open-source ESB Mule: the first milestone of Mule 3 is out and brings an important new feature: hot deployment.

What is the meaning of hot deployment?

Hot deployment is the process of deploying and redeploying service components without having to restart your application container. This is very useful in production environments where multiple applications are connected over the enterprise service bus, because components can be updated without impacting the users of those applications.
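
To illustrate the principle (this is not how Mule actually implements it, just a toy sketch in plain Java with made-up names): a container can watch a deployment directory and reload applications whose archives appear or change, without ever restarting itself.

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    // Toy illustration of the principle behind hot deployment (not Mule's actual
    // implementation): a container thread polls a deployment directory and
    // (re)deploys an application whenever its archive appears or changes.
    public class HotDeploymentSketch {

        private final File appsDir;                        // e.g. an "apps" folder (assumption)
        private final Map<String, Long> deployed = new HashMap<String, Long>();

        public HotDeploymentSketch(File appsDir) {
            this.appsDir = appsDir;
        }

        public void watch() throws InterruptedException {
            while (true) {
                File[] archives = appsDir.listFiles();
                if (archives != null) {
                    for (File archive : archives) {
                        Long lastSeen = deployed.get(archive.getName());
                        if (lastSeen == null || lastSeen < archive.lastModified()) {
                            redeploy(archive);
                            deployed.put(archive.getName(), archive.lastModified());
                        }
                    }
                }
                Thread.sleep(5000);                        // poll every five seconds
            }
        }

        private void redeploy(File archive) {
            // In a real container: stop the old version, create a fresh class loader
            // for the archive and start the new version -- without restarting the
            // container itself.
            System.out.println("(Re)deploying " + archive.getName());
        }
    }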

Check out the example on the Mule homepage.

Thursday, June 18, 2009

[Misc] Resilient Services & Software Engineering

I recently read the interesting paper by Brad Allenby and Jonathan Fink, "Toward Inherently Secure and Resilient Societies", published in Science in August 2005 (Vol. 309) and, surprisingly enough, free to download. This paper was apparently "inspired" by the attack on the World Trade Center, but it discusses the resilience of the important systems our modern societies depend on in a more general way. The authors' definition of resilience is:
"Resiliency is defined as the capability of a system to maintain its functions and structure in the face of internal and external change and to degrade gracefully when it must."
They further state that:
"[...] the critical infrastructure for many firms is shifting to a substantial degree from their physical assets, such as manufacturing facilities, to knowledge systems and networks and the underlying information and communications technology systems and infrastructure.

[...] the increased reliance on ICT systems and the Internet implied by this process can actually produce vulnerabilities, unless greater emphasis is placed on protecting information infrastructures, especially from deliberate physical or software attack to which they might be most vulnerable given their current structure."
The authors apparently have physical infrastructure in mind (network backbones and the like); however, I am a bit more worried about the pace at which certain rather fragile IT services are becoming a foundation for our communication and even our business models.

In a recent blog post I wrote about my thoughts on Twitter, which have become even more relevant considering the latest political events in Iran and the use of this communication infrastructure in the conflict. Twitter is (as we know from the past) not only a rather fragile system; it is also proprietary, and there is no fallback solution in place in case of failure.

But Twitter is not the only example: many of the new "social networks" are proprietary and grow at a very fast pace, and one wonders how stable the underlying software, hardware and data-management strategies are. Resilience is apparently not a consideration in a fast-changing and highly competitive market. At least not until now.

But market forces are not the only troubling factor these days; there are also political activities that can affect large numbers of systems. Consider the new "Green Dam" initiative, in which Chinese authorities demand that every Windows PC has a piece of filter software pre-installed that is supposed to keep "pornography" away from children. This is of course the next level of Internet censorship, but that is not my point here. My point is that this software will probably be installed on millions of computers and, in case of security holes, poses a significant threat to the security of the Internet.

Analyses of the Green Dam system already reveal a number of serious issues: Technology Review, for example, writes about potential zombie networks, and Wolchok et al. describe a series of vulnerabilities. And this is not the only attempt in that direction. Germany, for example, is discussing "official" computer worms that would be installed by the authorities on the computers of suspects to analyse their activities. France and Germany want to implement Internet censorship by blocking lists of websites; the lists of blocked websites are not to be revealed, and it is questionable who controls the infrastructure. Similar issues can be raised here.

I believe that software engineering should also start dealing with the resilience of ICT services and describe best practices and test strategies that help engineers develop resilient systems, and that also allow the risks involved in deployed systems to be assessed. I am afraid we are increasingly building important systems on top of very fragile infrastructure, and this poses significant risks for our future society. This infrastructure might be fragile on many levels:
  • Usage of proprietary protocols and software that makes migration or graceful degradation very difficult
  • Deployment of proprietary systems to a large number of computers that cannot be properly assessed in terms of security vulnerabilities or other potential misuse, instead of providing the option to deploy systems from different vendors for a specific purpose
  • Single points of failure: many of the new startups operate only very few datacenters, possibly even in a single location
  • Inter-dependence of services (e.g. one service uses one or multiple potentially fragile services)
  • Systems that can easily be influenced by pressure groups, e.g. to implement censorship (centralised infrastructure vs. p2p systems)
  • Weak architecture (e.g. systems that do not scale)
  • Missing fallback scenarios and graceful degradation (see the sketch after this list).
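
To make the last point a bit more concrete, here is a minimal sketch of a fallback wrapper; the MessageService interface and all names are invented purely for illustration:

    // Minimal sketch of a fallback wrapper; the MessageService interface and all
    // names are made up for illustration. If the primary service fails, the call
    // degrades gracefully to a secondary channel instead of failing completely.
    interface MessageService {
        void publish(String message);
    }

    public class FallbackMessageService implements MessageService {

        private final MessageService primary;   // e.g. a centralised web service
        private final MessageService secondary; // e.g. a local queue or p2p channel

        public FallbackMessageService(MessageService primary, MessageService secondary) {
            this.primary = primary;
            this.secondary = secondary;
        }

        public void publish(String message) {
            try {
                primary.publish(message);
            } catch (RuntimeException e) {
                // Graceful degradation: fall back instead of propagating the failure
                secondary.publish(message);
            }
        }
    }
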
Comments?

Saturday, June 13, 2009

[Misc] Technical Debt

Recently I stumbled upon a smart blog entry about 'technical debt' (link).

The idea is quite nice: imagine that everyone has a 'perfect' software system in mind to be built. In fact, we live in the 'real world', where a 100% perfect project is always a goal but never the current status. But of course we all strive for 100%, just as we strive for 100% test coverage.

The fact is that some companies and developers write better code and some write slightly worse code. Now imagine if we could measure this 'badness'. Of course, a 100% accurate and objective measurement is not possible. But Sonar from Codehaus tries to go down that road.

Sonar shows the technical debt:
  • in $ (!!! ouch, this hurts)
  • in a spider chart
  • as numbers you can drill down into
What they measure includes at least:
  • Code coverage
  • Complexity
  • Code duplication
  • Violations
  • Comments
More measurements might be integrated soon. And you will surely agree that code comments should have a different weight than code complexity. Or should they?! What I suggested in my comment is that it would be great if this measurable debt became a standard for all projects.
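
Just to illustrate the idea (this is not Sonar's actual formula, and all weights and rates below are invented): a debt figure can be derived by translating a handful of metrics into estimated repair effort and multiplying by an hourly rate.

    // Rough sketch of the idea behind a technical-debt figure (NOT Sonar's
    // actual formula): turn a few code metrics into an estimated effort in
    // hours and multiply by an hourly rate. All weights and rates are made up.
    public class TechnicalDebtSketch {

        public static void main(String[] args) {
            // Hypothetical measurements for a project
            int duplicatedBlocks = 120;   // duplicated code blocks
            int uncoveredLines   = 4500;  // lines not covered by tests
            int violations       = 800;   // coding-rule violations
            int complexMethods   = 60;    // methods above a complexity threshold
            int undocumentedApis = 300;   // public APIs without comments

            // Assumed effort to fix one item of each kind (in hours)
            double debtHours =
                      duplicatedBlocks * 2.0
                    + uncoveredLines   * 0.05
                    + violations       * 0.1
                    + complexMethods   * 1.0
                    + undocumentedApis * 0.2;

            double hourlyRate  = 70.0;    // assumed cost of one developer hour in $
            double debtDollars = debtHours * hourlyRate;

            System.out.printf("Estimated technical debt: %.0f hours (~ $%.0f)%n",
                    debtHours, debtDollars);
        }
    }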

Software development companies could use a low debt as a marketing instrument, and they would likely sell more! Buyers would check the technical debt of the software they buy as a standard procedure. If the debt is low, the product might be a good, changeable investment that can grow.

If the debt is high, the vendor has a problem. Vendors might think they can lock buyers in because buyers don't check the technical debt. But I am sure times will change, and tools like this will be standard in IDEs in 5 to 10 years, even for checking projects written in multiple languages.

So for me it's time to confront the boss with the hard dollars he will have to pay back. Sooner or later. And the later, the more expensive. Let's fight for technical-debt and metrics analysis as a common procedure!

Stefan Edlich

Tuesday, June 09, 2009

[Tech] Cloud Computing

As I think I already mentioned here, I believe that cloud computing (and Software as a Service, but that is a slightly different topic) is a true game changer in our understanding of software infrastructure and development/deployment. Currently things are still quite rough around the edges, but I believe that in 3-5 years the default option for application deployment will be one cloud service or another. Putting iron into the cellar or storage room will then be what, in my opinion, it should be: mostly a stupid idea ;-)

In the current episode of IT Conversations, George Reese talks about practical aspects of and experiences with current cloud services like Amazon S3, SimpleDB, virtualisation... Recommended!
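
To give a feeling for how low the entry barrier already is, here is a minimal sketch of uploading a file to S3 with the AWS SDK for Java; the bucket name, key, file path and credentials are of course placeholders.

    import java.io.File;

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;

    // Minimal sketch of storing a file in Amazon S3 with the AWS SDK for Java.
    // Bucket name, key, file path and credentials are placeholders.
    public class S3UploadSketch {

        public static void main(String[] args) {
            // In a real deployment the credentials would come from configuration,
            // not from source code.
            AmazonS3 s3 = new AmazonS3Client(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Upload a local file under a hypothetical bucket/key
            s3.putObject("my-backup-bucket", "reports/2009-06.pdf",
                    new File("/tmp/2009-06.pdf"));
        }
    }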