Software Architect 2015: Day 2
The second day of sessions at this year’s Software Architect conference is over. To follow up from my overview of day 1, here are my thoughts on day 2.
Big Data’s Place in Microservice Architecture
By Gary Short
The morning kicked off with an excellent session discussing how Big Data(™) can fit into a modern microservices-based architecture. Gary began by giving a primer on both microservices and big data to ensure that everyone had enough background. I didn’t find anything new in the microservices section, but I’ve not had the need to deal with any real-life Big Data, so his discussion of the background of Hadoop, as well as the different data processing mechanisms (batch and real-time processing), was useful.
The second section of the talk gave examples of how both batch jobs and real-time processing of data can be exposed as services. For batch jobs, it’s a case of creating one or more services that:
- Define and submit the job to the Hadoop cluster, plus publishing a “started” event to an event bus;
- Cache the results once the job has been processed (his example used Redis as a caching layer), and publish a “ready” event;
- Expose the result to any client that requires the data.
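The three-service pattern above can be sketched in a few lines. This is purely illustrative: the Hadoop cluster, event bus, and Redis cache are replaced by in-memory stand-ins, and all names are my own, not from Gary’s talk.

```python
# Sketch of the batch-job service pattern: submit, cache, expose.
# FakeCluster, `events`, and `cache` stand in for Hadoop, an event
# bus, and Redis respectively — every name here is illustrative.

events = []   # stand-in for an event bus
cache = {}    # stand-in for a Redis caching layer


class FakeCluster:
    """Stand-in for a Hadoop cluster: 'runs' a job synchronously."""

    def run(self, job_id, data):
        # Pretend this is a heavyweight distributed batch computation.
        return sum(data)


def submit_job(cluster, job_id, data):
    """Service 1: define and submit the job, publishing a 'started' event."""
    events.append(("started", job_id))
    result = cluster.run(job_id, data)
    cache_result(job_id, result)
    return result


def cache_result(job_id, result):
    """Service 2: cache the processed result, publishing a 'ready' event."""
    cache[job_id] = result
    events.append(("ready", job_id))


def get_result(job_id):
    """Service 3: expose the cached result to any client that needs it."""
    return cache.get(job_id)


submit_job(FakeCluster(), "daily-totals", [1, 2, 3, 4])
```

In a real deployment each function would be its own independently deployable service, with the event bus decoupling them so clients can react to "started" and "ready" without polling.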
For real-time processing, something similar can be achieved using Storm, where a topology of spouts and bolts can be composed from a system of small, independent services.
I particularly liked his point that the problem of storing and processing large volumes of data was now a solved one, through tools such as Hadoop and Storm:
Both the storage of large volumes of data, and the processing of that data, is now a function of the size of your cheque book.
Interestingly, there were lots of people saying “I know all about microservices” at the beginning, but almost no one knew about the two canonical ways of deploying these sorts of architectures: canary releases and blue-green deployments. In discussing this with Gary at the end, we came to the conclusion that although most people had heard of microservices, and even knew enough about them to understand the basic concepts, advantages and disadvantages, very few had got as far as needing to know the specifics of how to deploy these sorts of applications.
Breaking Bad: The Current State of Agile in 10 Easy Lessons
By Sander Hoogendoorn
Sander gave an excellent talk on his thoughts of the current state of agile development, based on his experiences over recent years. It resurfaced some thoughts and ideas that have recently been going around my head in terms of the application of “agile” to our development practices.
Although Sander didn’t use the terms directly, his session was effectively a discussion of the differences between the lowercase-a agile that the authors of the original Manifesto envisaged and the productised, commercial version of capital-A Agile that we often see.
Being agile is not a destination, nor is it a package of software and guidelines that you can take from the shelf and blindly apply. It’s a continuous process of reflection and learning to help you improve the way in which you develop software, with the sole goal of producing better software.
My coverage of this session got quite long, so I’ve broken it out in to a separate article, which I’ll post shortly.
The Docker Revolution: Microservice Container Architecture
By Uri Shaked
Having briefly dabbled in the world of Docker, I wasn’t quite sure what to expect from Uri’s talk: was it going to be a very high-level overview, or was there going to be a bit more in-depth discussion on the actual use of the Docker ecosystem? Thankfully, it was both.
He started off with the basics of VMs vs. containers, before breaking down the components of the Docker ecosystem (Engine, Compose, Kitematic, Machine etc). He then stepped through an example of building a Docker image and running an individual container, then on to running two containers and linking them together (using Wordpress and MySQL). It was good to then see him go on to demonstrate the use of Docker Compose to both orchestrate the creation of multiple containers, and to quickly scale the number of containers used.
He finished up with an explanation of Google’s Kubernetes, and a demonstration of a simple use case for it. Personally I would have liked to see more of this production deployment side of things, including service discovery etc, but it was good to see it mentioned at all.
The Science of Technical Debt
By Brian Randell
To round off the conference, Brian gave a very engaging talk on the potentially-dry subject of identifying, quantifying and dealing with code-level technical debt. He described some tools for estimating levels of technical debt (static analysis, code clones, coverage reports, quality metrics, application insights such as exceptions, etc), and made several points about keeping quality high:
- The development team owns quality, which requires buy-in from the whole team; the team defines and enforces code quality.
- Short feedback loops for developers to understand whether what they’ve built is correct and solves the problem.
- Setting a unified quality gate that all code must pass, even if this is as simple as “compiles without any errors or warnings.”
- Focus on the definition of done, specifically around what “done” code looks like.
From there, he introduced the tool SonarQube (which, unfortunately, doesn’t support Ruby as a language), which gives an objective measure of the level of technical debt within a system. You can then use this data as part of an actionable plan to reduce your technical debt:
- Collect code quality data (code analysis issues, metrics, coverage reports etc).
- The amount of data collected is likely to be overwhelming, especially for a brown-field product, so the next step is to define a quality profile or lens: the rules and data relevant to your team and project.
- This filtered set of data is probably still too voluminous, given that there are probably a whole heap of issues that already exist in your codebase. As such, it’s important to set a baseline for the point you’re going to work from (the last commit, specific version etc), and agree this with the team.
- You then need to define acceptable thresholds with the team for what “good” is, and apply appropriate quality gates in your process to ensure that new code doesn’t cross these thresholds.
- With those gates in place, it’s time to define a remediation policy to deal with cases where the threshold is crossed. The key goals are to clean up as you go, and not make things worse!
- Once we’re not making things worse, we can identify the older issues that need to be focussed on and cleaned up.
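The baseline-plus-gate idea from the steps above can be sketched very simply: given a set of issues and the point each was introduced, only issues newer than the agreed baseline count against the gate. The issue structure and threshold here are made up for illustration and don’t reflect SonarQube’s actual data model.

```python
# Sketch of a baseline-based quality gate: only issues introduced
# after the agreed baseline count against the threshold, so old
# debt doesn't fail the build but new debt does.

BASELINE = "2015-10-01"   # agreed with the team: the point to work from
MAX_NEW_ISSUES = 0        # gate: new code must not add issues


def gate_passes(issues, baseline=BASELINE, limit=MAX_NEW_ISSUES):
    """Return True if the number of post-baseline issues is within the limit.

    ISO-8601 date strings compare correctly as plain strings.
    """
    new_issues = [i for i in issues if i["introduced"] > baseline]
    return len(new_issues) <= limit


issues = [
    {"rule": "unused-variable", "introduced": "2014-03-12"},  # pre-baseline
    {"rule": "magic-number",    "introduced": "2015-11-02"},  # new debt
]
```

This is essentially the “clean up as you go, don’t make things worse” policy as an executable rule: the pre-baseline issue is tolerated for now, while the new one would trip the gate.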
One thing I did take away from this talk (and others) is that the quantity and quality of tooling available for Microsoft- and Java-based developers far outweighs that available for more dynamic language bases such as Ruby and Python.
Given that the focus of the talk was on the lower-level code elements of technical debt, I’d be interested in hearing Brian’s thoughts on similar techniques that we may be able to apply to identify, measure, and mitigate technical debt at a system design or architecture level. Maybe one for the next conference?
Overall, the quality of the conference sessions was very high. Although two I attended were disappointing (to me), the rest were impressively delivered with excellent content.
I did notice that both the talks and attendees were very “enterprisey,” generally living in either the Microsoft or Java ecosystems. It would be nice to see some talks that dealt with the concept of architecture outside of these domains. Of course, there’s always a good chance that those sessions were just the ones I didn’t happen to see… Perhaps I could think of a topic to submit myself, next year?
In general: excellent conference, A+++, would go again.