Software development is hard. Sure, there are things that can make your life easier (e.g. containers or ubiquitous language), but sadly there is "No Silver Bullet", as Frederick P. Brooks Jr. concludes in his 16-page essay. With our advances in technology, development becomes easier and faster. But some things may not bring the redemption we hoped for (like "automatic" programming, or even OOP and the somewhat newer AI).
One of the more promising members of the redemption-club is the "Great Designer" (p.15) of the software system. They build software "faster, smaller, simpler, cleaner […] with less effort". Today, we call someone with the skillset described by Brooks a "software architect".

In 2019, I went to a great summit in Munich where Trisha Gee (@trisha_gee) gave the keynote about the skillset required for a software architect. I want to share her insights mixed with my views here:

Master of communication

The software architect is a master of communication. Obviously, this is not limited to verbal communication, but also includes writing skills. Writing does not stop with good programming and documentation skills; e-mails, Slack and Twitter matter too! Asking questions like "what are we building?" and "what skills does the team have?" is as important as listening to the answers and translating them into software.

"Your code does not speak to the machine. It speaks to the next one who reads it!"

Talk to different people. Talk to developers, domain experts and users. Try to get a feel for their problems, challenges and constraints within their domain.

Adaptability & open minded-ness

Be open-minded! There are a thousand views on a simple topic. Users and domain experts might change their minds rapidly; technology and processes change. It is your job to sort these things, estimate their impact and derive actions.

It’s not the year of K8s!

No, Kubernetes, AI and agile development are not the magic solution to every problem. Always learn what’s needed.

Prioritization & time management

We all work on projects. There is always too much work for too few people – deal with it. Allocate time for yourself. Make a plan for your work, for time at home and for absolutely free time. Mental health is an essential part of a "Great Designer". As an architect, your time is limited and valuable. You cannot learn everything, but try to keep up.

Stay technical

Most of the things up to this point are non-technical. But be careful; do not underestimate the "Business Analyst Movement". Trisha points out that women in particular are too often pushed into non-technical, softer roles. Don’t become a PM, stay an architect.

Scale out

At some point in the history of software engineering we understood that scaling out may be better than scaling up [Admiral Grace Hopper]. The same applies to great engineers. Instead of just getting better yourself, help others to get better.

If you want to be 10 times more productive, teach 9 people your skillset.

Use "pair programming" more often, but do not stop at development. Do it for deployments and troubleshooting with a DevOps engineer and for domain building with a business analyst. Code reviews and walkthroughs "are not for finding bugs only – they are about sharing information and writing the best system you can". If your company supports it, Trisha recommends 20% time. Another idea to share are book clubs where five people read a book – one or two chapters each and tell the others about key information in their part. This way everybody can get a little knowledge and decide if it’s worth reading the whole thing.

"Nobody knows how good you are! Teaching makes you look good."

There are different ways of teaching and being taught. You can teach in internal, informal (or less formal) sessions during lunch time, often called "brown bag sessions" or "lunch and learns".

Visit user groups and speak at conferences. As usual, there are pros and cons for each format. Decide what’s best for you.

If you don’t like sharing with strangers, share with your colleagues. This way you avoid overly narrow specialization and knowledge silos.

Retention and recruitment

Being a good architect means finding new projects and interesting topics in your environment. That is the easy part. Also, watch out for new colleagues and keep your team(s) happy! Be a good role model, a paragon of the great designer.

Community support

We love Stack Overflow! We visit conferences and we gather at meetups. You cannot explore every technology yourself – especially not as an emerging architect. You need to consume what the community provides, but you also need to give back. You can talk about your personal challenges when your first big project failed, or you can contribute to an open source project: maybe there are easy enhancements for your favorite JavaScript library, or you build a Python wrapper for a public REST API.
Do you like Goldman Sachs? Probably. But aren’t they an evil banking company? Probably. Nevertheless, their developers are avid contributors to Java libraries. They published their enhanced version of the Java collections framework (GS Collections) and influenced a lot of things like the Java Streams API.
The same goes for Microsoft. They open-sourced their .NET Core platform as part of the .NET Foundation and publish the code of the best IDE ever created on GitHub.

As data-driven and AI-first applications are on the rise, we extend our best practices for DevOps and agile development with new concepts and tools. The corresponding buzzwords are continuous intelligence and continuous delivery for machine learning (CD4ML).

For our current project, we researched, tried different approaches and built a proof of concept for a continuously improved machine learning model. That’s why I got interested in this topic and went to a meetup at the ThoughtWorks office. Christoph Windheuser (Global Head of Artificial Intelligence) shared their experience in this field and gave a lot of insights. The following post summarizes these thoughts [1] with some notes from our learning process.

CD4ML continuous intelligence cycle

The continuous intelligence cycle

1- Acquire data

Get your hands on data sets. There are multiple ways, most likely the data is bought, collected or generated.

2- Store, clean, curate, featurize information

Use statistical and explorative data analysis. Clean and connect your data. At the end, it needs to be consumable information.

3- Explore models and gain insights

You are going to create mathematical models. Explore them, try to understand them and gain insights in your domain. These models will forecast events, predict values and discover patterns.

4- Productionize your decision-making

Bring your models and machine learning services into production. Apply your insights and test your hypotheses.

5- Derive real life actions and execute upon

Act on the knowledge you have gained. Follow up with your business and gain value. This generates new (feedback) data. With this data and knowledge, you go back to step one of the intelligence cycle.

Productionizing machine learning is hard

There are multiple experts collaborating in this process cycle: data hunters, data scientists, data engineers, software engineers, (Dev)Ops specialists, QA engineers, business domain experts, data analysts, software and enterprise architects… For software components, we mastered these challenges with CI/CD pipelines, iterative and incremental development approaches and tools like Git, Docker and orchestrators. However, in continuous delivery for machine learning we need to overcome additional issues:

  • When we have changing components in software development, we talk about source code and configuration. In machine learning and AI products, we have huge data sets and multiple types and permutations of parameters and hyperparameters. GitHub, for example, denies git pushes with files bigger than 100 MB. Additionally, copying data sets around to build/training agents is more time-consuming than copying some .json or .yml files.
  • A very long and distributed value chain may result in a "throw over the fence" attitude.
  • Depending on your history, you might need to think more about parallelism in building, testing and deploying. You might need to train different models (e.g. a random forest and an ANN) in parallel, wait for both to finish, compare their test results and only select the better-performing one (see the sketch after this list).
  • Like software components, models must be monitored and improved.
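
To illustrate that last point, a pipeline step that picks the better-performing model could look roughly like this TypeScript sketch (the model names, the accuracy metric and the artifact field are made up for illustration):

interface TrainedModel {
  name: string;          // e.g. "random-forest" or "ann" – hypothetical identifiers
  testAccuracy: number;  // metric reported by the test stage
  artifactUri: string;   // where the serialized model was stored
}

// Once both training jobs have finished, keep only the better-performing candidate.
function selectBestModel(candidates: TrainedModel[]): TrainedModel {
  return candidates.reduce((best, current) =>
    current.testAccuracy > best.testAccuracy ? current : best
  );
}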

The software engineer’s approach

In software development, the answer to this is pipelines with build steps and automated tests, deployments, continuous monitoring and feedback control. For CD4ML the cycle looks like this [1]:

CD4ML Pipelines

There is a rapidly growing demand on the market for tools to implement this process. While there are plenty of tools, here are examples of well-fitting tool chains.

stack | discoverable and accessible data | version control | artifact repositories | CD orchestration (to combine pipelines)
Microsoft Azure | Azure Blob Storage / Azure Data Lake Storage (ADLS) | Azure DevOps Repos | ADLS | Azure DevOps Pipelines
open source with Google Cloud Platform [1] | Google Cloud Storage | Git | DVC | GoCD

stack | infrastructure (for multiple environments and experiments) | model performance assessment | monitoring and observability
Microsoft Azure | Azure Kubernetes Service (AKS) | Azure Machine Learning services / MLflow | Azure Monitor / EPG *
open source with Google Cloud Platform [1] | GCP / Docker | MLflow | EFK *

* Aside from general infrastructure (cluster) and application monitoring, you want to do the following (a sketch of such an experiment record follows this list):

  • Keep track of experiments and hypotheses.
  • Remember which algorithms and code versions were used.
  • Measure duration of experiments and learning speed of your models.
  • Store parameters and hyperparameters.
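
The points above could be captured in a small experiment record; a minimal sketch (field names are illustrative and not tied to any specific tool) might look like this:

interface ExperimentRun {
  hypothesis: string;                            // what we are trying to show
  algorithm: string;                             // e.g. "random forest"
  codeVersion: string;                           // git commit hash of the training code
  parameters: Record<string, string | number>;   // parameters and hyperparameters
  startedAt: Date;                               // used to measure experiment duration
  finishedAt: Date;
  metrics: Record<string, number>;               // e.g. { accuracy: 0.93 }
}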

The solutions used for this are the same as for other systems:

stack | search engine | log collector | visual layer
EFK stack | Elasticsearch | Fluentd | Kibana
EPG stack | Elasticsearch | Prometheus | Grafana
ELK stack | Elasticsearch | Logstash | Kibana

[1]: C.Windheuser, Thoughtworks, Slideshare: https://www.slideshare.net/ChristophWindheuser/cd4ml-thoughtworks-meetup-munich-christoph-windheuser-may-8th-2019

Introduction

Most people want to learn new things. It could be a new skill, a new hobby or simply broadening your general knowledge. We have this desire to learn and grow, yet we struggle to find the discipline to achieve our learning goals. I’m sure all of us can attest to learning intentions – be it from a new year’s resolution or some other source of inspiration – that died a silent death along the roadside.

So, why can’t we achieve our learning goals? A complex question indeed, but a part of the problem is that we have to actively do something in order to get where we want to be. For example, if you want to learn a new language, then you have to open a book and read; or log onto a website and complete the lessons and tests; or go to evening classes at your local school or college. And there are more than enough challenges in our lives that prevent us from doing this diligently! But what if we could still learn without actually doing something actively, just by engaging in our normal daily routine?

Active vs. Passive Learning

As already mentioned, Active Learning means that you have to initiate an action by yourself to achieve a desired learning objective. It is a decision and a discipline that you have to set into motion by yourself.

Passive learning, on the other hand, means that you learn without initiating something by yourself. To explain this in more detail, let’s continue with the example of learning a new language. Learning a new language actively means that you have to read a book or go to a class. Now, let’s try to find an example of how you could learn a new language passively.

Let’s say that it takes you an hour to drive to work every day. In this time, you could listen to audio tapes or CDs that help you to learn a new language. You are going to drive to work anyway, so why not use this time to learn something new? This is a good initial example to dive into the idea of passive learning, but it still has some shortcomings: you have to make the decision to switch the language CD on instead of listening to your favorite music or the radio (even though you are not doing anything actively once it is switched on); and audio alone is not necessarily enough to learn a new language, since you might also want to look at the grammar structures and the alphabet of the new language. But at least we have made some progress. We don’t have to open a book or go to a class anymore. In the next section, we look at how we can use technology to further expand the idea of passive learning.

Technology and Passive Learning

The digital era is upon us. Technology is pervasive throughout society. As a result, we also consume large amounts of information electronically. We surf the web to inform ourselves about topics that interest us; we read the news online; and we use a variety of messaging systems and social media – to name just a few! These are things that we do every day as part of our routine. So, can we build in a Passive Learning experience while going about our daily routine? The answer is yes, and in the next section we illustrate how this can be achieved by means of a practical example.

Technology and Passive Learning: A Practical Example

In this section we will look at a practical example, again in the context of learning a language. Vocabulary is an important building block in the language learning process. Within a learning context, it is important for us to map words from one language to another so that we can learn the vocabulary of the new language. Flash cards often get used to achieve this goal. The idea is simple: you have a word on one side of a card and you flip the card to see the meaning of this word in another language.

Flash card software (flipping the card with a mouse click) has also been around for a long time. The problem is that this still requires the Learner to be motivated and do something actively. So, we need to find a way in which the Learner can get exposed to the new vocabulary in a passive way.

As mentioned in the previous section, we consume large amounts of information electronically these days. Let’s say that we consume online information in English and we want to learn German. Our proposal is to develop a web browser plugin that will replace some of the English words on websites with German words. Selecting the correct number of words to replace is important, since it should still be easy for the reader to understand the text without too much effort. As a starting point, our suggestion is to replace only 10% of the nouns. The image below has some sample text that shows the difference between the original website and the transformed website. You should still be able to understand the content of the transformed website without too much additional effort. Try it out for yourself!
ArtOuput1

From a programmatic point of view, it is not difficult to tokenize and extract the nouns in a piece of text. Most programming languages have either built-in capabilities or third-party libraries that do just that. Below is a JavaScript code snippet (using the pos-tag lib) illustrating this concept.

fs.readFile('input.html', 'utf8', (err, data) => {  
    if (err) throw err; 
    const result = pos(data);
    //extract all the nouns – pos stands for ‘part of speech’
    const nouns = result.filter(item => item.pos === 'NN');        
    nouns.forEach((item) => {
        //get the translation of the extracted nouns    
        var trResult = getTranslation(item.word, 'en', 'de');
        data = data.replace(item.word, '<strong>' + trResult.translation + '</strong>');

    });
});
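
The snippet above replaces every noun it finds. To stay closer to the 10% suggestion from earlier, one could sample the extracted nouns before replacing them – for example by swapping the forEach in the callback above for something like this sketch (again relying on the hypothetical getTranslation helper):

// keep roughly 10% of the extracted nouns, chosen at random
const sampleRate = 0.1;
const sampledNouns = nouns.filter(() => Math.random() < sampleRate);

sampledNouns.forEach((item) => {
    const trResult = getTranslation(item.word, 'en', 'de');
    data = data.replace(item.word, '<strong>' + trResult.translation + '</strong>');
});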

Completing the circle: Reintroducing Active Learning into the Passive Learning Experience

So far, we have been making good progress in creating a passive language learning experience. But we can go even further!

The idea is to reintroduce a form of active learning back into our current model. The words that we replaced in our original source text will be created as hyperlinks; when the Learner clicks on one of these words, we will provide more information about the word.
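Staying with the earlier snippet, the replacement step could emit a link instead of a plain <strong> tag; wordnetUrlFor is a hypothetical helper that would build the URL of the WordNet browser described below:

// inside the loop over the sampled nouns: wrap the translation in a hyperlink
data = data.replace(
    item.word,
    '<a href="' + wordnetUrlFor(trResult.translation) + '">' + trResult.translation + '</a>'
);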
ArtOutput2
In our case, we will link to a WordNet browser. WordNet is kind of like an intelligent electronic dictionary that, amongst other things, provides synonyms and word meanings in context. The image below is an example of a popup WordNet browser that would be displayed once the Learner clicks on one of the hyperlinked words in the source text.
WordnetBrowser

The active learning that takes place here is different from the active learning as described earlier. In this case, the Learner would click on the hyperlink out of curiosity and consequently also learn something. It differs from the scenario described earlier, in the sense that the Learner does not have to find some kind of internal motivation to set the learning process into motion. The learning happens as a result of curiosity that was generated by the embedded Passive Learning experience.

Conclusion

Passive Learning ideas can be embedded into technology that we are using on a daily basis. We illustrated how passive learning can be used in the context of language learning as part of our daily web browsing experience. We also showed how Active Learning can be reintroduced into the learning process as a result of the Passive Learning context in which the Learner is operating. This example only scratches the surface of what is possible when combining passive learning and technology. Some questions – to name but a few – that come to mind for possible future work in this area are the following:

  • Can the idea be introduced into messaging platforms such as Skype, Slack and WhatsApp? These messaging technologies are pervasive and get used by millions of people on a daily basis.
  • We should also be able to expand the idea so that it applies to a variety of language pairs. Also, we only looked at replacing a certain percentage of nouns in the text, but we could also include adjectives, adverbs and verbs, and make this configurable to suit the Learner’s needs. The image below illustrates how such a configurable setup could look (a small type sketch follows this list).
    Configuration
  • And finally, what about other learning domains? Can we make adjustments so that the Passive Learning experience is also possible in other domains such as Math, Engineering, Biology and Social Sciences?
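
Regarding the configurable setup mentioned in the second point above, the plugin settings could be captured in a small type like this sketch (the field names are purely illustrative):

interface ReplacementConfig {
  sourceLanguage: string;        // e.g. "en"
  targetLanguage: string;        // e.g. "de"
  replacementRates: {            // share of words to replace, per word class
    nouns: number;               // e.g. 0.1 for 10%
    verbs: number;
    adjectives: number;
    adverbs: number;
  };
}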

I went to a great session about CQRS, Event Sourcing and Domain-driven Design (DDD) at the Software Architecture Summit. The speaker Golo Roden (@goloroden) did a fantastic job selling these concepts to his audience with a great storytelling approach. He explained why CQRS, Event Sourcing and DDD fit together perfectly while replicating www.nevercompletedgame.com for his daughter. This is what he shared with us.

Domain-driven Design

The more enterprise-y your customer, the weirder the neologisms get.

We – as software engineers – struggle to understand business and domain experts. Once we understand something, we try to map it to technical concepts. Understood the word "ferret"? Guess we need a database table called "ferret" somehow. We then proceed to inform our business colleagues that "deploying a new schema is easy as we use Entity Framework or Hibernate as OR mapper". They think we understood, we think they understood. Actually, nobody understood anything.
As software engineers we tend to fit every trivial and every complex problem into CRUD operations. Why? Because it’s "easy" and everyone does it. If it were that easy, software development would be effortless. Rather than trying to fit problems into a CRUD pattern, we should transform business stories into software.
That’s why we should use domain-driven design and ubiquitous language.
Golo Roden proceeds to create a view of the nevercompletedgame with ubiquitous language, so nobody asks "what does opening a game mean" and there is no mental mapping.
I won’t go into detail here, but an example can show why we need this.

  • Many words for one meaning: When developing software for a group of people, sometimes we call them users, sometimes end users, sometimes customers etc. If we use different words in the code or documentation and developers join the project later, they might think there is a difference between these entities.
  • One word has many meanings: Every insurance software has "policies" somewhere in its system. Sometimes it describes a template for a group of coverages, sometimes it’s a contract underwritten by an insurer, sometimes a set of government rules. You don’t need to be an expert to guess this can go horribly wrong.

CQRS

Asking a question should not change the answer

Golo Roden jokes, "CQRS is CQS on application level", and it is actually easy to understand this way once you have read a single article about CQS. Basically, it’s a pattern where you separate commands (writes) and queries (reads): CQS.

  • Writes do not return any values and change the state of an object.
    stack.push(23); // pushes value 23 onto the stack; returns nothing
  • Reads return a value and don’t change the state.
    stack.isEmpty() // does not change state; returns a boolean
  • But don’t be fooled! Stacks do not follow the CQS pattern.
    stack.pop() // returns a value and changes state

Separating them on application level means exposing different APIs for reading (return a value; do not change state) and writing (change state; do not return a value *). Or phrased differently: segregate responsibilities for commands and queries: CQRS.

* For HTTP: the write side simply returns 200 before doing anything.
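
A minimal sketch of what this separation could look like in code – the game-related names are made up for illustration and not taken from Golo Roden's implementation:

// Commands change state and return nothing.
interface GameCommands {
  openGame(playerId: string): void;
  submitAnswer(gameId: string, answer: string): void;
}

// Queries return data and must not change state.
interface GameQueries {
  getCurrentLevel(gameId: string): number;
  getCompletedLevels(gameId: string): number[];
}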

Enforcing CQRS could have this effect on your application:

For synchronization, see patterns like the saga pattern or two-phase commit. For further reference see: Starbucks Does Not Use Two-Phase Commit

Event Sourcing

When talking about databases (be it relational or NoSQL), we often save the current state of some business item persistently. When we are ambitious, we save a history of these states. Event Sourcing follows a different approach: there is only one initial state, change requests to this state (commands) and subsequent manipulating operations (events). When we want to change the state of an object, we issue a command. This triggers an event (that is published to some kind of queue) and is most likely persisted in a database.

Bank account example: we start with 0 € and do not change this initial value when we add or withdraw money. We save the events, something like this:

Date | EventId | Amount | Message
2019-01-07 | e5f9e618-39ad-4979-99a7-342cb1962266 | 0 | account created
2019-01-11 | f2e98590-7795-4cf7-bdc2-1794ad39874d | 1000 | manual payment received
2019-01-29 | cbf44bfc-7a5e-4514-a906-a313a6e0fb5e | 2000 | salary received
2019-02-01 | 32bc638c-4783-45b8-8c1e-bebe2b4528a1 | -1500 | rent paid

When we want to see the current balance, we read all the events and replay what happened.

const accountEvents = [0, 1000, 2000, -1500];
const replayBalance = (total, val) =>  total + val;
const accountBalance = accountEvents.reduce(replayBalance);

Every n (e.g. 100) events we save a snapshot so we do not have to replay too many events. Aside from the increased complexity, this has some side effects which should not go unmentioned.

  • As we append more and more events, data usage grows endlessly. There are ways around this, like removing "old" events and replacing them with snapshots, but this undermines the intention of the concept.
  • Additionally, as more events are stored, the system gets slower because it has to replay more events to get the current state of an object. Snapshotting every n events, though, gives you a deterministic maximum replay time (see the sketch after this list).
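
A minimal sketch of the snapshot idea, reusing replayBalance from the snippet above (the snapshot values are made up to match the account table):

// snapshot taken after the first three events: balance 0 + 1000 + 2000
const snapshot = { balance: 3000, lastEventDate: '2019-01-29' };
// only the events recorded after the snapshot still need to be replayed
const newerEvents = [-1500];
const accountBalance = newerEvents.reduce(replayBalance, snapshot.balance); // 1500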

While there are many arguments against it, there is one key benefit that makes it worthwhile: your application is future-proof, as you save "everything" for upcoming changes and new requirements. Think of the account example from the previous step. You can implement/analyze all of the following:

  • "How long does it take people to pay their rent once they got their salary"
  • "How many of our customers have two apartments? How much is the difference between both rents?"
  • "How many of our customers with two apartments with at least 50% in price difference need longer to pay off their car credit?"

To sum it up and coming back to our initial challenge: with Domain-driven Design, CQRS and Event Sourcing, our simple CRUD application would have transformed into an architecture like this:

While this might solve some problems in application and system development, it is neither a cookie-cutter approach nor "the right way" to do things. Be aware of the rising complexity of your application, system and enterprise ecosystem and the risk of over-engineering!

Solid file management is the first step towards good data quality in a data warehouse. How can file management be implemented in the Modern Data Warehouse? One approach is Azure Data Factory together with Azure Databricks. Azure Functions offer a good alternative. Before I show you the solution, I want to narrow down the problem a bit.

File Management

Very different types of files often have to be imported into a data warehouse.
CSV files are quick to export and quick to import for large data volumes, compared to HTTP interfaces.
XLSX files are common when an interface involves a manual process and a user edits the data in Excel.
XML files allow complex data structures to be exchanged between systems.
A proven practice is to capture metadata for interface files, for example when a file was registered and loaded. At the same time, these files should be archived. Sometimes the files also need to be converted.

The Modern Data Warehouse

If you build a classic data warehouse architecture in the cloud, there is hardly any way around Infrastructure as a Service. The cloud offers its greatest potential with Platform as a Service, however, because fewer resources are consumed when idle and costs can therefore drop.
A data warehouse architecture that can be built on Platform as a Service is the Modern Data Warehouse proposed by Microsoft.

It looks like this:
Modern Data Warehouse diagram

Azure Functions

Azure Functions are stateless web services that can be registered for a wide variety of events and then perform a specific task. Events can be schedules, HTTP requests or changes in an Azure Blob store. Costs are incurred per function invocation and therefore scale well with the actual load.

Azure Functions

Problem and solution approach

The data store in the Modern Data Warehouse is not a relational database but a data lake. At its core, a data lake is a file system.
Data integration places files in the lake, and consumers read those files. It is essential that the data is stored in an orderly way. Where possible, the data must be partitioned so that not all files have to be scanned on access.

Which component now takes care of file management? Actually, Azure Data Factory should cover this aspect, but it is still quite limited and better suited to simple data movement. Alternatively, file management can be implemented in Databricks.
Azure Functions, however, are cheaper and no less capable here.

If you expect a few hundred files to be processed per day, you will hardly exhaust the free tier of 1 million executions.
Another advantage over Azure Data Factory is that you can use full-fledged programming languages such as C# or Python. With concepts like inheritance, a large part of the code can be reused, and only very little additional code has to be written per file type.

Design

The design is simple. There are three Azure Storage accounts: one for the input, one for the archive and one for the data lake. In the input and in the archive there is one blob container per file type. In the lake storage there is only one shared container.
Now a file is dropped into the input. In my example this is done by a Microsoft Flow that stores an attachment from an e-mail with a specific subject, but it could just as well be an AzCopy call from a service provider.
An Azure Function is registered for new files in the input blob container and is triggered (point 1 in the following figure).
The function creates a directory in the archive container with the current timestamp and copies the file into the new directory unchanged. This way no files can be overwritten or lost (step 2).
In the lake container there is one directory per file type.
The function gives the file a new unique name, converts the character set if necessary and stores the file in the lake storage.
In addition, the metadata is stored as its own file type that can be joined during analyses (step 3). Finally, the file is deleted from the input.
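
The post implements this in C# (as mentioned above, C# or Python are the obvious choices). As a rough sketch of the trigger shape, here is what the function could look like with the Node.js programming model in TypeScript; archiveFile, convertAndCopyToLake, writeMetadata and deleteFromInput are hypothetical helpers standing in for the File class logic:

import { AzureFunction, Context } from "@azure/functions";

// Hypothetical helpers corresponding to steps 2 and 3 described above.
declare function archiveFile(name: string, content: Buffer): Promise<void>;
declare function convertAndCopyToLake(name: string, content: Buffer): Promise<string>;
declare function writeMetadata(name: string, lakePath: string): Promise<void>;
declare function deleteFromInput(name: string): Promise<void>;

// Triggered by new blobs in the input container (step 1).
const fileImport: AzureFunction = async function (context: Context, inputBlob: Buffer): Promise<void> {
  const fileName = context.bindingData.name as string;             // name of the blob that triggered the function

  await archiveFile(fileName, inputBlob);                           // step 2: archive the unchanged file under a timestamp
  const lakePath = await convertAndCopyToLake(fileName, inputBlob); // step 3: rename, convert and store in the lake
  await writeMetadata(fileName, lakePath);                          // step 3: store the metadata as its own file type
  await deleteFromInput(fileName);                                  // finally: remove the file from the input container
};

export default fileImport;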

File Management - Flow

The structure of the function is shown in the following figure. The entire logic resides in the File class and is therefore reusable. The specific class only defines which container the input comes from and in which lake directory the data should be stored. The metadata is implemented as its own uniform class and can thus easily be serialized and stored in the lake.

File Management - Structure

Demo

For the demo I created a DEV environment in Azure. In addition to the services explained above, the Azure Functions need another storage account and an App Service plan. Application Insights is used as well to monitor the solution.

Resource Group

Application Insights also shows that the solution is actually running.

Application Insights

In the Storage Explorer you can then see how the files are stored in the lake. For each input file, the content was stored as CSV and the metadata as JSON, identifiable via a GUID.

Lake - files
Lake - zep

How does automated deployment of data warehouse systems work? In projects following a DevOps approach, continuous integration and continuous deployment play an important role for working efficiently. The first step towards continuous deployment is deployment automation, and data warehouse systems are no exception. One prerequisite for this is version control of the sources. Each version can automatically be turned into a release, which includes creating an installation package from the sources. Deployment then means updating a system environment (production or test system). This simplifies quality assurance in particular, which in turn improves the changeability of the system and ultimately increases quality itself.

CI and CD

System definition and versioning

For SQL Server-based data warehouse systems, the system definition consists of Visual Studio projects. There are Visual Studio project types for SQL databases, Integration Services, Reporting Services and Analysis Services. That alone, however, is not sufficient for the automated deployment of complete solutions. Projects for SQL Server Agent jobs and linked servers are missing. Integration Services packages often use PowerShell scripts for file management and dummy files for package validation; there is no standard solution for that either. Reporting Services projects do not support report subscriptions with data-driven subscriptions, which are often used for automated exports of XLSX or PDF files. Another problem with these project types is that there is no suitable way to define how the projects relate to each other, so that a complete solution can be deployed and the environment-specific reports can be linked to databases and ETL processes.
These shortcomings have to be compensated for with additional scripts and configuration files. A build script creates a release archive with all the files required for deployment. A deployment script takes the release archive and the name of the target environment as parameters. A deployment configuration contains the information for linking the components. An environment configuration contains the environment-specific parameters such as host names and user names.

DWH Release

Implementation

There are various ways to implement this approach. XML files have proven themselves for hierarchical configurations because of their good readability and their support for internal variables, which JSON, for example, does not offer. Tabular configurations, such as environment-specific parameters, are well suited to CSV files, since these are easy to process and easy to maintain in Excel.
The build and deployment steps can be implemented efficiently in PowerShell. Graphical tools such as Microsoft Azure Pipelines or CA Application Release Automation are very cumbersome for fine-grained deployment steps and are at best suited to executing the deployment scripts.

Recently, I attended the JavaScript Days 2019 where I participated in two awesome workshops (TypeScript Deep-Dive and Advanced Black Magic in TypeScript) dealing with advanced TypeScript features. Within this blog post I want to share some of the findings and aha moments I had during these sessions.
Note that these learnings are very personal, not necessarily interrelated and often quite opinionated. I hope that you can nevertheless profit from my experiences. If you have any questions or want to discuss some of the more controversial topics, please leave a comment down below.

Contents

  • Where to and where not to add types
  • How to think about function signatures
  • Enums vs. Const Enums vs. Union Types
  • Immutability in TypeScript

Where to and where not to add types

Let’s start off with a pretty fundamental question: at which places should you add types to your plain JavaScript? As we all know, TypeScript allows us to type variables and function signatures. Furthermore, there is the possibility to write "OOP-style" TypeScript using well-known constructs like classes and interfaces together with concepts like inheritance and polymorphism.
Let’s suppose we want to start using TypeScript in our existing JavaScript project. Where should we start using types?
First of all, I would highly recommend that you turn on the compiler option noImplicitAny in your tsconfig.json. According to the comment in the boilerplate tsconfig.json, this option

raise[s] [an] error on expressions and declarations with an implied 'any' type.

Of course, the thing we as TypeScript developers hate the most is the any type. Once we come across e.g. a local variable of type any, we will not get any further information about it, and autocompletion or refactoring features don’t work – we are basically back to the stone age of plain JavaScript. With the noImplicitAny option turned on, the compiler warns us wherever type inference fails to do its job and points us directly to the places in our code where we need to add types.
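
A short example of what this option catches (a sketch):

// With "noImplicitAny": true, this fails to compile:
function greet(name) {            // error: Parameter 'name' implicitly has an 'any' type.
  return "Hello " + name;
}

// Adding the type fixes the error – and gives us autocompletion on 'name' again:
function greetTyped(name: string) {
  return "Hello " + name;
}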
When googling something like "TypeScript tutorial", nearly every introduction starts by explaining the syntax for typing a variable. You usually encounter something like this:

let message: string = "Hello World";

One of the very few exceptions to this rule is the official quickstart from the TypeScript team itself – it seems like they know what they are doing 😉
I would suggest: don’t annotate your variables with types. You don’t need to. Instead, use the powerful builtin type inference to your advantage. This is trivial for primitive types like in the example above.

let message = "Hello World";

This is semantically exactly the same as the previous line, since TypeScript infers that the variable message is of type string.
So don’t type variables explicitly, but rather type function parameters and return values and use some of the OOP constructs if required. However, there is one exception to this rule: literal types.
It makes sense to type these explicitly if you don’t want type widening to happen.
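
A small example of that exception (a sketch):

let method = "GET";                     // inferred as string – the literal type gets widened
const fixedMethod = "GET";              // inferred as the literal type "GET" – const prevents widening
let annotated: "GET" | "POST" = "GET";  // an explicit literal/union type keeps the narrow type on a let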

How to think about function signatures

Consider the following example:

interface Address {
  street: string;
  city: string;
}

interface Customer {
  name: string;
  dateOfBirth: Date;
  addresses: Address[];
}

function getAllAddressesInCity(city: string, customer: Customer) {
  return customer.addresses.filter(address => address.city === city);
}

Since we initially plan to call the function getAllAddressesInCity in the context of a customer, we require the function to always take a customer as input. However, the function doesn’t really operate on a customer, but rather on an Address array. Think about the plain JavaScript which the TypeScript compiler emits. It looks like this:

function getAllAddressesInCity(city, customer) {
  return customer.addresses.filter(function(address) {
    return address.city === city;
  });
}

Since none of the TypeScript types exist at run time, the compiled function doesn’t care at all whether the object we pass it really is a customer or something completely different, as long as it has an array of objects each containing at least a city property. So why don’t we generalize our initial implementation of the getAllAddressesInCity function a bit?

function getAllAddressesInCity<T extends { addresses: Customer["addresses"] }>(
  city: string,
  input: T
) {
  return input.addresses.filter(address => address.city === city);
}

Not limiting the inputs artificially, but rather describing what the function could in principle do, results in two distinct advantages. Of course, this style of coding leads to more reusability throughout your code base. E.g. we could easily use getAllAddressesInCity for vendors as well (assuming that they can indeed have multiple addresses).
The second benefit arises when it comes to testing our new function. Using our initial implementation we would have to mock a whole Customer object, even if the only thing we really need is an array of Addresses. With our more general implementation we can work with much smaller and therefore more manageable mocks. However, this second advantage becomes less important, when using a mocking framework like typemoq.
So in a nutshell, describe functions in terms of what they CAN do, instead of what they SHOULD do. This makes functions more reusable and mocking much easier.
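
For example, a test of the generalized function can get by with a minimal object instead of a full Customer mock (a sketch):

// No name and no dateOfBirth needed – just the addresses the function actually works on.
const input = {
  addresses: [
    { street: "Main Street 1", city: "Munich" },
    { street: "Lake Road 5", city: "Chicago" }
  ]
};

const munichAddresses = getAllAddressesInCity("Munich", input); // [{ street: "Main Street 1", city: "Munich" }]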

Enums vs. Const Enums vs. Union Types

Chances are you are using enums or const enums to organize a collection of related values you use throughout your codebase. If so, consider replacing your enums with union types. The following table compares enums, const enums and union types.

Type | Value type | Runtime artifact | Opaque
enum | number | Yes | No
const enum | number | No | No
string enum | string | Yes | Yes
const string enum | string | No | Yes
literal union type | arbitrary | No | No

As you can see, one big advantage of literal union types is that you can use them with arbitrary types (e.g. with object literals). Additionally, they don’t have a runtime artifact, which saves precious bundle size when developing frontends. The fact that they are not opaque means that you can e.g. directly assign the literal string "Western" to a variable of type GenreUnion – if using a (const) string enum, you would have to write GenreEnum.Western instead:

enum GenreEnum {
  Country = "Country",
  Western = "Western"
}
type GenreUnion = "Country" | "Western";

let westernFromEnum: GenreEnum = "Western"; //Type '"Western"' is not assignable to type 'GenreEnum'.
let westernFromUnion: GenreUnion = "Western";

Note that if using a modern code editor like VS Code, you don’t have to worry about fat-fingering the strings – you get autocompletion for them, and even if a typo does happen, you get an immediate compiler error.

Of course, there are some edge cases where you might need to use regular enums (e.g. you can’t iterate over union types), but most of the time, literal union types work just as well and provide the discussed advantages.
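
If you do need to iterate, a common workaround is to derive the union type from a readonly array of its values (a sketch):

const genres = ["Country", "Western"] as const;
type Genre = typeof genres[number]; // "Country" | "Western"

// Unlike the bare union type, the backing array can be iterated:
genres.forEach(genre => console.log(genre));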

Immutability in TypeScript

Let’s suppose we want an Address object to be immutable. The first thing which comes to mind could be to use the const keyword for local variables like this:

const address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Mutation of object properties is possible.

However, according to the specification,

[const] are like let declarations but, as their name implies, their value cannot be changed once they are bound. In other words, they have the same scoping rules as let, but you can’t re-assign to them.

So the const keyword does not help us with creating an immutable object; it merely prevents us from reassigning another object to the address variable.
A viable approach for making an object immutable would be to mark all its properties as readonly like this:

interface Address {
  readonly street: string;
  readonly city: string;
}

const address: Address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

If you don’t want your Addresses to always be immutable, a cleaner solution would be to use the built-in utility type Readonly, so that you don’t have to create a whole new interface (or class or type):

interface Address {
  street: string;
  city: string;
}

const address: Readonly<Address> = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

Starting with TypeScript 3.4 there is a third option for achieving immutability for objects: const assertions.

// Type '{ readonly street: "Elsenheimerstraße 53", readonly city: "Munich" }'
const address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
} as const;

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

Note that additionally the type of address is now extremely specific, since no type widening (e.g. no going from "Munich" to string) takes place.

Take-aways

  • Take advantage of the powerful type inference and only type variables explicitly if really necessary
  • Describe functions in terms of what they can do, instead of what they should do
  • Use literal union types instead of enums
  • const is not enough for creating an immutable object

Surely you’ve heard the fairytales about microservices and monoliths, or, on a similar note, the tales about distributed (big) balls of mud from people like Simon Brown.


Usually these posts point out what goes wrong and how inexperienced teams go for a "hype-driven/tunnel-vision architecture". But how do you actually cut microservices? How do you design interfaces? What are techniques to find weak points in your application or system architecture?
In this post I digest the intent and views of a talk given at the Software Architecture Summit in Munich.

Speakers

Herbert Dowalil, @hdowalil on Twitter
Stefan Zörner, @StefanZoerner on Twitter

Microservices vs. Monolith

The spectrum of architecture definitely isn’t as binary as most blog posts suggest. There are way more types; here is a short recap of some of the most famous ones.

Microservices

There are different definitions out there, but most of them share key points like a domain-driven module cut, outstanding flexibility and distribution over the network. One (generally accepted) FAQ-style definition comes from Jimmy Bogard.

Self-contained Systems (SCS)

Similar to microservices, but usually bigger services and fewer in a system.

Deployment Monolith (aka Modulith)

You have a valuable product, but it does not suit a SaaS business model? Most of your customers won’t offer a Kubernetes cluster, or you have a hard time building a secure deployment pipeline? This does not stop good architecture! You can still cut your software into great modules and deploy it as one package.

Architecture Monolith

Such a monolith has no defined structure. It consists of modules, but they are referenced across the whole system. Looks like this:

SOA

A blueprint for application architecture that produces standard services representing actual business process steps. The main benefit was supposed to be the orchestration of services into full processes and the composition of new processes.
While SOA might have failed for most, it is "survived by its offspring" (Anne Thomas Manes: SOA is Dead; Long Live Services). For a comparison between SOA and microservices see this O’Reilly report.

All these types focus on cutting a big block of software into modules. Microservices, for example, focus on the smallest autonomous boundaries, SOA on reusability and composition. Because of this, the following ideas on how to define modules can be used for microservices, but don’t stop there. You can use them to define Java packages, C++ namespaces or C# assemblies.

LET’S START

Often we start with questions like "How many parts do I need?" and "Where do I cut?"

Sometimes we don’t even know whether we should initially start with a monolith (Martin Fowler: MonolithFirst) or with microservices (Stefan Tilkov: Don’t start with a monolith). We can answer this particular question now: it doesn’t really matter when designing modules and interfaces. When you need a push towards microservices, think about the upcoming complexity. The speakers phrased it nicely:

Modularization makes complexity manageable. Investment at the beginning makes complexity manageable at the end.

For the first questions, we follow an enhanced version of the SOLID criteria by Robert C. Martin. Herbert Dowalil calls them the "5C". Such principles sound nice, but what does strong cohesion or loose coupling actually mean?

"If you can’t measure it – you can’t manage it"

The key here is using "old school" metrics like cyclomatic complexity or average relative visibility. That’s it; here are the 5C:

Cut

Cut your modules in a way where each has only one concern/a single responsibility. Try to maximize cohesion inside a module.
Metrics: e.g. LCOM4, relational cohesion, cyclomatic complexity

Conceal

Hide the internals of a module. Nothing outside the module needs to know about the technologies used or any work (changes) done inside.
Metrics: e.g. low relative visibility, low average relative visibility and low global relative visibility

Contract

Design small, specific and easy to understand interfaces between modules.
Metrics: e.g. Depth of Inheritance

Connect

Explicitly declare every connection between modules.
Metrics: e.g. RACD and NCCD from John Lakos; stability from the software package metrics.

Construct

Build new modules by connecting existing modules. Move from a lower level of modules to a bigger view of the system. Higher-level modules follow the same principles as lower-level ones.
Metrics: e.g. automated tools like ArchUnit, Sonargraph or Teamscale

For a deeper dive, see the "Architektur Spicker" (German only) and follow Herbert Dowalil (@hdowalil) on Twitter for updates on his new book.

Once you’ve cut your modules you can decide if you want to distribute them over the network. Pros can be:

  • Independent technological decisions:
    This removes technology choices from your macro architecture and lets you use specialized tools and languages for each module (like .NET for a web API and Python for the underlying data science, or Spring Boot for your user handling and C++ for a calculation core).
  • Technology roll-over:
    You can upgrade your technology components step by step. An upgrade of the Java or Node.js runtime might improve performance and eliminate security threats, or you can move a module from an outdated framework version to the newest one (AngularJS to Angular). This can be useful when a system runs longer than anticipated.
  • Developer autonomy:
    Every part of your software can be developed and deployed independently. This shortens feedback cycles and enables developers to evaluate ideas faster.

While this sounds awesome (and it is!), there are also downsides:

  • Troubleshooting and debugging:
    Following calls and processes through your distributed software is harder than debugging a single component. Tracing makes troubleshooting easier, but replicating an error state across your whole application can be hard.
  • Complicated Ops:
    Operating your software can be way harder. For example, there might be issues with creating secure pipelines and authorization concepts across company borders.
  • Consistency:
    There are ways to build around distribution with eventual consistency and sacrifice strong consistency. Examples would be two-phase commit (but you shouldn’t) or the saga pattern. While these work and have proven successful, the initial setup is harder and they certainly don’t simplify troubleshooting.

As usual, this trade-off needs to be evaluated individually.

Take-aways

Modularization is hard. Whether you decide to go with microservices or any other approach doesn’t really matter in this context. You can distribute your modules to enforce principles, but if you cut your modules wrong, things will only get harder.
Don’t be afraid of "old school" or "university-style" metrics. Identify metrics together with your team. Collaboratively search for weak points in your software (architecture). Enforce selected metrics, but never let a metric break the build and never dictate metrics top-down!

LITERATURE and FOLLOWUP READING

Das Microservice-Praxisbuch by Eberhard Wolff

Arc42

Software systems architecture by Nick Rozanski and Eoin Woods

Microservice Patterns by Chris Richardson

Modulith First – Der angemessene Weg zu Microservices by Herbert Dowalil

Architektur Spicker #8

Microservice FAQ by Jimmy Bogard

SOA is dead; long live services by Anne Thomas Manes

Introduction

Information is key to the success of cyber criminals. It is the driver that enables them to destroy, steal and extort. Cyber criminals are great detectives. They unite scraps of information from various sources into a nefarious plan.
“What’s the big deal?”, you may ask. You are the big deal, because you could be their next target. Your online presence puts you at risk. Reducing your personal online content is an important weapon in the fight against cyber criminals.

What distinguishes a junior developer from a senior, and the senior from a software architect? This is a commonly asked question and there are plenty of very good sources out there. One argument can be found in every blog post about this topic: a junior mostly makes small decisions and consumes knowledge. With increasing seniority, the level of decision making and knowledge sharing increases. This is why we often find people like "advocates", "fellows" or "heroes" at conferences and summits.
I went to one of these summits (the Munich Software Architecture Summit) and want to share my experiences about the sessions and talks here. Let’s start with the keynote of the first day.