Migrating Cacoo to microservices

Cacoo Staff

July 06, 2020

Since its beta release in 2009, Cacoo had been developed and deployed as a monolithic application. The challenges posed by architectures centered on a monolith are well known in the software industry:

  • a monolith is harder to update as even small changes may unpredictably ripple through the codebase
  • swapping libraries in and out always comes with the risk of breaking things
  • new hires need more time to become productive
  • technical debt builds up faster and swamps refactoring efforts
  • the application is much harder to scale while keeping it cost-effective

With the earliest versions of Cacoo, we experienced all of these in one way or another. But it wasn’t until a few years ago — when the web framework on which Cacoo was built, Seasar2, ceased to be actively maintained — that we started considering alternative architectures. Switching to a new framework would have required us to change the entire codebase anyway. So we took the opportunity to redesign our architecture from the ground up and chose microservices to do it.

Why microservices?

What specifically pushed us toward microservices is the distributed nature of our team: we have members in Fukuoka, New York, and Amsterdam. With microservices, each component can live in its own repository, so it’s easier to assign ownership and set up efficient Git workflows. This, in turn, enables working on the same project simultaneously across time zones, reduces merge conflicts, and speeds up build times and test cycles. Our front-end developers no longer need to wait for a full build before trying out a simple UI fix.

Furthermore, since each component runs independently, we can decouple different parts of the system even further. We defined a solid set of specifications and API contracts to abstract away the internals of each service. By doing so, the implementation details of one service cease to matter to the others, and it became easier to map and understand the data flow between services. Our developers now have more freedom to choose the best languages, libraries, and tools for the job, and we can roll out refactorings and experimental features with minimal impact on other components.

A new tech stack

At Nulab, we value creativity and learning. The overhaul of Cacoo’s architecture was the perfect opportunity to try out new patterns and introduce more modern technology to our stack. We chose Kubernetes to manage our microservices, Go to develop them, gRPC with Protocol Buffers to design the interfaces, and RabbitMQ for asynchronous messaging.

Kubernetes

Kubernetes is a popular open-source container orchestration platform that has become the de facto standard in this area. Our adoption of Kubernetes in Cacoo was gradual. At first, we containerized the old monolithic application and ran it on Amazon ECS to gain flexibility in deployment methods and monitoring.

At the same time, we started porting the Cacoo core engine to microservices written in Go. We split the functionality in several passes, refining the inclusion criteria each time. For example, we first identified two main categories: the Cacoo Editor (the UI you use to create and view your diagrams) and the Cacoo Dashboard (the page you use to organize your folders and manage your settings). We then split the Dashboard functions into logically consistent groups, e.g., folder management, diagram management, subscriptions and payments, integrations, etc. These became our first core microservices. We build the containers with Docker and deploy them to Kubernetes.

Currently, we have two Kubernetes clusters — one for production and one for development. We use namespaces to further separate the production cluster into two environments, the actual production and beta, which we use for the last round of live testing before a big release.
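The production/beta separation can be expressed declaratively with namespace manifests. A minimal sketch, assuming namespace names like ours (the names here are illustrative, not our actual configuration):

```yaml
# Two namespaces carving one cluster into separate environments.
# Deployments are then targeted with, e.g., `kubectl apply -n beta -f ...`.
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: beta
```

Because namespaces scope resource names, the same service manifests can be applied to both environments without renaming anything.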

We use the Horizontal Pod Autoscaler to scale Pods and the Cluster Autoscaler to scale the nodes on which the Kubernetes clusters run.

Horizontal scaling — i.e., adding more small machines rather than adding more resources to a single machine — to meet sudden load spikes is one of the main benefits of a distributed architecture. Because microservices are limited in scope by design, adding one more instance is cheap, both in computing resources and in human effort, since Kubernetes can be configured to adjust the number of service replicas automatically based on the actual application load.
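An autoscaling policy of the kind described above can be written as a HorizontalPodAutoscaler manifest. This is a sketch only; the service name and thresholds are hypothetical, not Cacoo's actual settings:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: diagram-service        # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: diagram-service
  minReplicas: 2               # baseline for availability
  maxReplicas: 10              # cap to keep costs bounded
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

Kubernetes then adds or removes replicas between the two bounds as observed CPU utilization crosses the target.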

A typical use case for horizontal scaling is an asynchronous task queue. If the consumers of the queue aren’t fast enough, unprocessed tasks build up and the queue becomes a bottleneck. Adding one more consumer helps process more tasks in the same amount of time, increasing the throughput of the system.

Go, gRPC and Protobuf

Today at Cacoo, we develop the majority of our backend services in Go. The main draws of Go are fast compilation, a small runtime, and self-contained binaries. It also integrates nicely with other technologies in our containerized infrastructure, such as gRPC — an RPC (Remote Procedure Call) framework we use to implement most of our server-side code — and Protocol Buffers, a data serialization framework.

In particular, gRPC uses Protocol Buffers as its interface definition language. The Cacoo codebase includes a repository containing only .proto files describing the services and message types, which also documents and versions our API structure without extra hoops. Our automated build pipelines then generate the boilerplate Go or Java code from these files, which both client and server implementations import as libraries.

Example of a gRPC protobuffer definition – ./greeter.proto
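A minimal definition in this style might look like the following (the package name and Go module path are illustrative):

```protobuf
syntax = "proto3";

package greeter;

option go_package = "example.com/greeter";  // hypothetical module path

// The greeting service definition.
service Greeter {
  // Sends a greeting.
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greeting.
message HelloReply {
  string message = 1;
}
```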

The Go code automatically generated from it – ./greeter.pb.go
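From a definition like the one above, protoc's Go plugins emit declarations of roughly this shape (heavily abridged; the real generated file also contains marshaling, registration, and stub code):

```go
// Abridged, illustrative sketch of generated code; not compilable on its own.

// HelloRequest and HelloReply mirror the message definitions.
type HelloRequest struct {
	Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
}

type HelloReply struct {
	Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
}

// GreeterClient is the client-side interface generated for the service.
type GreeterClient interface {
	SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
}

// GreeterServer is the interface a server implementation must satisfy.
type GreeterServer interface {
	SayHello(context.Context, *HelloRequest) (*HelloReply, error)
}
```

The client interface is what callers import, and the server interface is what each microservice implements; neither side ever touches the wire format directly.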

RabbitMQ

RabbitMQ is a message broker for implementing asynchronous, event-driven business logic, and it plays an important role in distributed architectures: it minimizes codependency between microservices and improves the separation of concerns. After migrating Cacoo’s core functions to microservices, several workflows still relied on the monolith, but with RabbitMQ we avoided any tight coupling with the legacy code.

Later down the line, RabbitMQ also proved a valuable solution when implementing the event-driven system that powers Cacoo’s in-app notifications, webhooks, and emails.
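The fan-out pattern behind those notifications can be sketched with a toy in-process event bus. RabbitMQ provides the same idea across processes, adding persistence, routing, and acknowledgements; the service names below are hypothetical:

```go
package main

import "fmt"

// Event is a minimal stand-in for a message published to a broker.
type Event struct {
	Name    string
	Payload string
}

// Bus is a toy in-process publish/subscribe hub, illustrating how a
// publisher stays decoupled from its consumers.
type Bus struct {
	subscribers map[string][]func(Event)
}

func NewBus() *Bus {
	return &Bus{subscribers: make(map[string][]func(Event))}
}

// Subscribe registers a handler for a given event name.
func (b *Bus) Subscribe(name string, fn func(Event)) {
	b.subscribers[name] = append(b.subscribers[name], fn)
}

// Publish delivers the event to every subscriber. The publisher knows
// nothing about who consumes the event, or how many consumers exist.
func (b *Bus) Publish(e Event) {
	for _, fn := range b.subscribers[e.Name] {
		fn(e)
	}
}

func main() {
	bus := NewBus()
	// Hypothetical consumers; in a real deployment these would be
	// separate services bound to the same exchange.
	bus.Subscribe("diagram.shared", func(e Event) {
		fmt.Println("notification service: notify about", e.Payload)
	})
	bus.Subscribe("diagram.shared", func(e Event) {
		fmt.Println("webhook service: POST payload for", e.Payload)
	})
	bus.Publish(Event{Name: "diagram.shared", Payload: "diagram-42"})
}
```

Adding a new consumer (say, an email service) requires no change to the publisher, which is the decoupling property the broker buys us.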

Going forward

The new microservice architecture delivered on its promises. Our team members can work with a greater degree of autonomy. They can write the code that matters faster, without the ghost of legacy code lingering on new projects. They are free to make whatever choice they feel is the best one at the time.

On the other hand, the new architecture comes with its own peculiar challenges.

Working out where the cut-off lines are between services is an ongoing process. Generally, we try to keep services as small as possible but large enough to avoid splitting operations that need to hit the database transactionally.

Also, microservices are prone to code duplication, so we must find the right balance between copy-pasting and depending too much on shared libraries.

Tracing and debugging are also harder when a single frontend request may travel through four or five services before returning. We currently adopt the OpenTracing standard, which has good support for Go, with Zipkin as the UI for inspecting request lifecycles across separate backend components.

Microservices also require more DevOps effort for things like managing and monitoring the containers, setting up automated pipelines, and maintaining configuration files.

Overall, microservices are an excellent fit for us. They helped us solve many pain points, improved our development processes and the cohesion of our team, and allowed us to learn new patterns. Although the migration from the legacy monolith is still a work in progress, we already have a solid, scalable, flexible architecture to power the application millions of users open every day.
