Testing microservices. Review of Building Microservices.

A few weeks ago I came across an awesome book, “Building Microservices” by Sam Newman. The author covers a lot of useful topics about microservices such as testing, deployment, monitoring, etc. So, I decided to write my review of the parts most interesting to me and take out notable quotes from the book.

Background

“A key driver to ensuring we can release our software frequently is based on the idea that we release small changes as soon as they are ready.”

There are a lot of approaches to testing a monolithic application. However, a distributed system brings new impediments. Every change within one microservice can impact other microservices and eventually break the system. How can we be sure that a microservice is ready for release and the whole system won’t be broken? Which tests should be written and which test strategies should be used?

Solution

The author describes 3 levels of tests:

Unit tests
These are tests that typically test a single function or method call. The tests are generated as a side effect of test-driven design (TDD). Done right, they are very, very fast, and on modern hardware you could expect to run many thousands of these in less than a minute. The prime goal of these tests is to give us very fast feedback about whether our functionality is good.

Integration Tests (Service)
The reason we want to test a single service by itself is to improve the isolation of the test to make finding and fixing problems faster. To achieve this isolation, we need to stub out all external collaborators so only the service itself is in scope.

End-to-End
End-to-end tests are tests run against your entire system. These tests cover a lot of production code. So when they pass, you feel good: you have a high degree of confidence that the code being tested will work in production.

Microservices Testing
Sam explains the idea of testing pyramid: “As you go up the pyramid, the test scope increases, as does our confidence that the functionality being tested works. On the other hand, the feedback cycle time increases as the tests take longer to run, and when a test fails it can be harder to determine which functionality has broken.”

But how do we implement end-to-end tests? We need to deploy multiple services together and then run a test against all of them. Usually, there is a separate application (an extra microservice) which contains the e2e tests. The system is a black box for such tests. I would suggest using BDD scenarios and appropriate frameworks (e.g. Cucumber, JBehave, etc.).

Microservices Testing Types

 

In that case, the business provides BDD scenarios, the QA team writes BDD tests and runs them against the black box. Thereby developers can change the structure of the system in the future without impacting these tests.
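
Just to illustrate the black-box idea (this sketch is mine, not from the book): an e2e check only touches the deployed system through its public API. The service URLs and payloads below are made up, and in a real suite the same steps would be expressed as BDD scenarios in Cucumber or JBehave:

#!/usr/bin/env bash
# Hypothetical black-box smoke check: the deployed system is exercised only via its public API.
CUSTOMERS_URL=http://localhost:8081/customers/resources/customers
ORDERS_URL=http://localhost:8080/orders/resources/orders

# "Given" a new customer exists
curl -sf -X POST -H "Content-Type: application/json" -d '{"name":"John"}' $CUSTOMERS_URL || exit 1
# "When" an order is created for that customer, "Then" the call succeeds
curl -sf -X POST -H "Content-Type: application/json" -d '{"customer":"John"}' $ORDERS_URL || exit 1
echo "e2e smoke check passed"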

However, there is a warning from the author – “Be careful with the amount of end-to-end tests. Show me a codebase where every new story results in a new end-to-end test, and I’ll show you a bloated test suite that has poor feedback cycles and huge overlaps in test coverage.”

Useful links

  1. Building Microservices by Sam Newman
  2. http://stackoverflow.com/a/7876055

 



Integration technologies for microservices. Review of Building Microservices.

A few weeks ago I came across an awesome book, “Building Microservices” by Sam Newman. The author covers a lot of useful topics about microservices such as testing, deployment, monitoring, etc. So, I decided to write my review of the parts most interesting to me and take out notable quotes from the book.

Background

The next useful topic is communication between microservices. The author describes common requirements for an ideal integration technology, compares asynchronous and synchronous communication, and considers different options such as RPC, REST, ActiveMQ, etc.

Solutions

Requirements

There are some generic requirements for an integration technology:

  1. Avoid breaking changes – if a microservice adds new fields to the data it sends out, existing consumers shouldn’t be impacted. Here we can recall Postel’s law: “Be liberal in what you accept, and conservative in what you send” (see the sketch after this list).
  2. Technology agnostic – avoid an integration technology that dictates which technology stacks we can use.
  3. Simple for consumers – if you decide to use REST, there are a lot of libraries for different frameworks and languages. As another option, you can provide a client library for your APIs.
  4. Hide internal implementation – any integration technology that pushes us to expose internal representation details should be avoided.
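
To make the first requirement concrete (my own sketch, not from the book): a tolerant reader picks out only the fields it actually needs, so new fields added by the producer don’t break it. The endpoint below is hypothetical and jq is assumed to be installed:

#!/usr/bin/env bash
# Tolerant reader: read only the fields we use, ignore everything else.
# If the customer service starts sending extra fields, this consumer keeps working.
curl -s http://localhost:8080/customers/resources/customers/1 | jq -r '.name'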

Integration via Shared Database

The most commonly used type of integration is the shared database. But Sam makes a good point: “Database integration makes it easy for services to share data, but does nothing about sharing behavior.” Because of that, the author insists on avoiding database integration at all costs.

Integration via REST

Strongly consider REST as a good starting point for request/response integration. There are a lot of frameworks that help us create RESTful web services.

Integration via Events

The author emphasizes the following benefits – “Event-based model provides highly decoupled collaboration. The business logic is not centralized into core brains, but instead pushed out more evenly to the various collaborators. The client that emits an event doesn’t have any way of knowing who or what will react to it, which also means that you can add new subscribers to these events without the client ever needing to know.”

Libraries

A good example is the well-known Netflix company. Netflix’s libraries handle service discovery, failure modes, logging, and other aspects that aren’t actually about the nature of the service itself.

“Decide whether or not you are going to insist on the client library being used, or if you’ll allow people using different technology stacks to make calls to the underlying API.”


Benefits of Microservices. Review of Building Microservices.

A few weeks ago I came across an awesome book, “Building Microservices” by Sam Newman. The author covers a lot of useful topics about microservices such as testing, deployment, monitoring, etc. So, I decided to write my review of the parts most interesting to me.

Background

In the first part the author describes what microservices are and the advantages of a microservices-based architecture compared to a monolithic application.

Solution

The author defines the 7 most important benefits of using microservices, which I’d like to split into 3 groups: benefits for development, benefits for delivering the product, and extra benefits:


Here is a short description of each benefit:

  1. Technology Diversity, OR pick the right tool for each job. If one part of our system needs to improve its performance, we might decide to use a different technology stack that is better able to achieve the required performance levels.
  2. Productive Development Teams – smaller teams working on smaller codebases tend to be more productive.
  3. Fast and Simple Deployment – make a change to a single service and deploy it independently of the rest of the system. As a result we can get new functionality out to customers faster.
  4. Optimized Scaling – scale only those services that need scaling.
  5. Resilience and Fault Tolerance – if one component of a system fails, the rest of the system can carry on working.
  6. Reusability – your service could be consumed in different ways for different purposes.
  7. Replaceability, OR forget the phrase “it’s too big and risky a job”. It’s much easier to replace small services with a better implementation.

 

Useful Links

  1. Building Microservices by Sam Newman

 

Maven archetype for Java EE 7 microservices-based application

Java EE fits perfectly for microservices-based applications. The most useful features are:

  1. RESTful API using JAX-RS
  2. JSON and XML support using JAXB
  3. Contexts and Dependency Injection using CDI
  4. Database support using JPA
  5. Integration and end-to-end testing using Arquillian
  6. The same war file can be deployed to Glassfish, Payara and Wildfly containers
  7. A really thin war archive, since all dependencies are provided by the container

I use a simple maven archetype for generating a Java EE microservice.

#!/usr/bin/env bash
mvn archetype:generate \
-DarchetypeGroupId=com.agritsik.maven.archetypes \
-DarchetypeArtifactId=javaee7-micro \
-DarchetypeVersion=1.0-beta-1
# mvn test -Parquillian-glassfish
# OR
# mvn test -Parquillian-wildfly

As a result you will get a maven project with two predefined maven profiles which allow you to run integration and end-to-end tests within embedded glassfish and wildfly containers.

Here is the maven archetype and an example microservice which contains a RESTful API, a DB layer, plus a docker container for a quick start.


How to create a maven java project

If you need to create a maven project, you can generate it using the maven archetype “maven-archetype-quickstart”. Here is an example of how to use this archetype:

#!/usr/bin/env bash
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=com.agritsik.samples.app \
-DartifactId=test-app
# mvn package && java -cp target/test-app-1.0-SNAPSHOT.jar com.agritsik.samples.app.App
# Output example: Hello World!

As a result you will get the following structure of your project:
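
In short, the quickstart archetype produces the standard maven layout, roughly (the package path follows the groupId used above):

test-app/
├── pom.xml
└── src
    ├── main/java/com/agritsik/samples/app/App.java
    └── test/java/com/agritsik/samples/app/AppTest.java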


Default structure for the maven project

 

How to wait for another docker container startup

There is a common situation when you have an application in one docker container and that application uses mysql (or any other database, rabbitmq, etc.) from another container. And you need to start your app only when all the other containers are up and running and listening on the expected port.

The most straightforward solution is to use the netcat utility, which allows reading from and writing to network connections using the TCP or UDP protocol. Here is an example:

# note, netcat utility should be installed in docker container
while ! nc -z DB 3306; do sleep 3; done
# DB is available here, so we can start our application
# java -jar /app.jar

The ‘-z’ option specifies that nc should just scan for listening daemons, without sending any data to them.
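
For completeness, here is one way this could be wired into a container entrypoint. The service name db, the port and the jar path are just assumptions for the example, and netcat still has to be installed in the image:

#!/usr/bin/env bash
# entrypoint.sh (hypothetical): block until the "db" container accepts connections, then start the app
while ! nc -z db 3306; do
  echo "waiting for db:3306 ..."
  sleep 3
done
exec java -jar /app.jar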


Is a docker container up and running?

How to remove all docker containers and images

Here is a simple but useful .sh script which I use in all my projects for cleaning up my docker containers and unused docker images.

#!/usr/bin/env bash
# Remove all stopped containers
docker rm -v $(docker ps -a -q)
# Remove all untagged images
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

The very first command removes all stopped containers. Note that the docker ps command displays only running containers. If you add the “-a” param, docker ps returns all stopped and running containers. Using the “-q” param we ask docker to display container IDs only. Since we don’t pass “-f” to docker rm, running containers are left untouched.

The second command removes all untagged images (i.e. images that have been replaced by a newer image with the same tag).
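
As far as I know, on recent docker versions the same cleanup can be done with the built-in dangling filter instead of grep/awk:

#!/usr/bin/env bash
# Remove all dangling (untagged) images using docker's own filter
docker rmi $(docker images -f "dangling=true" -q)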


How to remove docker containers and images?

How to check your RESTful Web Service via curl or wget commands

How can you check your RESTful web service if you don’t have access via a browser, or you just prefer the command line? This is a frequent issue when you develop or deploy RESTful applications. There are 2 popular tools for it: curl and wget.

curl is a tool to transfer data from or to a server. It supports a lot of different protocols such as HTTP, HTTPS, FTP, IMAP, POP3, SCP, SMTP, TELNET and more…
wget is just a non-interactive network downloader. It supports the HTTP, HTTPS, and FTP protocols.

I use both of them and suggest the following commands for testing web services:

#!/usr/bin/env bash
# returns response body and headers
curl -i http://localhost:8080/app/resources/countries
# returns response body and headers in verbose mode, useful for debugging
curl -v http://localhost:8080/app/resources/countries
# returns response body and headers
wget -qSO - http://localhost:8080/app/resources/countries

If you need more details regarding these tools, there is a good comparison of curl vs wget.

How to clear glassfish cache. Helps to resolve deployment exception: Inconsistent Module State

There is a small .sh script which I use for removing the glassfish cache. It helps me to resolve the glassfish error – “Exception while loading the app : Error in linking security policy for test-app-war — Inconsistent Module State”.


#!/usr/bin/env bash
rm -rf $GLASSFISH_HOME/glassfish/domains/domain1/generated/*
rm -rf $GLASSFISH_HOME/glassfish/domains/domain1/osgi-cache/*
rm -rf $GLASSFISH_HOME/glassfish/domains/domain1/applications/*

Do not forget to stop the glassfish server first…


How to remove glassfish cache?

How to setup glassfish 4.1 with mysql, realm and java mail via command line only in 10 steps

For development and the CI process I prefer docker containers. But I don’t use docker in production. So, here is how to set up glassfish with mysql, configure a connection pool and a jdbc resource, a realm, java mail, etc. via the command line only.

  1. Let’s create a linux user with home directory /home/glassfish/ for glassfish

    useradd -m glassfish   # -m makes sure the home directory is created
    passwd glassfish
    
  2. Download glassfish server glassfish-4.1.zip and extract it into glassfish home directory
    cd /home/glassfish/
    curl -o glassfish-4.1.zip http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip
    unzip glassfish-4.1.zip -d glassfish-4.1
    cd glassfish-4.1
    
  3. Download mysql driver into lib directory
    curl http://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.34/mysql-connector-java-5.1.34.jar -o glassfish/lib/mysql-connector-java-5.1.34.jar
    
  4. Change admin password and create passwordfile which will save time in future 🙂

    touch ../glassfish-password.txt
    echo "AS_ADMIN_PASSWORD=<new-password>" > ../glassfish-password.txt
    sh bin/asadmin change-admin-password --user=admin
    sh bin/asadmin start-domain
    
  5. Enable remote administration

    sh bin/asadmin enable-secure-admin --passwordfile=../glassfish-password.txt
    sh bin/asadmin stop-domain && sh bin/asadmin start-domain
    
  6. Create a jdbc connection pool and a jdbc resource for your database (you can verify the pool with the sketch after step 10).

    sh bin/asadmin create-jdbc-connection-pool --restype=javax.sql.DataSource --datasourceclassname=com.mysql.jdbc.jdbc2.optional.MysqlDataSource --property url="jdbc\\:mysql\\://localhost\\:3306/mydb?zeroDateTimeBehavior\\=convertToNull&useUnicode\\=true&characterSetResults\\=utf8&characterEncoding\\=utf8":user=myuser:password=mypass jdbc/app-cp --passwordfile=../glassfish-password.txt
     
    sh bin/asadmin create-jdbc-resource --connectionpoolid jdbc/app-cp jdbc/app --passwordfile=../glassfish-password.txt
    
  7. Create realm configuration.

    sh bin/asadmin create-auth-realm --classname com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm --property="jaas-context=jdbcRealm:datasource-jndi=jdbc\\/app:user-table=v_user_role:user-name-column=login:password-column=password:group-table=v_user_role:group-name-column=group_name:digest-algorithm=SHA-256" jdbc-realm --passwordfile=../glassfish-password.txt
    
  8. Create Java Mail resource

    sh bin/asadmin  create-javamail-resource --mailhost localhost --mailuser admin\@yourdomain\.com --fromaddress admin\@yourdomain\.com --property mail.smtp.port=25 mail/yourmail --passwordfile=../glassfish-password.txt
    
  9. It’s strongly recommended to disable the auto reload and auto redeploy settings for a production instance. You can accomplish this by disabling the corresponding options on the “Domain/Application Configuration” page, or from the command line (see the sketch after step 10).

  10. Just restart glassfish and you are ready to deploy your app

    sh bin/asadmin stop-domain && sh bin/asadmin start-domain
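
As an optional follow-up for steps 6 and 9 (these commands are my own additions, not part of the original setup): ping-connection-pool lets you verify the pool, and auto reload / auto redeploy can also be switched off with asadmin set. The dotted names below are an assumption on my side, so list them with asadmin get first:

# verify the connection pool created in step 6
sh bin/asadmin ping-connection-pool jdbc/app-cp --passwordfile=../glassfish-password.txt
# step 9 from the command line; the dotted names are assumed, check them with "get" first
sh bin/asadmin get "server.admin-service.das-config.*" --passwordfile=../glassfish-password.txt
sh bin/asadmin set server.admin-service.das-config.dynamic-reload-enabled=false --passwordfile=../glassfish-password.txt
sh bin/asadmin set server.admin-service.das-config.autodeploy-enabled=false --passwordfile=../glassfish-password.txt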