Continuous Delivery with OpenShift and Jenkins: A/B Testing

One of the reasons you could decide to use OpenShift instead of some other container platforms (for example plain Kubernetes) is its out-of-the-box support for continuous delivery pipelines. Without proper tools the process of releasing software in your organization may be really time-consuming and painful. The speed of that process becomes especially important if you deliver software to production frequently. Currently, the most popular use case for it is a microservices-based architecture, where you have many small, independent applications.
CI/CD on OpenShift is built around Jenkins. OpenShift provides a verified Jenkins container for building continuous delivery pipelines and scales pipeline execution through on-demand provisioning of Jenkins slaves in containers. Jenkins is still the leading automation server and provides many plugins that support building and deploying. One of these plugins is the OpenShift Jenkins Pipeline (DSL) Plugin, which is enabled by default in the predefined Jenkins template available inside the OpenShift service catalog. It is not the only plugin enabled on the OpenShift Jenkins image. In fact, OpenShift comes with a default set of Jenkins plugins required for building applications from source code and interacting with the cluster, which is a very useful feature.
We can implement some more advanced deployment strategies on OpenShift, such as Blue/Green Deployment or A/B Testing. A/B deployments imply running at least two versions of the application code or application configuration at the same time for testing or experimentation purposes. In this article, I'm going to describe an implementation of such an A/B deployment on OpenShift using Jenkins declarative pipelines and OpenShift routes.

Running OpenShift

For testing purposes you can run a single-node OpenShift instance locally via Minishift or create a free account on OpenShift Online. The process of installing and configuring a Minishift instance has already been described in some of my previous articles, for example Quick guide to deploying Java apps on OpenShift. An OpenShift Online account has a resource quota limitation: you can use 2GB of RAM and 4 CPU cores inside a single project (only one is allowed). Those limits are generally enough for our example.
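
If you go the Minishift way, a local instance with resources matching those limits could be started like this (a sketch; the flag values are assumptions, adjust them to your machine):

$ minishift start --cpus 2 --memory 4GB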

Running Jenkins

You can easily run Jenkins on OpenShift by selecting template Jenkins in Service Catalog.

jenkins-openshift-1

You just need to select the name of the target project. All other properties may keep their default values. It is just worth considering a change to the memory limit if you have a free account on OpenShift Online.

jenkins-openshift-2
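
Alternatively, the same template can be provisioned from the command line with oc new-app (the ephemeral variant and the MEMORY_LIMIT value below are just examples):

$ oc new-app jenkins-ephemeral -p MEMORY_LIMIT=512Mi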

Sample Application

Our sample application code is, as usual, available on GitHub: https://github.com/piomin/sample-spring-kotlin-microservice.git. It is a simple Spring Boot application written in Kotlin that exposes a REST API for managing a single custom object, with Swagger documentation and some monitoring endpoints available under the path /actuator/*. The Swagger definition includes not only information about the API, but also the application version taken from pom.xml and Git commit details taken from git.properties. The same information is also available under the /actuator/info endpoint.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            var buildInfo = build.get()
            var gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
            .apiInfo(apiInfo(version))
            .select()
            .apis(RequestHandlerSelectors.any())
            .paths { it == "/persons" }
            .build()
            .useDefaultResponseMessages(false)
            .forCodeGeneration(true)
    }
}

Thanks to that implementation you will be able to easily check the version of the application deployed on OpenShift. That is useful during tests of our sample A/B deployment pipeline. Each time I deliver a new version of the application to OpenShift I'm going to increase the version number stored inside pom.xml, starting from 1.0.

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.2.RELEASE</version>
</parent>
<groupId>pl.piomin.services</groupId>
<artifactId>sample-spring-kotlin-microservice</artifactId>
<version>1.0</version>

Deploying application

Before starting development of the Jenkins pipeline we will perform an initial deployment of our application on OpenShift. To do that we just need to run the following command using the OpenShift client. I'm using the S2I builder image provided by fabric8. By default, the name of the application is the same as the repository name. It can be overridden using the --name parameter. In the command visible below I have overridden it with the shorter name sample-app.

$ oc new-app fabric8/s2i-java~https://github.com/piomin/sample-spring-kotlin-microservice.git --name sample-app

Then we should expose our application outside the cluster using an OpenShift route by executing the command oc expose:

$ oc expose svc sample-app

After running that command our project on OpenShift contains a single instance of the Jenkins master and a single instance of the sample application, as shown below.

jenkins-openshift-3

Step 1: Deploying and tagging previous version of application

After creating an initial deployment of our sample application we may proceed to building the Jenkins pipeline. I'm using OpenShift Online, so my instance of Jenkins is available under the URL https://console.pro-eu-west-1.openshift.com. Each Jenkins pipeline is divided into stages. The first stage of our pipeline is responsible for tagging the old image of the application and deploying it as a new application under a new name containing the version number. The version number is taken from the OpenShift deployment number (1). This version is used for tagging the latest version of our application image (2). Based on the tagged version of the image we create a new deployment under a new name containing the deployment version as a suffix (3). Finally, we wait until the new deployment has finished successfully (4).

stage('Deploy Previous') {
  steps {
    script {
      openshift.withCluster() {
        openshift.withProject() {
          def appName = "sample-app"
          def ver = openshift.selector('dc', appName).object().status.latestVersion //(1)
          println "Version: ${ver}"
          env.VERSION = ver
          openshift.tag("${appName}:latest", "${appName}:${ver}") //(2)
          def dcNew = openshift.newApp("--image-stream=piomin-cicd/${appName}:${ver}", "--name=${appName}-v${ver}").narrow('dc') //(3)
          def verNew = dcNew.object().status.latestVersion
          println "New deployment: ${verNew}"
          def rc = openshift.selector('rc', "sample-app-v${ver}-1")
          timeout(5) { //(4)
            rc.untilEach(1) {
              def rcMap = it.object()
              return (rcMap.status.replicas.equals(rcMap.status.readyReplicas))
            }
          }
        }
      }
    }
  }
}

Step 2: Building and Deploying Latest Version

Before running the sample pipeline we should increase the version of our application in pom.xml. The version number after the change is 1.1.

jenkins-openshift-5

The second stage of our pipeline, Build and Deploy Latest, is responsible for deploying the newest version of the application by running a build from source code. First, we need to find the build config (1) and start a new build from it (2). Within the build, OpenShift checks out the newest version of the code stored in the master branch, runs the Maven build command and builds an image containing the application fat JAR. Finally, it rolls out the deployment with the latest version of the image. The pipeline waits until the build has finished successfully (3).

stage('Build and Deploy Latest') {
  steps {
    script {
      openshift.withCluster() {
        openshift.withProject() {
          def appName = "sample-app"
          def bc = openshift.selector('bc', appName) //(1)
          bc.startBuild() //(2)
          def builds = bc.related("builds")
          timeout(5) { //(3)
            builds.untilEach(1) {
              return (it.object().status.phase == "Complete")
            }
          }
        }
      }
    }
  }
}

After that stage the situation inside our OpenShift project looks as shown below. We have the newest version of the application under deployment sample-app with number #2, and the previous version of the application under deployment sample-app-v1.

jenkins-openshift-3

The newest image has been pushed, while the older one is tagged with the version taken from the deployment, as shown below.

jenkins-openshift-4

Step 3: Updating Route to Enable A/B Testing

An A/B deployment may be easily realized using an OpenShift route. Once we have deployed the newest version of our application, and the previous version under a new deployment, we should update the route sample-app to include the second service as an alternative service for the route. After finding the right route (1) we should add the alternateBackends field that contains the list of alternate services (2). The name of the service is determined by the previous sample-app deployment version. After modifying the object we just need to apply the current configuration (3).

stage('Set A-B Route') {
  steps {
    script {
      openshift.withCluster() {
        openshift.withProject() {
          def route = openshift.selector("routes", "sample-app") //(1)
          println "Route: ${route}"
          def routeObj = route.object()
          println "Route: ${routeObj}"
          routeObj.spec.alternateBackends = []
          routeObj.spec.alternateBackends[0] = ["kind": "Service","name": "sample-app-v${env.VERSION}", "weight": 100] //(2)
          openshift.apply(routeObj) //(3)
        }
      }
    }
  }
}

Here's the current route definition visible in the OpenShift console:

jenkins-openshift-6

Step 4: Disabling A/B Testing

This is the last stage of our pipeline. It waits for input confirmation (1) before proceeding. After confirmation it disables the A/B Testing feature for the route by setting alternateBackends to null (3) and applying the configuration (4). Finally, we delete the deployment with the previous version of the application (5).

stage('Disabling A/B Testing') {
  steps {
    script {
      input message: "Continue ?" //(1)
      openshift.withCluster() {
        openshift.withProject() {
          def route = openshift.selector("routes", "sample-app") //(2)
          println "Route: ${route}"
          def routeObj = route.object()
          println "Route: ${routeObj}"
          routeObj.spec.alternateBackends = null //(3)
          openshift.apply(routeObj) //(4)
          openshift.selector("dc", "sample-app-${env.VERSION}").delete() //(5)
        }
      }
    }
  }
}
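
For reference, the four stages described above are placed inside a standard declarative pipeline skeleton. Here's a minimal sketch (the agent declaration is an assumption; on OpenShift the pipeline is usually defined in a BuildConfig with the Jenkinsfile strategy):

pipeline {
  agent any
  stages {
    // paste the four stage('...') blocks from Steps 1-4 here, in order:
    // 'Deploy Previous', 'Build and Deploy Latest',
    // 'Set A-B Route', 'Disabling A/B Testing'
  }
}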

Testing

Once you have started the pipeline, it prepares your environment for A/B Testing. Then it waits for confirmation, which removes the previous version of the application and sets a single target service inside the route. So, before confirming you can try A/B Testing by calling the endpoint /actuator/info exposed within the route. My route is available under the URL http://sample-app-piomin-cicd.e4ff.pro-eu-west-1.openshiftapps.com/actuator/info.
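
Assuming the primary service keeps its default weight, traffic should be split roughly evenly between versions 1.0 and 1.1. A simple way to observe both versions responding is to call the endpoint in a loop:

$ for i in $(seq 1 10); do curl -s http://sample-app-piomin-cicd.e4ff.pro-eu-west-1.openshiftapps.com/actuator/info; echo; done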

jenkins-openshift-7

The endpoint /actuator/info prints details about the Maven build version and the Git commit.

{
  "git":{
    "commit":{
      "time":"2019-05-17T13:36:35Z",
      "id":"7d985a2"
    },
    "branch":"master"
  },
  "build":{
    "version":"1.1",
    "artifact":"sample-spring-kotlin-microservice",
    "name":"sample-spring-kotlin-microservice",
    "group":"pl.piomin.services",
    "time":"2019-05-17T13:38:41.728Z"
  }
}

Logging with Spring Boot and Elastic Stack

In this article I'll introduce my library for logging, designed especially for Spring Boot RESTful web applications. The main assumptions regarding this library are:

  • Logging all incoming HTTP requests and outgoing HTTP responses with full body
  • Integration with Elastic Stack through Logstash using logstash-logback-encoder library
  • Possibility for enabling logging on a client-side for most commonly used components in Spring Boot application: RestTemplate and OpenFeign
  • Generating and propagating correlationId across all communication within a single API endpoint call
  • Calculating and storing execution time for each request
  • A library should be auto-configurable – you don't have to do anything more than include it as a dependency in your application to make it work

Motivation

I guess that after reading the preface to this article you may ask why I decided to build such a library, since Spring Boot already has such features. But does it really have them? It may be quite surprising, but the answer is no. While you may easily log an HTTP request using some built-in Spring components like CommonsRequestLoggingFilter, you don't have any out-of-the-box mechanism for logging the response body. Of course you may implement your own custom solution based on a Spring HTTP interceptor (HandlerInterceptorAdapter) or filter (OncePerRequestFilter), but it is not as simple as you might think. The second option is to use Zalando Logbook, an extensible Java library that enables complete request and response logging for different client-side and server-side technologies. It is a very interesting library dedicated especially to logging HTTP requests and responses, which provides many customization options and supports different clients. So, for more advanced use cases you may always reach for that library.
My goal is to create a simple library that not only logs requests and responses, but also provides auto-configuration for sending these logs to Logstash and correlating them. It will also automatically generate some valuable statistics, like the request processing time. All such values should be sent to Logstash. Let's proceed to the implementation.

Implementation

Let's start with dependencies. We need some basic Spring libraries, which are included in spring-web, and spring-context, which provides some additional annotations. For integration with Logstash we use the logstash-logback-encoder library. Slf4j contains the abstraction for logging, while javax.servlet-api is needed for HTTP communication. Commons IO is not required, but it offers some useful methods for manipulating input and output streams.

<properties>
	<java.version>11</java.version>
	<commons-io.version>2.6</commons-io.version>
	<javax-servlet.version>4.0.1</javax-servlet.version>
	<logstash-logback.version>5.3</logstash-logback.version>
	<spring.version>5.1.6.RELEASE</spring.version>
	<slf4j.version>1.7.26</slf4j.version>
</properties>
<dependencies>
	<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-context</artifactId>
		<version>${spring.version}</version>
	</dependency>
	<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-web</artifactId>
		<version>${spring.version}</version>
	</dependency>
	<dependency>
		<groupId>net.logstash.logback</groupId>
		<artifactId>logstash-logback-encoder</artifactId>
		<version>${logstash-logback.version}</version>
	</dependency>
	<dependency>
		<groupId>javax.servlet</groupId>
		<artifactId>javax.servlet-api</artifactId>
		<version>${javax-servlet.version}</version>
		<scope>provided</scope>
	</dependency>
	<dependency>
		<groupId>commons-io</groupId>
		<artifactId>commons-io</artifactId>
		<version>${commons-io.version}</version>
	</dependency>
	<dependency>
		<groupId>org.slf4j</groupId>
		<artifactId>slf4j-api</artifactId>
		<version>${slf4j.version}</version>
	</dependency>
</dependencies>

The first step is to implement HTTP request and response wrappers. We have to do it because it is not possible to read an HTTP stream twice. If you would like to log the request or response body, you have to read the input stream before processing, or the output stream before returning it to the client. Spring provides an implementation of HTTP request and response wrappers, but for unknown reasons they support only some specific use cases, like the application/x-www-form-urlencoded content type. Because we usually use the application/json content type in communication between RESTful applications, Spring's ContentCachingRequestWrapper and ContentCachingResponseWrapper won't be useful here.
Here's my implementation of the HTTP request wrapper. This can be done in various ways; this is just one of them:

public class SpringRequestWrapper extends HttpServletRequestWrapper {

    private byte[] body;

    public SpringRequestWrapper(HttpServletRequest request) {
        super(request);
        try {
            body = IOUtils.toByteArray(request.getInputStream());
        } catch (IOException ex) {
            body = new byte[0];
        }
    }

    @Override
    public ServletInputStream getInputStream() throws IOException {
        return new ServletInputStream() {
            public boolean isFinished() {
                return false;
            }

            public boolean isReady() {
                return true;
            }

            public void setReadListener(ReadListener readListener) {

            }

            ByteArrayInputStream byteArray = new ByteArrayInputStream(body);

            @Override
            public int read() throws IOException {
                return byteArray.read();
            }
        };
    }
}

The same thing has to be done for the output stream. This implementation is a little bit more complicated:

public class SpringResponseWrapper extends HttpServletResponseWrapper {

	private ServletOutputStream outputStream;
	private PrintWriter writer;
	private ServletOutputStreamWrapper copier;

	public SpringResponseWrapper(HttpServletResponse response) throws IOException {
		super(response);
	}

	@Override
	public ServletOutputStream getOutputStream() throws IOException {
		if (writer != null) {
			throw new IllegalStateException("getWriter() has already been called on this response.");
		}

		if (outputStream == null) {
			outputStream = getResponse().getOutputStream();
			copier = new ServletOutputStreamWrapper(outputStream);
		}

		return copier;
	}

	@Override
	public PrintWriter getWriter() throws IOException {
		if (outputStream != null) {
			throw new IllegalStateException("getOutputStream() has already been called on this response.");
		}

		if (writer == null) {
			copier = new ServletOutputStreamWrapper(getResponse().getOutputStream());
			writer = new PrintWriter(new OutputStreamWriter(copier, getResponse().getCharacterEncoding()), true);
		}

		return writer;
	}

	@Override
	public void flushBuffer() throws IOException {
		if (writer != null) {
			writer.flush();
		}
		else if (outputStream != null) {
			copier.flush();
		}
	}

	public byte[] getContentAsByteArray() {
		if (copier != null) {
			return copier.getCopy();
		}
		else {
			return new byte[0];
		}
	}

}

I moved the ServletOutputStream wrapper implementation out into a separate class:

public class ServletOutputStreamWrapper extends ServletOutputStream {

	private OutputStream outputStream;
	private ByteArrayOutputStream copy;

	public ServletOutputStreamWrapper(OutputStream outputStream) {
		this.outputStream = outputStream;
		this.copy = new ByteArrayOutputStream();
	}

	@Override
	public void write(int b) throws IOException {
		outputStream.write(b);
		copy.write(b);
	}

	public byte[] getCopy() {
		return copy.toByteArray();
	}

	@Override
	public boolean isReady() {
		return true;
	}

	@Override
	public void setWriteListener(WriteListener writeListener) {

	}
}

Because we need to wrap both the HTTP request stream and the response stream before processing, we should use an HTTP filter for that. Spring provides its own implementation of an HTTP filter. Our filter extends it and uses the custom request and response wrappers to log payloads. Additionally, it generates and sets the X-Request-ID and X-Correlation-ID headers, and measures the request processing time.

public class SpringLoggingFilter extends OncePerRequestFilter {

    private static final Logger LOGGER = LoggerFactory.getLogger(SpringLoggingFilter.class);
    private UniqueIDGenerator generator;

    public SpringLoggingFilter(UniqueIDGenerator generator) {
        this.generator = generator;
    }

    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws ServletException, IOException {
        generator.generateAndSetMDC(request);
        final long startTime = System.currentTimeMillis();
        final SpringRequestWrapper wrappedRequest = new SpringRequestWrapper(request);
        LOGGER.info("Request: method={}, uri={}, payload={}", wrappedRequest.getMethod(),
                wrappedRequest.getRequestURI(), IOUtils.toString(wrappedRequest.getInputStream(),
                wrappedRequest.getCharacterEncoding()));
        final SpringResponseWrapper wrappedResponse = new SpringResponseWrapper(response);
        wrappedResponse.setHeader("X-Request-ID", MDC.get("X-Request-ID"));
        wrappedResponse.setHeader("X-Correlation-ID", MDC.get("X-Correlation-ID"));
        chain.doFilter(wrappedRequest, wrappedResponse);
        final long duration = System.currentTimeMillis() - startTime;
        LOGGER.info("Response({} ms): status={}, payload={}", value("X-Response-Time", duration),
                value("X-Response-Status", wrappedResponse.getStatus()),
                IOUtils.toString(wrappedResponse.getContentAsByteArray(), wrappedResponse.getCharacterEncoding()));
    }
}
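
The UniqueIDGenerator used by the filter is not shown in this article. A minimal sketch of it, assuming it only generates ids and stores them in the MDC (reusing an incoming X-Correlation-ID header if present), could look like this:

public class UniqueIDGenerator {

    public void generateAndSetMDC(HttpServletRequest request) {
        // every request gets a fresh id
        MDC.put("X-Request-ID", UUID.randomUUID().toString());
        // reuse the correlation id sent by the caller, or start a new one
        String correlationId = request.getHeader("X-Correlation-ID");
        if (correlationId == null) {
            correlationId = UUID.randomUUID().toString();
        }
        MDC.put("X-Correlation-ID", correlationId);
    }
}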

Auto-configuration

Once we have finished the implementation of the wrappers and the HTTP filter, we may prepare auto-configuration for our library. The first step is to create a @Configuration class that contains all the required beans. We have to register our custom HTTP filter SpringLoggingFilter, the logger appender for integration with Logstash, and a RestTemplate with an HTTP client interceptor:

@Configuration
public class SpringLoggingAutoConfiguration {

	private static final String LOGSTASH_APPENDER_NAME = "LOGSTASH";

	@Value("${spring.logstash.url:localhost:8500}")
	String url;
	@Value("${spring.application.name:-}")
	String name;

	@Bean
	public UniqueIDGenerator generator() {
		return new UniqueIDGenerator();
	}

	@Bean
	public SpringLoggingFilter loggingFilter() {
		return new SpringLoggingFilter(generator());
	}

	@Bean
	public RestTemplate restTemplate() {
		RestTemplate restTemplate = new RestTemplate();
		List<ClientHttpRequestInterceptor> interceptorList = new ArrayList<ClientHttpRequestInterceptor>();
		restTemplate.setInterceptors(interceptorList);
		return restTemplate;
	}

	@Bean
	public LogstashTcpSocketAppender logstashAppender() {
		LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
		LogstashTcpSocketAppender logstashTcpSocketAppender = new LogstashTcpSocketAppender();
		logstashTcpSocketAppender.setName(LOGSTASH_APPENDER_NAME);
		logstashTcpSocketAppender.setContext(loggerContext);
		logstashTcpSocketAppender.addDestination(url);
		LogstashEncoder encoder = new LogstashEncoder();
		encoder.setContext(loggerContext);
		encoder.setIncludeContext(true);
		encoder.setCustomFields("{\"appname\":\"" + name + "\"}");
		encoder.start();
		logstashTcpSocketAppender.setEncoder(encoder);
		logstashTcpSocketAppender.start();
		loggerContext.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(logstashTcpSocketAppender);
		return logstashTcpSocketAppender;
	}

}
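
The interceptor list registered for RestTemplate above is empty in this fragment; the actual client-side interceptor belongs to the library but is not listed in this article. A hypothetical sketch of such an interceptor, assuming it only propagates the correlation id from the MDC to outgoing requests, might look like this:

public class CorrelationHeaderInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
            ClientHttpRequestExecution execution) throws IOException {
        // pass the current correlation id downstream so logs can be joined
        String correlationId = MDC.get("X-Correlation-ID");
        if (correlationId != null) {
            request.getHeaders().add("X-Correlation-ID", correlationId);
        }
        return execution.execute(request, body);
    }
}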

The configuration set inside the library has to be loaded by Spring Boot. Spring Boot checks for the presence of a META-INF/spring.factories file within your published jar. The file should list your configuration classes under the EnableAutoConfiguration key:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
pl.piomin.logging.config.SpringLoggingAutoConfiguration

Integration with Logstash

Integration with Logstash is realized through the auto-configured logging appender. We can override the Logstash destination URL by setting the property spring.logstash.url in the application.yml file:

spring:
  application:
    name: sample-app
  logstash:
    url: 192.168.99.100:5000

To enable all the features described in this article in your application you just need to include my library in the dependencies:

<dependency>
	<groupId>pl.piomin</groupId>
	<artifactId>spring-boot-logging</artifactId>
	<version>1.0-SNAPSHOT</version>
</dependency>

Before running your application you should start the Elastic Stack tools on your machine. The best way to do that is through Docker containers. But first let's create a Docker network to enable communication between containers via container names.

$ docker network create es

Now, let's start a single-node instance of Elasticsearch exposed on port 9200. I use version 6.7.2 of the Elastic Stack tools:

$ docker run -d --name elasticsearch --net es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.7.2

When running Logstash we need to provide an additional configuration that contains input and output definitions. We will start a TCP input with the JSON codec, which is not enabled by default. The Elasticsearch URL is set as an output. Logstash will also create an index containing the name of the application.

input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "micro-%{appname}"
  }
}

Now we can start a Docker container with Logstash. It is exposed on port 5000 and reads its configuration from the logstash.conf file:

$ docker run -d --name logstash --net es -p 5000:5000 -v ~/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:6.7.2

Finally, we can run Kibana, which is used just for displaying logs:

$ docker run -d --name kibana --net es -e "ELASTICSEARCH_URL=http://elasticsearch:9200" -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.2

After starting my sample application, which uses the spring-boot-logging library, the logs from POST requests are displayed in Kibana as shown below:

logging-1

Each response log entry contains the X-Correlation-ID, X-Request-ID, X-Response-Time and X-Response-Status fields.

logging-2

Summary

My Spring logging library is available on GitHub in the repository https://github.com/piomin/spring-boot-logging.git. I'm still working on it, so any feedback or suggestions are very welcome. This library is dedicated to use in a microservices-based architecture, where your applications may be launched in many instances inside containers. In this model, storing logs in files does not make any sense. That's why integration with the Elastic Stack is so important.
But the most important feature of this library is logging the HTTP request/response with the full body, together with some additional information like the correlation id or the request processing time. The library is really simple and small, and everything works out-of-the-box after including it in your application.

Micronaut Tutorial: Security

This is the third part of my tutorial to the Micronaut Framework. This time we will discuss the most interesting Micronaut security features. I have already described the core mechanisms for IoC and dependency injection in the first part of this tutorial, and I have also created a guide to building a simple REST server-side application in the second part, which you may refer to for more details.

Security is an essential part of every web application. Easily configurable, built-in web security mechanisms are something that every modern micro-framework must have. It is no different with Micronaut. In this part of my tutorial you will learn how to:

  • Build custom authentication provider
  • Configure and test basic authentication for your HTTP API
  • Secure your HTTP API using JSON Web Tokens
  • Enable communication over HTTPS

Enabling security

To enable security for a Micronaut application you should first include the following dependency in your pom.xml:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-security</artifactId>
</dependency>

The next step is to enable the security feature through application properties:

micronaut:
  security:
    enabled: true

Setting the property micronaut.security.enabled to true enables security for all existing controllers. Because we already have a controller, which was used as an example in the previous part of this tutorial, we should disable security for it. To do that, I have annotated it with @Secured(SecurityRule.IS_ANONYMOUS). It allows anonymous access to all endpoints implemented inside the controller.

@Controller("/persons")
@Secured(SecurityRule.IS_ANONYMOUS)
@Validated
public class PersonController { ... }

Basic Authentication Provider

Once you have enabled Micronaut security, Basic Auth is active by default. All you need to do is implement your custom authentication provider, which has to implement the AuthenticationProvider interface. In fact, you just need to verify the username and password, which are both passed inside the HTTP Authorization header. Our sample authentication provider uses configuration properties as the user repository. Here's the fragment of the application.yml file that contains the list of user passwords and assigned roles:

credentials:
  users:
    smith: smith123
    scott: scott123
    piomin: piomin123
    test: test123
  roles:
    smith: ADMIN
    scott: VIEW
    piomin: VIEW
    test: ADMIN

The configuration properties are injected into the UsersStore configuration bean, which is annotated with @ConfigurationProperties. User passwords are stored inside the users map, while roles are kept inside the roles map. They are both annotated with @MapFormat and have the username as a key.

@ConfigurationProperties("credentials")
public class UsersStore {

	@MapFormat
	Map<String, String> users;
	@MapFormat
	Map<String, String> roles;

	public String getUserPassword(String username) {
		return users.get(username);
	}

	public String getUserRole(String username) {
		return roles.get(username);
	}
}

Finally, we may proceed to the authentication provider implementation. It injects the UsersStore bean that contains the list of users with passwords and roles. The overridden method should return a UserDetails object. The username and password are automatically decoded from the base64-encoded Authorization header and bound to the identity and secret fields of the AuthenticationRequest method parameter. If the input password is the same as the stored password, the provider returns a UserDetails object with roles; otherwise it returns an AuthenticationFailed response.

@Singleton
public class UserPasswordAuthProvider implements AuthenticationProvider {

    @Inject
    UsersStore store;

    @Override
    public Publisher<AuthenticationResponse> authenticate(AuthenticationRequest req) {
        String username = req.getIdentity().toString();
        String password = req.getSecret().toString();
        if (password.equals(store.getUserPassword(username))) {
            UserDetails details = new UserDetails(username, Collections.singletonList(store.getUserRole(username)));
            return Flowable.just(details);
        } else {
            return Flowable.just(new AuthenticationFailed());
        }
    }
}

Secured Controller

Now, we may create our sample secured REST controller. The following controller is just a copy of the previously described PersonController, but it also contains some Micronaut Security annotations. Through @Secured(SecurityRule.IS_AUTHENTICATED) used on the whole controller, it is available only to successfully authenticated users. This annotation may be overridden on the method level. The method for adding a new person is available only to users having the ADMIN role.

@Controller("/secure/persons")
@Secured(SecurityRule.IS_AUTHENTICATED)
public class SecurePersonController {

	List<Person> persons = new ArrayList<>();

	@Post
	@Secured("ADMIN")
	public Person add(@Body @Valid Person person) {
		person.setId(persons.size() + 1);
		persons.add(person);
		return person;
	}

	@Get("/{id:4}")
	public Optional<Person> findById(@NotNull Integer id) {
		return persons.stream()
				.filter(it -> it.getId().equals(id))
				.findFirst();
	}

	@Version("1")
	@Get("{?max,offset}")
	public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
		return persons.stream()
				.skip(offset == null ? 0 : offset)
				.limit(max == null ? 10000 : max)
				.collect(Collectors.toList());
	}

	@Version("2")
	@Get("?max,offset")
	public List<Person> findAllV2(@NotNull Integer max, @NotNull Integer offset) {
		return persons.stream()
				.skip(offset == null ? 0 : offset)
				.limit(max == null ? 10000 : max)
				.collect(Collectors.toList());
	}

}

To test the Micronaut security features used in our controller we will create a JUnit test class containing three methods. All these methods use the Micronaut HTTP client for calling the target endpoints. It provides the basicAuth method, which allows you to easily pass user credentials. The first test method, testAdd, verifies the positive scenario of adding a new person. The test user smith has the ADMIN role, which is required for calling this HTTP endpoint. In contrast, the method testAddFailed calls the same HTTP endpoint, but with a different user, scott, who has only the VIEW role. We expect that HTTP 403 (Forbidden) is returned by the endpoint. The same user scott has access to the GET endpoints, so we expect that the test method testFindById finishes with success.

@MicronautTest
public class SecurePersonControllerTests {

	@Inject
	EmbeddedServer server;

	@Test
	public void testAdd() throws MalformedURLException {
		HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
		Person person = new Person();
		person.setFirstName("John");
		person.setLastName("Smith");
		person.setAge(33);
		person.setGender(Gender.MALE);
		person = client.toBlocking()
				.retrieve(HttpRequest.POST("/secure/persons", person).basicAuth("smith", "smith123"), Person.class);
		Assertions.assertNotNull(person);
		Assertions.assertEquals(Integer.valueOf(1), person.getId());
	}

	@Test
	public void testAddFailed() throws MalformedURLException {
		HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
		Person person = new Person();
		person.setFirstName("John");
		person.setLastName("Smith");
		person.setAge(33);
		person.setGender(Gender.MALE);
		Assertions.assertThrows(HttpClientResponseException.class,
				() -> client.toBlocking().retrieve(HttpRequest.POST("/secure/persons", person).basicAuth("scott", "scott123"), Person.class),
				"Forbidden");
	}

	@Test
	public void testFindById() throws MalformedURLException {
		HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
		Person person = client.toBlocking()
				.retrieve(HttpRequest.GET("/secure/persons/1").basicAuth("scott", "scott123"), Person.class);
		Assertions.assertNotNull(person);
	}
}

Enable HTTPS

Our controller is secured, but the HTTP server is not. Micronaut by default starts the server with SSL disabled. However, it supports HTTPS out of the box. To enable HTTPS support you should first set the property micronaut.ssl.enabled to true. By default Micronaut with HTTPS enabled starts on port 8443, but you can override that using the property micronaut.ssl.port.
We will enable HTTPS only for a single JUnit test class. To do that we first create the file src/test/resources/ssl.yml with the following configuration:

micronaut:
  ssl:
    enabled: true
    buildSelfSigned: true

Micronaut simplifies SSL configuration for test purposes. It turns out we don't have to generate any keystore or certificate if we use the property micronaut.ssl.buildSelfSigned. Otherwise you would have to generate a keystore by yourself. That is not difficult if you are creating a self-signed certificate; you may use openssl or keytool for that. Here's a suitable keytool command for generating a keystore, although it is worth pointing out that the Micronaut documentation recommends using openssl:

$ keytool -genkey -alias server -keystore server.jks

If you decide to generate a self-signed certificate by yourself, you have to configure it:

micronaut:
  ssl:
    enabled: true
    keyStore:
      path: classpath:server.keystore
      password: 123456
      type: JKS

The last step is to create a JUnit test that uses the configuration provided in the file ssl.yml.

@MicronautTest(propertySources = "classpath:ssl.yml")
public class SecureSSLPersonControllerTests {

	@Inject
	EmbeddedServer server;
	
	@Test
	public void testFindById() throws MalformedURLException {
		HttpClient client = HttpClient.create(new URL(server.getScheme() + "://" + server.getHost() + ":" + server.getPort()));
		Person person = client.toBlocking()
				.retrieve(HttpRequest.GET("/secure/persons/1").basicAuth("scott", "scott123"), Person.class);
		Assertions.assertNotNull(person);
	}
	
	// other tests ...

}

JWT Authentication

To enable JWT token-based authentication we first need to include the following dependency in pom.xml:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-security-jwt</artifactId>
</dependency>

Token authentication is enabled by default through the TokenConfigurationProperties properties (micronaut.security.token.enabled). However, we should enable JWT-based authentication by setting the property micronaut.security.token.jwt.enabled to true. This change allows us to use JWT authentication for our sample application. We also need to be able to generate the authentication token used for authorization. To do that we should enable the /login endpoint and set some configuration properties for the JWT token generator. In the following fragment of application.yml I set HMAC with SHA-256 as the hash algorithm for the JWT signature generator:

micronaut:
  security:
    enabled: true
    endpoints:
      login:
        enabled: true
    token:
      jwt:
        enabled: true
        signatures:
          secret:
            generator:
              secret: pleaseChangeThisSecretForANewOne
              jws-algorithm: HS256

Now, we can call the endpoint POST /login with the username and password in a JSON body, as shown below:

$ curl -X "POST" "http://localhost:8100/login" -H 'Content-Type: application/json; charset=utf-8' -d '{"username":"smith","password":"smith123"}'
{
	"username": "smith",
	"roles": [
		"ADMIN"
	],
	"access_token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImV4cCI6MTU1NjIwMTYyMCwiaWF0IjoxNTU2MTk4MDIwfQ.by0Dx73QIZeF4MDM4A5nHgw8xm4haPJjsu9z45psQrY",
	"refresh_token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImlhdCI6MTU1NjE5ODAyMH0.2BrdZzuvJNymZlOv56YpUPHYLDdnVAW5UXXNuz3a7xU",
	"token_type": "Bearer",
	"expires_in": 3600
}

The value of the access_token field returned in the response should be passed as a bearer token in the Authorization header of requests sent to HTTP endpoints. We can call any endpoint, for example GET /persons:

$ curl -X "GET" "http://localhost:8100/persons" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImV4cCI6MTU1NjIwMTYyMCwiaWF0IjoxNTU2MTk4MDIwfQ.by0Dx73QIZeF4MDM4A5nHgw8xm4haPJjsu9z45psQrY"

We can easily automate testing of the scenario described above. I have created UserCredentials and UserToken objects for serializing the request and deserializing the response from the /login endpoint. The token retrieved from the response is then passed as a bearer token by calling the bearerAuth method on the Micronaut HTTP client instance.

@MicronautTest
public class SecurePersonControllerTests {

	@Inject
	EmbeddedServer server;
	
	@Test
	public void testFindByIdUsingJWTToken() throws MalformedURLException {
		HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
		UserToken token = client.toBlocking().retrieve(HttpRequest.POST("/login", new UserCredentials("scott", "scott123")), UserToken.class);
		Person person = client.toBlocking()
				.retrieve(HttpRequest.GET("/secure/persons/1").bearerAuth(token.getAccessToken()), Person.class);
		Assertions.assertNotNull(person);
	}
}

Source Code

We are using the same repository as for the two previous parts of my Micronaut tutorial: https://github.com/piomin/sample-micronaut-applications.git.

Micronaut Tutorial: Server Application

In this part of my tutorial to the Micronaut framework we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut, like beans, scopes and unit testing, in the first part of this tutorial. For more details you may refer to my article Micronaut Tutorial: Beans and Scopes.

Assuming we have basic knowledge about the core mechanisms of Micronaut, we may proceed to the key part of the framework and discuss how to build a simple microservice application exposing a REST API over HTTP.

Embedded Server

First, we need to include in our pom.xml the dependency responsible for running an embedded server during application startup. By default, Micronaut starts on a Netty server, so we only need to include the following dependency:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>

Assuming we have the following main class defined, we only need to run it:

public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}

By default the Netty server runs on port 8080. You may force the server to run on a specific port by setting the following property in your application.yml or bootstrap.yml. You can also set the value of this property to -1 to run the server on a randomly generated port.

micronaut:
  server:
    port: 8100

Creating Web Application

If you are already familiar with Spring Boot, you should not have any problems with building a simple REST server-side application using Micronaut. The approach is almost identical. We just have to create a controller class and annotate it with @Controller. Micronaut supports all HTTP method types; you will probably use @Get, @Post, @Delete, @Put or @Patch. Here's our sample controller class that implements methods for adding a new Person object and finding all persons or a single person by id:

@Controller("/persons")
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id}")
    public Optional<Person> findById(Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get
    public List<Person> findAll() {
        return persons;
    }

}

Request variables are resolved automatically and bound to the method argument with the same name. Micronaut populates method arguments from URI variables like /{variableName} and from GET query parameters like ?paramName=paramValue. If the request contains a JSON body, you should annotate the corresponding argument with @Body. Our sample controller is very simple; it does not perform any input data validation. Let's change that.

Validation

To be able to perform HTTP request validation we should first include the following dependencies in our pom.xml:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-validation</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut.configuration</groupId>
	<artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

Validation in Micronaut is based on JSR-380, also known as Bean Validation 2.0. We can use javax.validation annotations such as @NotNull, @Min or @Max. Micronaut uses the implementation provided by Hibernate Validator, so even if you don't use any JPA in your project, you have to include micronaut-hibernate-validator in your dependencies. After that we may add validation to our model class using some javax.validation annotations. Here's the Person model class with validation. All the fields are required: firstName and lastName cannot be blank, id cannot be greater than 10000, and age cannot be lower than 0.

public class Person {

    @Max(10000)
    private Integer id;
    @NotBlank
    private String firstName;
    @NotBlank
    private String lastName;
    @PositiveOrZero
    private int age;
    @NotNull
    private Gender gender;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public Gender getGender() {
        return gender;
    }

    public void setGender(Gender gender) {
        this.gender = gender;
    }
	
}

Now, we need to modify the code of our controller. First, it needs to be annotated with @Validated. Also, the @Body parameter of the POST method should be annotated with @Valid. A REST method argument may also be validated using JSR-380 annotations. Alternatively, we may configure validation using URI templates. The annotation @Get("/{id:4}") indicates that the variable can contain 4 characters max (so the value is lower than 10000), while a query parameter can be marked optional as shown here: @Get("{?max,offset}").
Here's the current implementation of our controller. Besides validation, I have also implemented pagination for findAll based on the optional max and offset parameters:

@Controller("/persons")
@Validated
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(@Body @Valid Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id:4}")
    public Optional<Person> findById(@NotNull Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get("{?max,offset}")
    public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
        return persons.stream()
                .skip(offset == null ? 0 : offset)
                .limit(max == null ? 10000 : max)
                .collect(Collectors.toList());
    }

}

Since we have finished the implementation of our controller, it is the right time to test it.

Testing with embedded server

We have already discussed testing with Micronaut in the first part of my tutorial. The only difference in comparison to those tests is the necessity of running an embedded server and calling the endpoints via HTTP. To do that we have to include the dependency with the Micronaut HTTP client:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-client</artifactId>
	<scope>test</scope>
</dependency>

We should also inject an instance of the embedded server in order to be able to detect its address (for example, if the port number is generated automatically):

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;
	
	// tests implementation ...
	
}

We are building the Micronaut HTTP client programmatically by calling the static method create. It is also possible to obtain a reference to an HttpClient by annotating it with @Client.
The following test implementation is based on JUnit 5. I have provided a positive test for each of the exposed endpoints and one negative scenario with invalid input data (the age field lower than zero). The Micronaut HTTP client can be used in both asynchronous non-blocking mode and synchronous blocking mode. In this case we force it to work in blocking mode.
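
For example, instead of calling HttpClient.create as in the tests below, the client could be injected like this (a sketch; the root path "/" is an assumption):

@Inject
@Client("/")
HttpClient client;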

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;

    @Test
    public void testAdd() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(33);
        person.setGender(Gender.MALE);
        person = client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertEquals(Integer.valueOf(1), person.getId());
    }

    @Test
    public void testAddNotValid() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        final Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(-1);
        person.setGender(Gender.MALE);

        Assertions.assertThrows(HttpClientResponseException.class,
                () -> client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class),
                "person.age: must be greater than or equal to 0");
    }

    @Test
    public void testFindById() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = client.toBlocking().retrieve(HttpRequest.GET("/persons/1"), Person.class);
        Assertions.assertNotNull(person);
    }

    @Test
    public void testFindAll() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person[] persons = client.toBlocking().retrieve(HttpRequest.GET("/persons"), Person[].class);
        Assertions.assertEquals(1, persons.length);
    }

}

We have now built a simple web application that exposes some methods over a REST API, validates input data and includes JUnit API tests. Next, we may discuss some more advanced, interesting Micronaut features. The first of them is built-in support for API versioning.

API versioning

Since version 1.1, Micronaut supports API versioning via a dedicated @Version annotation. To test this feature we will add a new version of the findAll method to our controller class. The new version of this method requires the input parameters max and offset to be set, while they were optional in the first version of the method.

@Version("1")
@Get("{?max,offset}")
public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
	return persons.stream()
			.skip(offset == null ? 0 : offset)
			.limit(max == null ? 10000 : max)
			.collect(Collectors.toList());
}

@Version("2")
@Get("?max,offset")
public List<Person> findAllV2(@NotNull Integer max, @NotNull Integer offset) {
	return persons.stream()
			.skip(offset == null ? 0 : offset)
			.limit(max == null ? 10000 : max)
			.collect(Collectors.toList());
}

The versioning feature is not enabled by default. To enable it, you need to set the property micronaut.router.versioning.enabled to true in application.yml. We will also set the default version to 1, which keeps compatibility with the tests created in the previous section that do not use the versioning feature:

micronaut:
  router:
    versioning:
      enabled: true
      default-version: 1
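
With this configuration in place, a caller selects a concrete version by sending Micronaut's default version header, for example:

$ curl -H "X-API-VERSION: 2" "http://localhost:8100/persons?max=10&offset=0"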

Micronaut's automatic versioning is also supported by the declarative HTTP client. To create such a client we need to define an interface that contains the signature of the target server-side method and is annotated with @Client. Here's the declarative client interface responsible only for communicating with version 2 of the findAll method:

@Client("/persons")
public interface PersonClient {

    @Version("2")
    @Get("?max,offset")
    List<Person> findAllV2(Integer max, Integer offset);

}

The PersonClient declared above may be injected into a test and used for calling the API method exposed by the server-side application:

@Inject
PersonClient client;

@Test
public void testFindAllV2() {
	List<Person> persons = client.findAllV2(10, 0);
	Assertions.assertEquals(1, persons.size());
}

API Documentation with Swagger

Micronaut provides built-in support for generating OpenAPI/Swagger YAML documentation at compilation time. We can customize the produced documentation using standard Swagger annotations. To enable this support for our application we should add the following swagger-annotations dependency to pom.xml, and enable annotation processing for the micronaut-openapi module inside the Maven compiler plugin configuration:

<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>
...
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-compiler-plugin</artifactId>
	<version>3.7.0</version>
	<configuration>
		<source>${jdk.version}</source>
		<target>${jdk.version}</target>
		<compilerArgs>
			<arg>-parameters</arg>
		</compilerArgs>
		<annotationProcessorPaths>
			<path>
				<groupId>io.micronaut</groupId>
				<artifactId>micronaut-inject-java</artifactId>
				<version>${micronaut.version}</version>
			</path>
			<path>
				<groupId>io.micronaut.configuration</groupId>
				<artifactId>micronaut-openapi</artifactId>
				<version>${micronaut.version}</version>
			</path>
		</annotationProcessorPaths>
	</configuration>
	...
</plugin>

We have to include some basic information in the generated Swagger YAML, like the application name, description, version number or author name, using the @OpenAPIDefinition annotation:

@OpenAPIDefinition(
	info = @Info(
		title = "Sample Application",
		version = "1.0",
		description = "Sample API",
		contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
	)
)
public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}

Micronaut generates the Swagger file based on the title and version fields inside the @Info annotation. In our case the YAML definition file is available under the name sample-application-1.0.yml, and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file.

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**

Assuming our application is running on port 8100, the Swagger definition is available under the path http://localhost:8100/swagger/sample-application-1.0.yml. You can call this endpoint and copy the response into any Swagger editor, as shown below.

micronaut-6

Management and Monitoring Endpoints

Micronaut provides some built-in HTTP endpoints for management and monitoring. To enable them for the application we first need to include the following dependency:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>

No endpoints are exposed outside the application by default. If you would like to expose them all, you should set the property endpoints.all.enabled to true. Alternatively, you can enable or disable a single endpoint by using its id instead of all in the property name. Also, some of the built-in endpoints require authentication and some do not; you may enable or disable that for all endpoints using the property endpoints.all.sensitive. The following configuration inside application.yml enables all built-in endpoints except the stop endpoint used for graceful shutdown of the application, and disables authentication for all the enabled endpoints:

endpoints:
  all:
    enabled: true
    sensitive: false
  stop:
    enabled: false

You may use one of the following:

  • GET /beans – returns information about the loaded bean definitions
  • GET /info – returns static information from the state of the application
  • GET /health – exposes a healthcheck
  • POST /refresh – refreshes the application state; all beans annotated with @Refreshable will be reloaded
  • GET /routes – returns information about URIs exposed by the application
  • GET /loggers – returns information about the available loggers
  • GET /caches – returns information about the caches
  • POST /stop – shuts down the application server
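
For example, with the configuration above the health endpoint can be called without authentication (the response shown here is abbreviated):

$ curl http://localhost:8100/health
{"status":"UP"}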

Summary

In this tutorial you have learned how to:

  • Build a simple application that exposes some HTTP endpoints
  • Validate input data inside controller
  • Test your controller with JUnit 5 on embedded Netty using Micronaut HTTP client
  • Use built-in API versioning
  • Generate Swagger API documentation automatically
  • Use built-in management and monitoring endpoints

The first part of my tutorial is available here: https://piotrminkowski.wordpress.com/2019/04/15/micronaut-tutorial-beans-and-scopes/. It uses the same repository as the current part: https://github.com/piomin/sample-micronaut-applications.git.

Micronaut Tutorial: Beans and Scopes

Micronaut is a relatively new JVM-based framework. It is especially designed for building modular, easily testable microservice applications. Micronaut is heavily inspired by the Spring and Grails frameworks, which is not a surprise if we consider that it has been developed by the creators of the Grails framework. It is based on Java’s annotation processing, IoC (Inversion of Control) and DI (Dependency Injection).

Micronaut implements the JSR-330 (java.inject) specification for dependency injection. It supports constructor injection, field injection, JavaBean and method parameter injection. In this part of the tutorial I’m going to give some tips on how to:

  • define and register beans in the application context
  • use built-in scopes
  • inject configuration to your application
  • automatically test your beans during application build with JUnit 5

Prerequisites

Before we proceed to the development we need to create a sample project with dependencies. Here’s the list of Maven dependencies used in the application created for this tutorial:

<dependencies>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-inject</artifactId>
	</dependency>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-runtime</artifactId>
	</dependency>
	<dependency>
		<groupId>io.micronaut</groupId>
		<artifactId>micronaut-inject-java</artifactId>
		<scope>provided</scope>
	</dependency>
	<dependency>
		<groupId>ch.qos.logback</groupId>
		<artifactId>logback-classic</artifactId>
		<version>1.2.3</version>
		<scope>runtime</scope>
	</dependency>
	<dependency>
		<groupId>io.micronaut.test</groupId>
		<artifactId>micronaut-test-junit5</artifactId>
		<scope>test</scope>
	</dependency>
</dependencies>

We will use the newest stable version of Micronaut – 1.1.0:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>io.micronaut</groupId>
			<artifactId>micronaut-bom</artifactId>
			<version>1.1.0</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

The sample application source code is available on Github in the repository https://github.com/piomin/sample-micronaut-applications.git.

Scopes

Micronaut provides 6 built-in scopes for beans. Following JSR-330, additional scopes can be added by defining a @Singleton bean that implements the CustomScope interface. Here’s the list of built-in scopes:

    • Singleton – singleton pattern for the bean
    • Prototype – a new instance of the bean is created each time it is injected. It is the default scope for beans
    • ThreadLocal – a custom scope that associates a bean per thread via a ThreadLocal
    • Context – a bean is created at the same time as the ApplicationContext
    • Infrastructure – a @Context bean that cannot be replaced
    • Refreshable – a custom scope that allows a bean’s state to be refreshed via the /refresh endpoint

Thread Local

Two of those scopes are really interesting. Let’s begin with the @ThreadLocal scope. That’s something that is not available for beans in Spring: we can associate a bean with a thread using a single annotation.
How does it work? First, let’s define a bean with the @ThreadLocal scope. It holds a single value in the field correlationId. The main function of this bean is to pass the same id between different singleton beans within a single thread. Here’s our sample bean:

@ThreadLocal
public class MiddleService {

    private String correlationId;

    public String getCorrelationId() {
        return correlationId;
    }

    public void setCorrelationId(String correlationId) {
        this.correlationId = correlationId;
    }

}

Each singleton bean injects our bean annotated with @ThreadLocal. There are 2 sample singleton beans defined:

@Singleton
public class BeginService {

    @Inject
    MiddleService service;

    public void start(String correlationId) {
        service.setCorrelationId(correlationId);
    }

}

@Singleton
public class FinishService {

    @Inject
    MiddleService service;

    public String finish() {
        return service.getCorrelationId();
    }

}

Testing

Testing with Micronaut and JUnit 5 is very simple. We have already included the micronaut-test-junit5 dependency in our pom.xml. Now, we only have to annotate the test class with @MicronautTest.
Here’s our test. It runs 20 tasks on a pool of 10 threads; each task uses the @ThreadLocal bean through the BeginService and FinishService singletons. Each task sets a randomly generated correlation id and checks that those two singleton beans see the same correlationId.

@MicronautTest
public class ScopesTests {

    @Inject
    BeginService begin;
    @Inject
    FinishService finish;

    @Test
    public void testThreadLocalScope() throws InterruptedException {
        final Random r = new Random();
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 20; i++) {
            executor.execute(() -> {
                String correlationId = "abc" + r.nextInt(10000);
                begin.start(correlationId);
                // both singletons should see the same thread-bound MiddleService instance
                Assertions.assertEquals(correlationId, finish.finish());
            });
        }
        executor.shutdown();
        // wait for all the tasks to finish instead of busy-waiting on isTerminated()
        executor.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("Finished all threads");
    }
	
}

Refreshable

The @Refreshable scope is another interesting scope offered by Micronaut. You can refresh the state of such a bean by calling the HTTP endpoint /refresh or by publishing a RefreshEvent to the application context. Because we don’t use an HTTP server, the second option is the right choice for us. First, let’s define a bean with the @Refreshable scope. It injects a value from a configuration property and returns it:

@Refreshable
public class RefreshableService {

    @Property(name = "test.property")
    String testProperty;

    @PostConstruct
    public void init() {
        System.out.println("Property: " + testProperty);
    }

    public String getTestProperty() {
        return testProperty;
    }

}

To test it we should first replace the value of test.property. After injecting the ApplicationContext into the test we may add a new property source programmatically by calling the method addPropertySource. Because this type of property source has a higher loading priority than the properties from application.yml, the original value is overridden. Now, we just need to publish a new refresh event to the context, and call the method on the sample bean one more time:

@Inject
ApplicationContext context;
@Inject
RefreshableService refreshable;

@Test
public void testRefreshableScope() {
	String testProperty = refreshable.getTestProperty();
	Assertions.assertEquals("hello", testProperty);
	context.getEnvironment().addPropertySource(PropertySource.of(CollectionUtils.mapOf("test.property", "hi")));
	context.publishEvent(new RefreshEvent());
	try {
		Thread.sleep(1000);
	} catch (InterruptedException e) {
		e.printStackTrace();
	}
	testProperty = refreshable.getTestProperty();
	Assertions.assertEquals("hi", testProperty);
}

Beans

In the previous section we have already defined simple beans with different scopes. Micronaut provides some more advanced features that can be used while defining new beans. You can create conditional beans, define replacements for existing beans, or use different methods of injecting configuration into a bean.

Conditions and Replacements

In order to define conditions for a newly created bean we need to annotate it with @Requires. Micronaut offers many possibilities for defining configuration requirements. You will always use the same annotation, but a different field for each option. You can require:

  • the presence of one or more classes – @Requires(classes=...)
  • the absence of one or more classes – @Requires(missing=...)
  • the presence of one or more beans – @Requires(beans=...)
  • the absence of one or more beans – @Requires(missingBeans=...)
  • a property with an optional value – @Requires(property="...")
  • a property to not be part of the configuration – @Requires(missingProperty="...")
  • the presence of one or more files in the file system – @Requires(resources="file:...")
  • the presence of one or more classpath resources – @Requires(resources="classpath:...")

And some others. Now, let’s consider a simple example including some selected conditional strategies. Here’s a class that requires the property test.property to be available in the environment.

@Prototype
@Requires(property = "test.property")
public class TestPropertyRequiredService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}

Here’s another bean definition. It requires that the property test.property2 is not available in the environment. This bean also replaces another bean through the annotation @Replaces(bean = TestPropertyRequiredValueService.class).

@Prototype
@Requires(missingProperty = "test.property2")
@Replaces(bean = TestPropertyRequiredValueService.class)
public class TestPropertyNotRequiredService {

    public String getTestProperty() {
        return "None";
    }
    
}

Here’s the last sample bean declaration. There is one interesting option related to beans conditional on a property: you can require the property to have a certain value, not to have a certain value, and use a default in those checks if it’s not set. Also, the following bean is the one replaced by the TestPropertyNotRequiredService shown above.

@Prototype
@Requires(property = "test.property", value = "hello", defaultValue = "Hi!")
public class TestPropertyRequiredValueService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}

The result of the following test is predictable:

@Inject
TestPropertyRequiredService service1;
@Inject
TestPropertyNotRequiredService service2;
@Inject
TestPropertyRequiredValueService service3;

@Test
public void testPropertyRequired() {
	String testProperty = service1.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("hello", testProperty);
}

@Test
public void testPropertyNotRequired() {
	String testProperty = service2.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("None", testProperty);
}

@Test
public void testPropertyValueRequired() {
	String testProperty = service3.getTestProperty();
	Assertions.assertNotNull(testProperty);
	Assertions.assertEquals("hello", testProperty);
}

Application Configuration

Configuration in Micronaut takes inspiration from both Spring Boot and Grails, integrating configuration properties from multiple sources directly into the core IoC container. Configuration can by default be provided in Java properties, YAML, JSON or Groovy files. There are 7 levels of priority for property sources (for comparison, Spring Boot provides 17 levels):

  1. Command line arguments
  2. Properties from SPRING_APPLICATION_JSON
  3. Properties from MICRONAUT_APPLICATION_JSON
  4. Java System Properties
  5. OS environment variables
  6. Environment-specific properties from application-{environment}.{extension}
  7. Application-specific properties from application.{extension}
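
As an illustrative check of that ordering, the following sketch sets a Java system property (level 4) and verifies that it wins over the same key defined in application.yml (level 7). The property name test.property is just an example:

import io.micronaut.context.ApplicationContext;

public class PropertyPriorityDemo {

    public static void main(String[] args) {
        // level 4 (Java system properties) should override level 7 (application.yml)
        System.setProperty("test.property", "from-system-property");
        try (ApplicationContext context = ApplicationContext.run()) {
            String value = context.getProperty("test.property", String.class)
                    .orElse("not set");
            System.out.println(value); // prints "from-system-property"
        }
    }

}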

One of the more interesting options related to Micronaut configuration are the @EachProperty and @EachBean annotations. Both of them are used for defining multiple instances of a bean, each with its own distinct configuration.
In order to show you a sample use case for those annotations, let’s imagine that we are building a simple client-side load balancer that connects with multiple instances of a service. The configuration is available under the property test.url.* and contains only the target URL:

@EachProperty("test.url")
public class ClientConfig {

    private String name;
    private String url;

    public ClientConfig(@Parameter String name) {
        this.name = name;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Assuming we have the following configuration properties, Micronaut creates three instances of our configuration under the names: client1, client2 and client3.

test:
  url:
    client1.url: http://localhost:8080
    client2.url: http://localhost:8090
    client3.url: http://localhost:8100

Using the @EachProperty annotation was only the first step. We also need a ClientService responsible for the interaction with the target service.

public class ClientService {

    private String url;

    public ClientService(String url) {
        this.url = url;
    }

    public String connect() {
        return url;
    }
}

The ClientService is still not registered as a bean, since it is not annotated. Our goal is to inject the three ClientConfig beans containing the distinct configurations, and register three instances of the ClientService bean. That’s why we will define a bean factory with a method annotated with @EachBean. In Micronaut, a factory usually allows you to register a bean that is not a part of your codebase, but it is also useful in this case.

@Factory
public class ClientFactory {

    @EachBean(ClientConfig.class)
    ClientService client(ClientConfig config) {
        String url = config.getUrl();
        return new ClientService(url);
    }
}

Finally, we may proceed to the test. We have injected all three instances of ClientService. Each of them contains configuration injected from a different instance of the ClientConfig bean. If you don’t set any qualifier, Micronaut injects the bean containing the configuration defined first. For injecting the other instances of the bean we should use a qualifier, which is the name of the configuration property.

@Inject
ClientService client;
@Inject
@Named("client2")
ClientService client2;
@Inject
@Named("client3")
ClientService client3;

@Test
public void testClient() {
	String url = client.connect();
	Assertions.assertEquals("http://localhost:8080", url);
	url = client2.connect();
	Assertions.assertEquals("http://localhost:8090", url);
	url = client3.connect();
	Assertions.assertEquals("http://localhost:8100", url);
}

Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barrier is Micronaut, a framework quickly gaining popularity, especially designed for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size in MB of generated fat JAR file
  • the application startup time
  • the performance of the application, measured as the average response time from the REST endpoint during sample load testing

To make our test relevant we will gather the statistics for two almost identical applications. Of course, the only difference between them will be the framework used for building them. Our sample application is very simple. It exposes some endpoints with in-memory CRUD operations for a single entity. It also exposes info and health endpoints, as well as a Swagger API with auto-generated documentation of all endpoints.

The sample applications’ performance will be tested on JDK 11. We will use YourKit for profiling and monitoring memory usage after startup and during load testing, and Gatling for building performance API tests. First, let’s perform a short overview of our sample application.

Source Code

I have implemented a very simple in-memory repository bean that adds a new object into the list and provides a find method for searching for an object by the id generated during the add operation.

public class PersonRepository {

    List<Person> persons = new ArrayList<>();

    public Person add(Person person) {
        // ids are of type Long, so cast the generated int value explicitly
        person.setId((long) (persons.size() + 1));
        persons.add(person);
        return person;
    }

    public Person findById(Long id) {
        return persons.stream()
                .filter(a -> a.getId().equals(id))
                .findFirst()
                .orElse(null);
    }

    public List<Person> findAll() {
        return persons;
    }

}

The repository bean is injected into the controller. The controller exposes three HTTP endpoints. The first (POST) is used for adding a new object, the second (GET /{id}) for searching for it by id, and the third (GET) for listing all objects. Here’s the controller implementation inside the Spring Boot application:

@RestController
@RequestMapping("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Autowired
    PersonRepository repository;

    @PostMapping
    public Person add(@RequestBody Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable("id") Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @GetMapping
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}

Here’s the similar implementation for Micronaut:

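A minimal sketch, assuming the Micronaut controller mirrors the Spring version shown above – the @Controller, @Post and @Get annotations replace their Spring counterparts, and the path variable is bound by the parameter name:

import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Post;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.inject.Inject;
import java.util.List;

@Controller("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Inject
    PersonRepository repository;

    @Post
    public Person add(@Body Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @Get("/{id}")
    public Person findById(Long id) {
        LOGGER.info("Person find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll();
    }

}
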
To implement REST endpoints, healthcheck and Swagger API we need to include some dependencies. Here’s the list of dependencies for Spring Boot:

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.1.4.RELEASE</version>
</parent>
<groupId>pl.piomin.services</groupId>
<artifactId>sample-app</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
	<java.version>11</java.version>
	<maven.compiler.source>${java.version}</maven.compiler.source>
	<maven.compiler.target>${java.version}</maven.compiler.target>
</properties>
<dependencies>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-actuator</artifactId>
	</dependency>
	<dependency>
		<groupId>io.springfox</groupId>
		<artifactId>springfox-swagger2</artifactId>
		<version>2.9.2</version>
	</dependency>
</dependencies>

Here’s the similar list of dependencies required for Micronaut:

<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
	<groupId>io.micronaut</groupId>
	<artifactId>micronaut-inject-java</artifactId>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>io.swagger.core.v3</groupId>
	<artifactId>swagger-annotations</artifactId>
</dependency>
<dependency>
	<groupId>ch.qos.logback</groupId>
	<artifactId>logback-classic</artifactId>
	<version>1.2.3</version>
	<scope>runtime</scope>
</dependency>

I also had to provide some additional configuration in application.yml to enable Swagger and healthchecks:

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false

Starting Application

First, let’s start our applications. I use IntelliJ for that. The sample application built on Spring Boot starts in around 6-7 seconds. The run shown below took exactly 6.344 seconds.

performance-1

The similar application built on top of Micronaut starts in around 3-4 seconds. The run shown below took exactly 3.463 seconds. However, I had to disable environment deduction when starting the application behind a corporate proxy, by setting the VM option -Dmicronaut.cloud.platform=BARE_METAL. I think that the startup time for both applications is really fine.

performance-7

Here’s the graph that illustrates the difference in startup time between Spring Boot and Micronaut.

performance-sum-1

Building Application

We will also check the size of the application fat JAR. To do that you should build the application using the mvn clean install command. For Spring Boot we used two standard starters, Web and Actuator, and the Swagger SpringFox library. As a result there are more than 50 libraries included. Of course, we could have made some exclusions or avoided the starters, but I have chosen the simplest way to build an application. The fat JAR has a size of 24.2 MB.
The similar application based on Micronaut is much smaller. Its fat JAR has a size of 12.1 MB. I have included more libraries in pom.xml, and in the end there were 37 libraries included.
Spring Boot includes more libraries in the standard configuration, but on the other hand it has more features and auto-configuration than Micronaut.

Here’s the graph that illustrates the difference in size of the target JAR between Spring Boot and Micronaut.

performance-sum-2

Memory Management

Just after startup the Spring Boot application has allocated 305 MB for heap and 81 MB for non-heap. I haven’t set any memory limit using -Xmx or any other option. In the heap, 8 MB has been consumed by old gen, 60 MB by eden space, and 15 MB by survivor space. Most of the non-heap memory was consumed by metaspace – 52 MB.
After running the performance load test, heap allocation increased to 369 MB and non-heap to 87 MB. Here’s the screen that illustrates CPU and RAM usage before and during the performance test.

performance-2

Just after startup the Micronaut application has allocated 254 MB for heap and 51 MB for non-heap. I haven’t set any memory limit using -Xmx or any other option – the same as for the Spring Boot application. In the heap, 2.5 MB has been consumed by old gen, 20 MB by eden space, and 7 MB by survivor space. Most of the non-heap memory was consumed by metaspace – 35 MB.
After running the performance load test, heap allocation did not change, while non-heap increased to 63 MB. Here’s the screen that illustrates CPU and RAM usage before and during the performance test.

performance-8

Here’s heap memory usage comparison between Spring Boot and Micronaut just after startup.

performance-sum-3

And non-heap.

performance-sum-4

Performance Tests

I used Gatling for building the performance load tests. This tool allows you to create test scenarios in Scala. We are generating 40k sample requests sent simultaneously by 20 users. Here’s the test implemented for the POST method.

class SimpleTest extends Simulation {

  val scn = scenario("AddPerson").repeat(2000, "n") {
    exec(http("Persons-POST")
      .post("http://localhost:8080/persons")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"name":"Test${n}","gender":"MALE","age":100}"""))
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}

Here’s the test implemented for the GET method.

class SimpleTest2 extends Simulation {

  val scn = scenario("GetPerson").repeat(2000, "n") {
    exec(http("Persons-GET")
      .get("http://localhost:8080/persons/${n}")
      .check(status.is(200)))
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, TimeUnit.MINUTES))

}

The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1176.

performance-3

The following screen shows the histogram with response time percentiles over time.

performance-5

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1428.

performance-4

The following screen shows the histogram with response time percentiles over time.

performance-6

Now, we run the same Gatling load test against the Micronaut application. The result of the performance test for the POST /persons method is visible in the picture below. The average number of requests processed per second is 1290.

performance-9

The following screen shows the histogram with response time percentiles over time.

performance-11

The result of the performance test for the GET /persons/{id} method is visible in the picture below. The average number of requests processed per second is 1538.

performance-10

The following screen shows the histogram with response time percentiles over time.

performance-12

There are no big differences in processing time between Spring Boot and Micronaut. It’s possible that the small differences in time are not related to the framework, but rather to the underlying server. By default, Spring Boot uses Tomcat, while Micronaut uses Netty.

The Future of Spring Cloud Microservices After Netflix Era

If somebody asked you about Spring Cloud, the first thing that comes into your mind would probably be Netflix OSS support. Support for tools like Eureka, Zuul or Ribbon is provided not only by Spring, but also by some other popular frameworks used for building microservices architectures like Apache Camel, Vert.x or Micronaut. Currently, Spring Cloud Netflix is the most popular project within Spring Cloud. It has around 3.2k stars on GitHub, while the second best has around 1.4k. Therefore, it is quite surprising that Pivotal has announced that most of the Spring Cloud Netflix modules are entering maintenance mode. You can read more about it in the post published on the Spring blog by Spencer Gibb: https://spring.io/blog/2018/12/12/spring-cloud-greenwich-rc1-available-now.
Ok, let’s perform a short summary of those changes. Starting from the Spring Cloud Greenwich release train, Netflix OSS Archaius, Hystrix, Ribbon and Zuul are entering maintenance mode. It means that there won’t be any new features in these modules, and the Spring Cloud team will only perform some bug fixes and fix security issues. Maintenance mode does not include the Eureka module, which is still supported.
The explanation of these changes is pretty easy, especially for two of them. Currently, Ribbon and Hystrix are not actively developed by Netflix, although they are still deployed at scale. Additionally, Hystrix has already been superseded by the new solution for telemetry called Atlas. The situation with Zuul is not so obvious. Netflix announced the open sourcing of Zuul 2 in May 2018. The new version of the Zuul gateway is built on top of the Netty server, and includes some improvements and new features. You can read more about them on the Netflix blog: https://medium.com/netflix-techblog/open-sourcing-zuul-2-82ea476cb2b3. Despite that decision taken by the Netflix cloud team, the Spring Cloud team has abandoned development of the Zuul module. I can only guess that it was caused by the earlier decision to start a new module inside the Spring Cloud family dedicated to being an API gateway in microservices-based architectures – Spring Cloud Gateway.
The last piece of that puzzle is Eureka – a discovery server. It is still being developed, but the situation here is also interesting. I will describe it in the next section of this article.
All these news items have inspired me to take a look at the current situation of Spring Cloud and to discuss some potential changes in the future. As the author of the Mastering Spring Cloud book I’m trying to follow the evolution of that project to stay current. It’s also worth mentioning that we have microservices inside my organization – of course built on top of Spring Boot and Spring Cloud, using modules like Eureka, Zuul and Ribbon. In this article, I would like to discuss some potential … for such popular microservices patterns as service discovery, distributed configuration, client-side load balancing and API gateway.

Service Discovery

Eureka is the only important Spring Cloud Netflix module that has not been moved to maintenance mode. However, I would not say that it is actively developed. The last commit in the repository maintained by Netflix is from 11th January. Some time ago they started working on Eureka 2, but it seems this work has been abandoned, or they have just postponed open sourcing the newest version’s code. Here https://github.com/Netflix/eureka/tree/2.x you can find an interesting comment about it: “The 2.x branch is currently frozen as we have had some internal changes w.r.t. to eureka2, and do not have any time lines for open sourcing of the new changes.”. So, we have two possibilities. Maybe Netflix will decide to open source those internal changes as version 2 of the Eureka server. It is worth remembering that Eureka is a battle-proven solution used at scale by Netflix directly, and probably by many other organizations through Spring Cloud.
The second option is to choose another discovery server. Currently, Spring Cloud supports discovery based on various tools: ZooKeeper, Consul, Alibaba Nacos, Kubernetes. In fact, Kubernetes is based on etcd. Support for etcd is also being developed by Spring Cloud, but it is still in the incubation stage, and it is not known if it will ever be promoted to the official release train. In my opinion, there is one leader amongst these solutions – HashiCorp’s Consul.
Consul is now described as a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality. It can be used as a discovery server or a key/value store in your microservices-based architecture. The integration with Consul is implemented by the Spring Cloud Consul project. To enable the Consul client for your application you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

By default, Spring tries to connect with Consul on the address localhost:8500. If you need to override this address you should set the appropriate properties inside application.yml:

spring:  
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

You can easily test this solution with a local instance of Consul started as a Docker container:

$ docker run -d --name consul -p 8500:8500 consul

As you can see, Consul discovery implementation with Spring Cloud is very easy – the same as for Eureka. Consul has one undoubted advantage over Eureka – it is continuously maintained and developed by HashiCorp. Its popularity is growing fast. It is a part of the bigger HashiCorp ecosystem, which includes Vault, Nomad and Terraform. In contrast to Eureka, Consul can be used not only for service discovery, but also as a configuration server in your microservices-based architecture.
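
To quickly verify that services register themselves correctly, you can also use the framework-agnostic DiscoveryClient abstraction. Here’s a small, illustrative sketch – the controller and its path are my own example, not part of the sample system:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class DiscoveryController {

    @Autowired
    private DiscoveryClient discoveryClient;

    // returns all instances of the given service registered in Consul
    @GetMapping("/instances/{name}")
    public List<ServiceInstance> instances(@PathVariable("name") String name) {
        return discoveryClient.getInstances(name);
    }

}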

Distributed Configuration

Netflix Archaius is an interesting solution for managing externalized configuration in a microservices architecture. Although it offers some interesting features like dynamic and typed properties or support for dynamic data sources such as URLs, JDBC or AWS DynamoDB, Spring Cloud has also decided to move it to maintenance mode. However, the popularity of Spring Cloud Archaius was limited, due to the existence of a similar project created fully by the Pivotal team and the community – Spring Cloud Config. Spring Cloud Config supports multiple source repositories including Git, JDBC, Vault or simple files. You can find many examples of using this project for providing distributed configuration for your microservices in my previous posts. Today, I’m not going to talk about it. We will discuss an alternative solution – also supported by Spring Cloud.
As I mentioned at the end of the previous section, Consul can also be used as a configuration server. If you use Eureka as a discovery server, using Spring Cloud Config as a configuration server is a natural choice, because Eureka simply does not provide such features. This is not the case if you decide to use Consul. Then it makes sense to choose between two solutions: Spring Cloud Consul Config and Spring Cloud Config. Of course, both of them have their advantages and disadvantages. For example, you can easily build a cluster with Consul nodes, while with Spring Cloud Config you must rely on external discovery.
Now, let’s see how to use Spring Cloud Consul for managing external configuration in your application. To enable it on the application side you just need to include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-consul-config</artifactId>
</dependency>

The same as for service discovery, if you would like to override some default client settings you need to set the spring.cloud.consul.* properties. However, such configuration must be provided inside bootstrap.yml.

spring:  
  application:
    name: callme-service
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500

The name of the property source created on Consul should be the same as the application name provided in bootstrap.yml, inside the config folder. You should create the key server.port with value 0 to force Spring Boot to generate the listening port number randomly. Supposing you need to set a default listening port for the application, you should apply the following configuration.

spring-cloud-1

When enabling dynamic port number generation you also need to override the application instance id to be unique across a single machine. This feature is required if you are running multiple instances of a single service on the same machine. We will do it for callme-service, so we need to override the property spring.cloud.consul.discovery.instance-id with our own value as shown below.

spring-cloud-4

Then, you should see the following log on your application startup.

spring-cloud-3

API Gateway

The successor of Spring Cloud Netflix Zuul is Spring Cloud Gateway. This project was started around two years ago, and is now the second most popular Spring Cloud project with 1.4k stars on GitHub. It provides an API gateway built on top of the Spring ecosystem, including Spring 5, Spring Boot 2 and Project Reactor. It runs on Netty, and does not work with traditional servlet containers like Tomcat or Jetty. It allows you to define routes, predicates and filters.
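
Routes can be defined not only in configuration files, but also programmatically. Here’s a small, illustrative Java sketch of a route equivalent to the YAML-based definition shown later in this article:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // forward /callme/** to callme-service instances found in discovery,
                // rewriting the path to strip the /callme prefix
                .route("callme-service", r -> r.path("/callme/**")
                        .filters(f -> f.rewritePath("/callme/(?<path>.*)", "/${path}"))
                        .uri("lb://callme-service"))
                .build();
    }

}
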
An API gateway, the same as every Spring Cloud microservice, may be easily integrated with service discovery based on Consul. We just need to include the appropriate dependencies inside pom.xml. We will use the latest development version of the Spring Cloud libraries – 2.2.0.BUILD-SNAPSHOT. Here’s the list of required dependencies:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-config</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-gateway</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

The gateway configuration will also be served by Consul. Because we have considerably more configuration settings than for the sample microservices, we will store them as a YAML file. To achieve that we should create a YAML file available under the path /config/gateway-service/data in the Consul key/value store. The configuration visible below enables service discovery integration and defines routes to the downstream services. Each route contains the name of the target service under which it is registered in service discovery, a matching path, and a rewrite path used for calling the endpoint exposed by the downstream service. The following configuration is loaded on startup by our API gateway:

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: caller-service
          uri: lb://caller-service
          predicates:
            - Path=/caller/**
          filters:
            - RewritePath=/caller/(?<path>.*), /$\{path}
        - id: callme-service
          uri: lb://callme-service
          predicates:
            - Path=/callme/**
          filters:
            - RewritePath=/callme/(?<path>.*), /$\{path}

Here’s the same configuration visible on Consul.

spring-cloud-2

The last step is to force gateway-service to read the configuration stored as YAML. To do that, we need to set the property spring.cloud.consul.config.format to YAML. Here’s the full configuration provided inside bootstrap.yml.

spring:
  application:
    name: gateway-service
  cloud:
    consul:
      host: 192.168.99.100
      config:
        format: YAML

Client-side Load Balancer

In version 2.2.0.BUILD-SNAPSHOT of Spring Cloud Commons, Ribbon is still the main auto-configured load balancer for HTTP clients. Although the Spring Cloud team has announced that Spring Cloud Load Balancer will be the successor of Ribbon, we currently won’t find much information about that project in the documentation or on the web. We may expect that, the same as for Netflix Ribbon, any configuration will be transparent for us, especially if we use a discovery client. Currently, the spring-cloud-loadbalancer module is a part of the Spring Cloud Commons project. You may include it directly in your application by declaring the following dependency in pom.xml:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-loadbalancer</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
</dependency>

For test purposes it is worth excluding some Netflix modules included together with the spring-cloud-starter-consul-discovery starter. Now we can be sure that Ribbon is not used in the background as a load balancer. Here’s the list of exclusions I set for my sample application:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
	<version>2.2.0.BUILD-SNAPSHOT</version>
	<exclusions>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-netflix-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-netflix-archaius</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-core</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-httpclient</artifactId>
		</exclusion>
		<exclusion>
			<groupId>com.netflix.ribbon</groupId>
			<artifactId>ribbon-loadbalancer</artifactId>
		</exclusion>
	</exclusions>
</dependency>

Treat my example just as a playground. Certainly the target approach is going to be much easier. First, we should annotate our main or configuration class with @LoadBalancerClient. As always, the name of the client should be the same as the name of the target service registered in the registry. The annotation should also contain the class with the client configuration.

@SpringBootApplication
@LoadBalancerClients({
	@LoadBalancerClient(name = "callme-service", configuration = ClientConfiguration.class)
})
public class CallerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallerApplication.class, args);
	}

	@Bean
	RestTemplate template() {
		return new RestTemplate();
	}

}

Here’s our load balancer configuration class. It contains the declaration of a single @Bean. I have chosen the RoundRobinLoadBalancer type.

public class ClientConfiguration {

	@Bean
	public RoundRobinLoadBalancer roundRobinContextLoadBalancer(LoadBalancerClientFactory clientFactory, Environment env) {
		String serviceId = clientFactory.getName(env);
		return new RoundRobinLoadBalancer(serviceId, clientFactory
				.getLazyProvider(serviceId, ServiceInstanceSupplier.class), -1);
	}

}

Finally, here’s the implementation of the caller-service controller. It uses LoadBalancerClientFactory directly to find the list of available instances of callme-service. Then it selects a single instance, gets its host and port, and uses them as the target URL.

@RestController
@RequestMapping("/caller")
public class CallerController {

	@Autowired
	Environment environment;
	@Autowired
	RestTemplate template;
	@Autowired
	LoadBalancerClientFactory clientFactory;

	@GetMapping
	public String call() {
		RoundRobinLoadBalancer lb = clientFactory.getInstance("callme-service", RoundRobinLoadBalancer.class);
		ServiceInstance instance = lb.choose().block().getServer();
		String url = "http://" + instance.getHost() + ":" + instance.getPort() + "/callme";
		String callmeResponse = template.getForObject(url, String.class);
		return "I'm Caller running on port " + environment.getProperty("local.server.port")
				+ " calling-> " + callmeResponse;
	}

}

Summary

The following picture illustrates the architecture of the sample system. We have two instances of callme-service and a single instance of caller-service, which uses Spring Cloud Load Balancer to find the list of available instances of callme-service. The ports are generated dynamically. The API gateway hides the complexity of our system from external clients. It is available on port 8080, and forwards requests to the downstream services based on the request context path.

spring-cloud-1.png

After starting all the microservices, you should see them registered on your Consul node.

spring-cloud-7

Now, you can try the endpoint exposed by caller-service through the gateway: http://localhost:8080/caller. You should see something like this:

spring-cloud-6

The sample application source code is available on GitHub in repository https://github.com/piomin/sample-spring-cloud-microservices-future.git.