Part 2: Microservices security with OAuth2

I have written about security with OAuth2 in several articles before. This article is a continuation of the samples described in those earlier posts.

Today I’m going to show you a more advanced sample than before, where all authentication and OAuth2 data is stored in a database. We will also find out how to secure microservices, especially the inter-communication between them with a Feign client. I hope this article provides some guidance and helps you design and implement secure solutions with Spring Cloud. Let’s begin.

There are four services running inside our sample system, as visualized in the figure below. There is nothing unusual here. We have a discovery server where our sample microservices account-service and customer-service are registered. Both microservices are protected with OAuth2 authorization, which is managed by auth-server. It stores not only OAuth2 tokens, but also users’ authentication data. The whole process is implemented using the Spring Security and Spring Cloud libraries.

oauth2-1

1. Start the database

All authentication credentials and tokens are stored in a MySQL database, so the first step is to start MySQL. The most convenient way to do that is with a Docker container. The command below, in addition to starting the database, also creates the oauth2 schema and user.

docker run -d --name mysql -e MYSQL_DATABASE=oauth2 -e MYSQL_USER=oauth2 -e MYSQL_PASSWORD=oauth2 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -p 33306:3306 mysql

2. Configure the data source in the application

MySQL is now available on host 192.168.99.100 (the default Docker Machine address if you run Docker on Windows) and port 33306. The datasource properties should be set in application.yml of auth-server. Spring Boot is also able to run SQL scripts against the selected datasource on application startup. That is good news for us, because we have to create some tables in the schema dedicated to the OAuth2 process.

spring:
  application:
    name: auth-server
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/oauth2?useSSL=false
    username: oauth2
    password: oauth2
    driver-class-name: com.mysql.jdbc.Driver
    schema: classpath:/script/schema.sql
    data: classpath:/script/data.sql

3. Create schema in MySQL

Despite appearances, it is not so simple to find the SQL script with the tables that need to be created when using Spring Security for OAuth2. Here’s that script, which is available under /src/main/resources/script/schema.sql in the auth-server module. We have to create six tables:

  • oauth_client_details
  • oauth_client_token
  • oauth_access_token
  • oauth_refresh_token
  • oauth_code
  • oauth_approvals
drop table if exists oauth_client_details;
create table oauth_client_details (
  client_id VARCHAR(255) PRIMARY KEY,
  resource_ids VARCHAR(255),
  client_secret VARCHAR(255),
  scope VARCHAR(255),
  authorized_grant_types VARCHAR(255),
  web_server_redirect_uri VARCHAR(255),
  authorities VARCHAR(255),
  access_token_validity INTEGER,
  refresh_token_validity INTEGER,
  additional_information VARCHAR(4096),
  autoapprove VARCHAR(255)
);
drop table if exists oauth_client_token;
create table oauth_client_token (
  token_id VARCHAR(255),
  token LONG VARBINARY,
  authentication_id VARCHAR(255) PRIMARY KEY,
  user_name VARCHAR(255),
  client_id VARCHAR(255)
);

drop table if exists oauth_access_token;
CREATE TABLE oauth_access_token (
  token_id VARCHAR(256) DEFAULT NULL,
  token BLOB,
  authentication_id VARCHAR(256) DEFAULT NULL,
  user_name VARCHAR(256) DEFAULT NULL,
  client_id VARCHAR(256) DEFAULT NULL,
  authentication BLOB,
  refresh_token VARCHAR(256) DEFAULT NULL
);

drop table if exists oauth_refresh_token;
CREATE TABLE oauth_refresh_token (
  token_id VARCHAR(256) DEFAULT NULL,
  token BLOB,
  authentication BLOB
);

drop table if exists oauth_code;
create table oauth_code (
  code VARCHAR(255), authentication LONG VARBINARY
);
drop table if exists oauth_approvals;
create table oauth_approvals (
    userId VARCHAR(255),
    clientId VARCHAR(255),
    scope VARCHAR(255),
    status VARCHAR(10),
    expiresAt DATETIME,
    lastModifiedAt DATETIME
);

4. Add some test data to the database

There is also a second SQL script, /src/main/resources/script/data.sql, with some insert commands for testing purposes. The most important thing is to add some client id/client secret pairs.

INSERT INTO `oauth_client_details` (`client_id`, `client_secret`, `scope`, `authorized_grant_types`, `access_token_validity`, `additional_information`) VALUES ('account-service', 'secret', 'read', 'authorization_code,password,refresh_token,implicit', '900', '{}');
INSERT INTO `oauth_client_details` (`client_id`, `client_secret`, `scope`, `authorized_grant_types`, `access_token_validity`, `additional_information`) VALUES ('customer-service', 'secret', 'read', 'authorization_code,password,refresh_token,implicit', '900', '{}');
INSERT INTO `oauth_client_details` (`client_id`, `client_secret`, `scope`, `authorized_grant_types`, `access_token_validity`, `additional_information`) VALUES ('customer-service-write', 'secret', 'write', 'authorization_code,password,refresh_token,implicit', '900', '{}');

5. Building the Authorization Server

Now, the most important thing in this article – the authorization server configuration. The configuration class should be annotated with @EnableAuthorizationServer. Then we need to override some methods of the extended AuthorizationServerConfigurerAdapter class. The first important thing here is to set the default token storage to a database by providing a JdbcTokenStore bean with the default data source as a parameter. Although all tokens are now stored in a database, we still want to generate them in JWT format. That’s why a second bean, JwtAccessTokenConverter, has to be provided in that class. By overriding the different configure methods inherited from the base class we can set a default storage for OAuth2 client details and open the token verification endpoint (/oauth/check_token) to all callers with checkTokenAccess("permitAll()").

@Configuration
@EnableAuthorizationServer
public class OAuth2Config extends AuthorizationServerConfigurerAdapter {

   @Autowired
   private DataSource dataSource;
   @Autowired
   private AuthenticationManager authenticationManager;

   @Override
   public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
	  endpoints.authenticationManager(this.authenticationManager).tokenStore(tokenStore())
		   .accessTokenConverter(accessTokenConverter());
   }

   @Override
   public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
	  oauthServer.checkTokenAccess("permitAll()");
   }

   @Bean
   public JwtAccessTokenConverter accessTokenConverter() {
	  return new JwtAccessTokenConverter();
   }

   @Override
   public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
	  clients.jdbc(dataSource);
   }

   @Bean
   public JdbcTokenStore tokenStore() {
	  return new JdbcTokenStore(dataSource);
   }

}
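The AuthenticationManager injected into the class above has to be exposed as a bean elsewhere in the application. That class is not shown in the original listing, but a typical way of wiring it together with the custom UserDetailsService described below is a WebSecurityConfigurerAdapter along these lines (a sketch, not the exact class from the repository):

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

	@Autowired
	private UserDetailsService userDetailsService;

	// expose the AuthenticationManager so it can be injected into OAuth2Config
	@Override
	@Bean
	public AuthenticationManager authenticationManagerBean() throws Exception {
		return super.authenticationManagerBean();
	}

	// authenticate users against the custom, database-backed UserDetailsService
	@Override
	protected void configure(AuthenticationManagerBuilder auth) throws Exception {
		auth.userDetailsService(userDetailsService);
	}

}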

The main OAuth2 grant type used in the current sample is the resource owner password credentials grant. In that grant type the client application sends the user’s login and password to authenticate against the OAuth2 server. A POST request sent by the client contains the following parameters:

  • grant_type – with the value ‘password’
  • client_id – with the client’s ID
  • client_secret – with the client’s secret
  • scope – with a space-delimited list of requested scope permissions
  • username – with the user’s username
  • password – with the user’s password

The authorization server will respond with a JSON object containing the following parameters:

  • token_type – with the value ‘Bearer’
  • expires_in – with an integer representing the TTL of the access token
  • access_token – the access token itself
  • refresh_token – a refresh token that can be used to acquire a new access token when the original expires
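For example, assuming the auth-server runs on its default port 9999 and using the customer-service client from data.sql together with the piomin test user referenced later in this article, such a request can be sent with curl (the client authenticates with HTTP Basic):

curl -X POST -u customer-service:secret "http://localhost:9999/oauth/token" -d "grant_type=password&username=piomin&password=piot123&scope=read"

A truncated, purely illustrative response looks like this:

{"access_token":"eyJhbGciOi...","token_type":"bearer","refresh_token":"eyJhbGciOi...","expires_in":899,"scope":"read"}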

A Spring application provides a custom authentication mechanism by implementing the UserDetailsService interface and overriding its loadUserByUsername method. In our sample application user credentials and authorities are also stored in the database, so we inject the UserRepository bean into the custom UserDetailsService class.

@Component("userDetailsService")
public class UserDetailsServiceImpl implements UserDetailsService {

    private final Logger log = LoggerFactory.getLogger(UserDetailsServiceImpl.class);

    @Autowired
    private UserRepository userRepository;

    @Override
    @Transactional
    public UserDetails loadUserByUsername(final String login) {

        log.debug("Authenticating {}", login);
        String lowercaseLogin = login.toLowerCase();

        User userFromDatabase;
        if(lowercaseLogin.contains("@")) {
            userFromDatabase = userRepository.findByEmail(lowercaseLogin);
        } else {
            userFromDatabase = userRepository.findByUsernameCaseInsensitive(lowercaseLogin);
        }

        if (userFromDatabase == null) {
            throw new UsernameNotFoundException("User " + lowercaseLogin + " was not found in the database");
        } else if (!userFromDatabase.isActivated()) {
            throw new UserNotActivatedException("User " + lowercaseLogin + " is not activated");
        }

        Collection<GrantedAuthority> grantedAuthorities = new ArrayList<>();
        for (Authority authority : userFromDatabase.getAuthorities()) {
            GrantedAuthority grantedAuthority = new SimpleGrantedAuthority(authority.getName());
            grantedAuthorities.add(grantedAuthority);
        }

        return new org.springframework.security.core.userdetails.User(userFromDatabase.getUsername(), userFromDatabase.getPassword(), grantedAuthorities);
    }

}

That’s practically all that needs to be said about the auth-server module. Let’s move on to the client microservices.

6. Building microservices

The REST API is very simple. It does nothing more than return some data. However, there is one interesting thing in this implementation: pre-authorization based on the OAuth2 token scope, which is declared on the API methods with @PreAuthorize("#oauth2.hasScope('read')").

@RestController
public class AccountController {

   @GetMapping("/{id}")
   @PreAuthorize("#oauth2.hasScope('read')")
   public Account findAccount(@PathVariable("id") Integer id) {
	  return new Account(id, 1, "123456789", 1234);
   }

   @GetMapping("/")
   @PreAuthorize("#oauth2.hasScope('read')")
   public List<Account> findAccounts() {
	  return Arrays.asList(new Account(1, 1, "123456789", 1234), new Account(2, 1, "123456780", 2500),
		new Account(3, 1, "123456781", 10000));
   }

}

Pre-authorization is disabled by default. To enable it for API methods we should use the @EnableGlobalMethodSecurity annotation. We should also declare that such pre-authorization is based on the OAuth2 token scope, which is done by providing an OAuth2MethodSecurityExpressionHandler.

@Configuration
@EnableResourceServer
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class OAuth2ResourceServerConfig extends GlobalMethodSecurityConfiguration {

    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        return new OAuth2MethodSecurityExpressionHandler();
    }

}
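One more thing is needed on the resource server side: the service has to know how to verify incoming access tokens. Since the authorization server opens its check_token endpoint with checkTokenAccess("permitAll()"), one simple option – a sketch assuming the Spring Boot 1.x security.oauth2.* properties and the auth-server running on port 9999 – is to point the service at that endpoint in application.yml:

security:
  oauth2:
    resource:
      token-info-uri: http://localhost:9999/oauth/check_token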

7. Feign client with OAuth2

The API method findAccounts implemented in AccountController is invoked by customer-service through a Feign client.

@FeignClient(name = "account-service", configuration = AccountClientConfiguration.class)
public interface AccountClient {

   @GetMapping("/")
   List<Account> findAccounts();

}

If you call the account service endpoint via the Feign client, you get the following exception.

feign.FeignException: status 401 reading AccountClient#findAccounts(); content:{"error":"unauthorized","error_description":"Full authentication is required to access this resource"}

Why? Of course, account-service is protected with OAuth2 token authorization, but the Feign client does not send an authorization token in the request header. That behavior may be customized by defining a custom configuration class for the Feign client, which allows us to declare a request interceptor. In this case we can use the OAuth2 implementation provided by OAuth2FeignRequestInterceptor from the Spring Cloud OAuth2 library. We use the password grant here, so the interceptor is configured with ResourceOwnerPasswordResourceDetails.

public class AccountClientConfiguration {

   @Value("${security.oauth2.client.access-token-uri}")
   private String accessTokenUri;
   @Value("${security.oauth2.client.client-id}")
   private String clientId;
   @Value("${security.oauth2.client.client-secret}")
   private String clientSecret;
   @Value("${security.oauth2.client.scope}")
   private String scope;

   @Bean
   RequestInterceptor oauth2FeignRequestInterceptor() {
	  return new OAuth2FeignRequestInterceptor(new DefaultOAuth2ClientContext(), resource());
   }

   @Bean
   Logger.Level feignLoggerLevel() {
	  return Logger.Level.FULL;
   }

   private OAuth2ProtectedResourceDetails resource() {
	  ResourceOwnerPasswordResourceDetails resourceDetails = new ResourceOwnerPasswordResourceDetails();
	  resourceDetails.setUsername("piomin");
	  resourceDetails.setPassword("piot123");
	  resourceDetails.setAccessTokenUri(accessTokenUri);
	  resourceDetails.setClientId(clientId);
	  resourceDetails.setClientSecret(clientSecret);
	  resourceDetails.setGrantType("password");
	  resourceDetails.setScope(Arrays.asList(scope));
	  return resourceDetails;
   }

}
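The security.oauth2.client.* properties referenced by the @Value annotations above have to be provided in the customer-service configuration. A minimal fragment of application.yml, assuming the auth-server address used elsewhere in this article and the client defined in data.sql, could look like this:

security:
  oauth2:
    client:
      access-token-uri: http://localhost:9999/oauth/token
      client-id: customer-service
      client-secret: secret
      scope: read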

8. Testing

Finally, we may perform some tests. Let’s build the sample project using the mvn clean install command and run all the services with their default settings.

The test method is visible below. We use OAuth2RestTemplate with ResourceOwnerPasswordResourceDetails to perform the resource owner password credentials grant and call the GET /{id} API method of customer-service with the OAuth2 token sent in the request header.

	@Test
	public void testClient() {
        ResourceOwnerPasswordResourceDetails resourceDetails = new ResourceOwnerPasswordResourceDetails();
        resourceDetails.setUsername("piomin");
        resourceDetails.setPassword("piot123");
        resourceDetails.setAccessTokenUri("http://localhost:9999/oauth/token");
        resourceDetails.setClientId("customer-service");
        resourceDetails.setClientSecret("secret");
        resourceDetails.setGrantType("password");
        resourceDetails.setScope(Arrays.asList("read"));
        DefaultOAuth2ClientContext clientContext = new DefaultOAuth2ClientContext();
        OAuth2RestTemplate restTemplate = new OAuth2RestTemplate(resourceDetails, clientContext);
        restTemplate.setMessageConverters(Arrays.asList(new MappingJackson2HttpMessageConverter()));
        final Customer customer = restTemplate.getForObject("http://localhost:8083/{id}", Customer.class, 1);
        System.out.println(customer);
	}


Spring Cloud Apps Memory Management

Today’s topic is memory management in Java as well as microservices architecture. The inspiration for this post was a situation in which the available memory on our test environment for Spring Cloud based applications was exhausted. Without going into details about the cause, the difference in memory consumption between a monolith based architecture and microservices is obvious. For example, for quite a large monolithic application, 1 GB or 2 GB of RAM is often enough, especially in a non-production environment. If we divide this application into 20 or 30 independent microservices, it is hard to expect that the total RAM usage will stay around 1 GB or 2 GB. Especially if we use Spring Cloud 🙂

When running the sample microservices I will use an earlier example prepared for one of my previous articles, which is available on GitHub. I’m going to launch three microservices: one for service discovery, which uses the Netflix Eureka server, and two simple microservices which provide a REST API, communicate with each other and register themselves in the discovery server. I will not limit the memory usage of those applications in any way.

As you can see in the figure below, those three running microservices occupied about 1.5 GB of RAM on my computer. That is not the best news, considering that we are dealing with very simple applications which do not even have a data persistence layer. The lowest RAM usage is for the discovery service and the biggest for the customer service, which initializes a declarative Feign client for invoking the account service API. Before taking that screenshot I sent some test requests to every microservice and opened the Eureka web console.

micro-ram-1

The charts below, made using JProfiler, show a lot about the memory usage. As we can see, most of the memory is taken by the heap; in comparison, the non-heap does not take up much space.

micro-ram-2

micro-ram-3

Of course, the first obvious question is whether we need that much heap space to run our microservice application. The answer is no, we do not. Now, let’s take a brief look at how memory management works in Java 8.

We can divide JVM memory into two different parts: heap and non-heap. I have already mentioned the heap a little. As you could see in the graphs above, the committed heap size for our microservices was really big (~600 MB). The heap, in turn, consists of the Young Generation and the Old Generation. All newly created objects are located in the Young Generation, or more precisely in the part of it called Eden Space. When it fills up, a garbage collection (Minor GC) is performed: it moves all still-referenced objects from Eden Space into Survivor 0, and the same process is applied to the Survivor 0 and Survivor 1 spaces. Objects that have survived many GC cycles are moved to the Old Generation memory space, from which they are removed by the Major GC process. That is the most important information about the Java heap; the figure below should make it easier to understand. The memory limits for the Java heap can be set with the following parameters of the java -jar command:

  • -Xms – initial heap size when JVM starts
  • -Xmx – maximum heap size
  • -Xmn – size of the Young Generation; the rest of the heap goes to the Old Generation
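For example, assuming the discovery service is packaged as discovery-service.jar (the name is illustrative), starting it with a 32 MB heap cap and an 8 MB young generation looks like this:

java -Xms16m -Xmx32m -Xmn8m -jar discovery-service.jar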

jvm memory

The second part of JVM memory – looking at the graphs above, slightly less important from our point of view – is the non-heap. It consists of the following parts:

  • Thread Stacks – space for all running threads. The maximum stack size per thread can be set with the -Xss parameter.
  • Metaspace – it replaced PermGen, which in Java 7 was part of the JVM heap. Metaspace holds all classes and methods loaded by the application. Looking at the number of libraries included with Spring Cloud, we won’t save much memory here. The Metaspace size can be managed by setting the -XX:MetaspaceSize and -XX:MaxMetaspaceSize parameters.
  • Code Cache – this is the space for native code (like JNI) and for Java methods compiled into native code by the JIT (just-in-time) compiler. The maximum size is determined by the -XX:ReservedCodeCacheSize parameter.
  • Compressed Class Space – the maximum memory reserved for the compressed class space is set with -XX:CompressedClassSpaceSize
  • Direct NIO Buffers

To put it simply, the heap is for objects and the non-heap is for classes. As you can imagine, we can end up in a situation where the non-heap is larger than the heap for our application. First, let’s run our service discovery with the parameters below. In my opinion these are the lowest workable values if you are starting Eureka with embedded Tomcat on Spring Boot.

-Xms16m -Xmx32m -XX:MaxMetaspaceSize=48m -XX:CompressedClassSpaceSize=8m -Xss256k -Xmn8m -XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=8m -XX:MaxDirectMemorySize=16m

If we are running a microservice with a REST API and Eureka, Feign and Ribbon clients, we need to increase the values a little.

-Xms16m -Xmx48m -XX:MaxMetaspaceSize=64m -XX:CompressedClassSpaceSize=8m -Xss256k -Xmn8m -XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=8m -XX:MaxDirectMemorySize=16m

Here are the JProfiler charts for the settings above, taken for the customer service. The difference is in startup and request processing time: the application works more slowly than with the earlier settings (or rather the lack of them :)). I wouldn’t use such parameters in production mode; treat them rather as minimum requirements for service discovery and microservice apps.

micro-ram-5

micro-ram-6

The current total memory usage is as follows. It is still the biggest for the customer service and the lowest for the discovery service.

micro-ram-4

I have also tried to run the discovery application using different web containers. You can easily change the web container by including the dependencies visible below in your pom.xml file; note the Tomcat exclusion shown after these snippets.

For Jetty.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-jetty</artifactId>
</dependency>

For Undertow.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-undertow</artifactId>
</dependency>
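Note that in both cases the default Tomcat container should also be excluded, otherwise it stays on the classpath. Assuming the services use spring-boot-starter-web, the exclusion looks like this:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
	<exclusions>
		<exclusion>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-tomcat</artifactId>
		</exclusion>
	</exclusions>
</dependency>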

The best result was for Undertow (116 MB), second place for Tomcat (122 MB) and third for Jetty (128 MB). These tests were performed only for the Eureka server, without registering any microservices in it.

Microservices Configuration With Spring Cloud Config

Preface

Although every microservice instance is an independent unit, we usually manage them from one central location. We are talking about watching application logs (Kibana), metrics and statistics (Zipkin, Grafana), instance monitoring and configuration management. In this article I’m going to say a little more about configuration management with the Spring Cloud Config framework.

Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments.

The concept of using a configuration server inside a microservices architecture is visualized in the figure below. The configuration is stored in a version control system (in most cases Git) as YAML or properties files. Spring Cloud Config Server pulls the configuration from the VCS and exposes it as RESTful endpoints. The configuration server registers itself with a discovery service, and every microservice application connects to the discovery service to find the configuration server’s address by name. It then invokes a REST endpoint to download the newest configuration settings on startup.

config-server

Sample application

The sample application source code is available on GitHub. For the purpose of this example, I also created a separate repository for storing the configuration files. Let’s begin with the configuration server. To enable the configuration server and its registration in the discovery service we have to add the following dependencies to pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>

In the application main class we should add the following annotations.

@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServer {

	public static void main(String[] args) {
		SpringApplication.run(ConfigServer.class, args);
	}

}

The last thing to do is to define the configuration in application.yml. I set the default port, the application name (for discovery) and the Git repository address and credentials. Spring Cloud Config Server by default clones the remote Git repository, and if the local copy gets dirty it cannot update it from the remote repository. To solve this problem I set the force-pull property, which forces Spring Cloud Config Server to pull from the remote repository every time a new request comes in.

server:
  port: ${PORT:9999}

spring:
  application:
    name: config-server
  cloud:
    config:
      server:
        git:
          uri: https://github.com/piomin/sample-config-repo.git
          force-pull: true
          username: ${github.username}
          password: ${github.password}

That’s everything that had to be done on the server side. If you run your Spring Boot application it should be visible in the discovery service as config-server. To enable interaction with the config server on the client side we should add one dependency in pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-config</artifactId>
</dependency>

In theory, this basic configuration should not be defined in the application.yml file but in bootstrap.yml. Why do we need anything there at all? At the very least the application has to know the discovery server address to be able to invoke the configuration server. In addition, we can override the default parameters for fetching the configuration, such as the config server discovery name (the default is configserver), the configuration name, profile and label. By default a microservice tries to fetch the configuration with a name equal to ${spring.application.name}, the label ‘master’ and the profiles read from the ${spring.profiles.active} property.

spring:
  application:
    name: account-service
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: config-server
      name: account
      profile: development
      label: develop

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2

The further part of the application configuration is located in the account-development.yml file inside the dedicated repository. The application tries to find this file in the ‘develop’ branch. Such a file is cloned by the configuration server and exposed under all the following REST endpoints:

  • /{application}/{profile}[/{label}]
  • /{application}-{profile}.yml
  • /{label}/{application}-{profile}.yml
  • /{application}-{profile}.properties
  • /{label}/{application}-{profile}.properties

If you open our example configuration in a web browser under the first endpoint, http://localhost:9999/account/development/develop, you should see the full configuration in JSON format, with the properties available inside propertySources. Let me say a few words about the account-service configuration. Here’s the YAML file where I set the server port, the Mongo database connection settings, the Ribbon client configuration and application-specific settings – the list of test accounts.

server:
  port: ${PORT:2222}

spring:
  data:
    mongodb:
      host: 192.168.99.100
      port: 27017
      username: micro
      password: micro

ribbon:
  eureka:
    enabled: true

test:
  accounts:
    - id: 1
      number: '0654321789'
      balance: 2500
      customerId: 1
    - id: 2
      number: '0654321780'
      balance: 0
      customerId: 1
    - id: 3
      number: '0650981789'
      balance: 12000
      customerId: 2

Before running the application you should start the Mongo database.

docker run -d --name mongo -p 27017:27017 mongo

All the find endpoints can be switched between the MongoDB repository and the test accounts repository read from the remote configuration by passing the parameter ‘true’ at the end of each REST path. The test data is read from the configuration file and stored under the ‘test’ key.

@Repository
@ConfigurationProperties(prefix = "test")
public class TestAccountRepository {

	private List<Account> accounts;

	public List<Account> getAccounts() {
		return accounts;
	}

	public void setAccounts(List<Account> accounts) {
		this.accounts = accounts;
	}

	public Account findByNumber(String number) {
		return accounts.stream().filter(it -> it.getNumber().equals(number)).findFirst().get();
	}

}

Dynamic configuration reload

OK, so now our application configuration is loaded from the server on startup. But let’s imagine we need to reload it dynamically, without an application restart. That is also possible with Spring Cloud Config. To enable this feature we need to add a dependency on the spring-cloud-config-monitor library and activate the Spring Cloud Bus. In the presented sample I used the RabbitMQ AMQP message broker as the cloud bus provider.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-monitor</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-bus-amqp</artifactId>
</dependency>

To enable the monitor for the configuration server, set the following property in the application.yml file.

spring:
  application:
    name: config-server
  cloud:
    config:
      server:
        monitor:
          github:
            enabled: true

Now we have a /monitor endpoint available on the config server. The spring-cloud-starter-bus-amqp library should also be added on the client side. The monitor endpoint can be invoked by a webhook configured in a Git repository manager like GitHub, Bitbucket or GitLab. We can also easily simulate such a webhook by calling POST /monitor manually. For example, a GitHub event should have the header X-Github-Event: push and a JSON body with change information like {"commits": [{"modified": ["account-service.yml"]}]}.
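Such a simulated webhook can be sent with a simple curl command, assuming the config server runs on port 9999 as configured earlier:

curl -X POST "http://localhost:9999/monitor" -H "Content-Type: application/json" -H "X-Github-Event: push" -d '{"commits": [{"modified": ["account-service.yml"]}]}'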

As I mentioned before, the sample uses a RabbitMQ server. It can be launched using its Docker image.

docker run -d --name rabbit -p 30000:5672 -p 30001:15672 rabbitmq:management

To override the Spring auto-configuration for RabbitMQ, put the following lines in your configuration on both the client and server side.

spring:
  rabbitmq:
    host: 192.168.99.100
    port: 30000
    username: guest
    password: guest

I also had to modify the client service configuration a little to make it work with push notifications. Now it looks as you can see below. When I overrode the default application name using the spring.cloud.config.* properties, the RefreshRemoteApplicationEvent event was not received by the account service.

spring:
  application:
    name: account-service
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: config-server
      profile: default

To enable dynamic configuration refreshing, add the @RefreshScope annotation to a Spring bean, as in the sketch below. I enabled refresh on the client-side beans AccountController and TestAccountRepository. Finally, we can test our configuration.
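For illustration, only the annotation placement matters here – the rest of the controller stays as before:

@RefreshScope
@RestController
public class AccountController {
	// handler methods use the configuration values re-fetched on refresh
}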

1. I changed and committed one property inside account-service.yml, for example the balance for the test account with id=1.

2. Then I sent a POST request to the /monitor endpoint with the payload {"commits": [{"modified": ["account-service.yml"]}]}.

3. If the account service received the refresh event from the configuration server, you should see the following fragment in your logs:
Received remote refresh request. Keys refreshed [test.accounts[0].balance]

4. Now you can invoke the test endpoint for the modified account number; for me it was http://localhost:2222/accounts/0654321789/true.

Conclusion

With the Config Server you have a central place to manage configuration for applications across all environments. You can take advantage of the benefits offered by VCS systems, such as branching or versioning, or use the native support for local files. The configuration can be reloaded only at application startup or dynamically after each change committed to the VCS repository. Spring Cloud Config Server is available for discovery and can be autodetected by all microservices registered with a discovery server like Eureka. There are several alternative mechanisms for automatic configuration management for Spring Boot applications, like Spring Cloud Consul Config or Spring Cloud Zookeeper Config.

Monitoring Microservices With Spring Boot Admin

A few days ago I came across an article about the Spring Boot Admin framework. It is a simple solution for managing and monitoring Spring Boot applications, based on the endpoints exposed by Spring Boot Actuator. It is worth emphasizing that the application only allows monitoring and does not have capabilities like creating new instances or restarting them, so it is not a competitor to solutions like Pivotal Cloud Foundry. More about that solution can be read in my previous article Spring Cloud Microservices at Pivotal Platform. Despite this, Spring Boot Admin seems interesting enough to take a closer look at.

If you have to manage a system consisting of multiple microservices, you need to collect all the relevant information in one place. This applies to the logs, where we usually use the ELK stack (Elasticsearch + Logstash + Kibana), metrics (Zipkin) and details about the status of all application instances which are currently running. If you are interested in more details about ELK or Zipkin, I recommend my previous article Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin.

If you are already using Spring Cloud discovery, I’ve got good news for you. Although Spring Boot Admin was created by the codecentric company, it fully integrates with Spring Cloud, including the most popular service registration and discovery servers like Zookeeper, Consul and Eureka. It is easy to create your own admin server instance: just set up a Spring Boot application and add the @EnableAdminServer annotation to its main class.

@SpringBootApplication
@EnableDiscoveryClient
@EnableAdminServer
@EnableAutoConfiguration
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}

In the sample application, available as usual on GitHub, we enabled discovery from Eureka by adding the @EnableDiscoveryClient annotation. There is no need to register the admin service in Eureka, because we only need to collect information about all registered microservices. There is also a possibility to include Spring Boot Admin in your Eureka server instance, but then the admin context should be changed (with the spring.boot.admin.context-path property) to prevent a clash with the Eureka UI. Here’s the application.yml configuration file for the sample with an independent admin service.

eureka:
  client:
    registryFetchIntervalSeconds: 5
    registerWithEureka: false
    serviceUrl:
      defaultZone: ${DISCOVERY_URL:http://localhost:8761}/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 10

management:
  security:
    enabled: false

Here is the list of dependencies included in pom.xml.

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-eureka</artifactId>
	</dependency>
	<dependency>
		<groupId>de.codecentric</groupId>
		<artifactId>spring-boot-admin-server</artifactId>
		<version>1.5.1</version>
	</dependency>
	<dependency>
		<groupId>de.codecentric</groupId>
		<artifactId>spring-boot-admin-server-ui</artifactId>
		<version>1.5.1</version>
	</dependency>
</dependencies>

Now you only need to build and run the server with java -jar admin-service.jar. The UI dashboard is available at http://localhost:8080, as you can see in the figure below. Services are grouped by name, and there is information about how many instances of each microservice are running.

boot-admin-1

On the client side we have to add the two dependencies below. Spring Boot Actuator is required, as mentioned before; the Jolokia library is used for more advanced features like JMX MBeans and log level management.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>org.jolokia</groupId>
	<artifactId>jolokia-core</artifactId>
</dependency>

To display the information visible in the figure below, like the version and Git commit details for each application, we need to add two Maven plugins to pom.xml. The first of them generates a build-info.properties file with the most important application info. The second includes a git.properties file with all the information about the last commit. The results are available under the Spring Boot Actuator info endpoint.

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<configuration>
		<mainClass>pl.piomin.microservices.account.Application</mainClass>
		<addResources>true</addResources>
	</configuration>
	<executions>
		<execution>
			<goals>
				<goal>build-info</goal>
				<goal>repackage</goal>
			</goals>
			<configuration>
				<additionalProperties>
					<java.target>${maven.compiler.target}</java.target>
					<time>${maven.build.timestamp}</time>
				</additionalProperties>
			</configuration>
		</execution>
	</executions>
</plugin>
<plugin>
	<groupId>pl.project13.maven</groupId>
	<artifactId>git-commit-id-plugin</artifactId>
	<configuration>
		<failOnNoGitDirectory>false</failOnNoGitDirectory>
	</configuration>
</plugin>

I created two microservices in the sample application: account-service and customer-service. Run some instances of them on different ports with the command java -jar -DPORT=[port] [service-name].jar. The information visible in the Version and Info columns is taken from the build-info.properties and git.properties files.

boot-admin-2

Here’s the full list of parameters for account-service.

boot-admin-3-details

There are also some other interesting features offered by Spring Boot Admin. In the Trace section we can browse the HTTP request and response history with date, status and method information. It can be filtered by a path fragment.

boot-admin-1-trace

By adding the Jolokia dependency we are able to view and change the log level for every category in the Logging section.

boot-admin-5-logs

We can collect configuration details for every instance of a microservice.

boot-admin-7-env

In the Journal tab there is a list of status changes for all services monitored by Spring Boot Admin.

boot-admin-11-journal

Conclusion

Spring Boot Admin is an excellent tool for visualizing the endpoints exposed by Spring Boot Actuator, with health checks and application details. It integrates easily with Spring Cloud and can group all running instances of a microservice by the name taken from the Eureka (or some other registration and discovery server) registry. However, I miss the possibility of a remote application restart. I think it would be quite easy to implement using a tool such as Ansible together with the information displayed by the Spring Boot Actuator endpoints.

Spring Cloud Microservices at Pivotal Platform

Imagine you have multiple microservices running on different machines as multiple instances. It seems natural to think about tools that help you monitor and manage all of them. If we add that our microservices are created based on the Spring Cloud framework, it seems obvious that we should look at the Pivotal platform. Here is a figure with the platform’s architecture, taken from the main Pivotal site.

PVDI-Microservices-Architecture

Although the Pivotal Platform can run applications written in many languages, it has the best support for Spring Cloud Services and Netflix OSS tools, as you can see in the figure above. We can take advantage of Pivotal in three ways.

Pivotal Cloud Foundry – a solution that can be run on public IaaS or private clouds like AWS, Google Cloud Platform, Microsoft Azure, VMware vSphere or OpenStack.

Pivotal Web Services – a hosted, cloud-native platform available at the pivotal.io site.

PCF Dev – an instance which can be run locally as a single virtual machine. It offers the opportunity to develop apps in an offline environment with basic services installed, like Spring Cloud Services (SCS), MySQL and Redis databases and a RabbitMQ broker. If you want to run it locally with SCS you need more than 6 GB of free RAM.

The available Spring Cloud Services are Circuit Breaker (Hystrix), Service Registry (Eureka) and a standard Spring Cloud Config Server based on Git configuration.

scs

That’s all I wanted to say about the theory; let’s move on to practice. On the Pivotal website there are detailed materials on how to set it up and how to create and deploy a simple microservice based on Spring Cloud solutions. In this article I will try to present the essence collected from those descriptions, based on one of my standard examples from the previous posts. As always, the sample source code is available on GitHub. If you are interested in a detailed description of the sample application, microservices and Spring Cloud, read my previous articles:

Part 1: Creating microservice using Spring Cloud, Eureka and Zuul

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

If you have a lot of free RAM you can install PCF Dev on your local workstation. You need to have VirtualBox installed. Then download and install the Cloud Foundry Command Line Interface (CF CLI) and PCF Dev; the whole process is described on the Pivotal website. Finally, you can run the command below and take a small break for coffee, since the virtual machine needs to be downloaded and started.

cf dev start -s scs

For those who do not have enough RAM (like me) there is the Pivotal Web Services platform. Before using it you have to register on the Pivotal site. The rest of the article is identical for both options.

In comparison to the previous examples of Spring Cloud based microservices, we need to make some changes. There is one additional dependency inside every microservice’s pom.xml.

<properties>
	...
	<spring-cloud-services.version>1.4.1.RELEASE</spring-cloud-services.version>
	<spring-cloud.version>Dalston.RELEASE</spring-cloud.version>
</properties>

<dependencies>
	<dependency>
		<groupId>io.pivotal.spring.cloud</groupId>
		<artifactId>spring-cloud-services-starter-service-registry</artifactId>
	</dependency>
	...
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
		<dependency>
			<groupId>io.pivotal.spring.cloud</groupId>
			<artifactId>spring-cloud-services-dependencies</artifactId>
			<version>${spring-cloud-services.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We also use the Maven Cloud Foundry plugin, cf-maven-plugin, for application deployment on the Pivotal platform. Here is a sample for account-service. We run two instances of that microservice with a memory limit of 512 MB. Our application name is piomin-account-service.

<plugin>
	<groupId>org.cloudfoundry</groupId>
	<artifactId>cf-maven-plugin</artifactId>
	<version>1.1.3</version>
	<configuration>
		<target>http://api.run.pivotal.io</target>
		<org>piotrminkowski</org>
		<space>development</space>
		<appname>piomin-account-service</appname>
		<memory>512</memory>
		<instances>2</instances>
		<server>cloud-foundry-credentials</server>
	</configuration>
</plugin>

Don’t forget to add the credentials configuration to your Maven settings.xml file.

<server>
	<id>cloud-foundry-credentials</id>
	<username>piotr.minkowski@gmail.com</username>
	<password>***</password>
</server>

Now, when building the sample application, we need to append the cf:push command.

mvn clean install cf:push

Here is the circuit breaker implementation inside customer-service.

@Service
public class AccountService {

	@Autowired
	private AccountClient client;

	@HystrixCommand(fallbackMethod = "getEmptyList")
	public List<Account> getAccounts(Integer customerId) {
		return client.getAccounts(customerId);
	}

	List<Account> getEmptyList(Integer customerId) {
		return new ArrayList<>();
	}

}

There is a randomly generated delay on the account service side, so for about 25% of calls the circuit breaker should be activated.

@RequestMapping("/accounts/customer/{customer}")
public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
	logger.info(String.format("Account.findByCustomer(%s)", customerId));
	Random r = new Random();
	int rr = r.nextInt(4);
	if (rr == 1) {
		try {
			Thread.sleep(2000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
	return accounts.stream().filter(it -> it.getCustomerId().intValue() == customerId.intValue())
		.collect(Collectors.toList());
}

After successfully deploying the application using the Maven cf:push command we can go to the Pivotal Web Services console available at https://console.run.pivotal.io/. Here are our two deployed services: two instances of piomin-account-service and one instance of piomin-customer-service.

pivotal-1

I have also activated the Circuit Breaker and Service Registry from the Marketplace.

pivotal-2

Every application needs to be bound to a service. To do that, select the service, expand the Bound Apps panel and select the checkbox next to each application name.

pivotal-4

After this step the applications need to be restarted. This can also be done using the web dashboard inside each service.

pivotal-5

Finally, all services are registered in Eureka and we can perform some tests using the customer endpoint https://piomin-customer-service.cfapps.io/customers/{id}.

pivotal-4

Final words

With the Pivotal solution we can easily deploy, scale and monitor our microservices. Deployment and scaling can be done using the Maven plugin or via the web dashboard. Pivotal also provides some services prepared especially for microservices’ needs, like a service registry, circuit breaker and configuration server. Pivotal competes with solutions like Kubernetes, which is based on Docker containerization. It is especially useful if you are creating microservices based on the Spring Boot and Spring Cloud frameworks.

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

You have probably read some articles about Hystrix and know what purpose it is used for. Today I would like to show you an example of exactly how to use it, combined with other tools from the Netflix OSS stack like Feign and Ribbon. I assume that you have basic knowledge of topics such as microservices, load balancing and service discovery. If not, I suggest you read some articles about them first, for example my short introduction to microservices architecture: Part 1: Creating microservice using Spring Cloud, Eureka and Zuul. The code sample used in that article is also used now, and the source code is available on GitHub: see the hystrix branch for the sample described here and the master branch for the basic sample.

Let’s look at some scenarios for using fallback and the circuit breaker. We have a customer service which calls an API method from the account service. There are two running instances of the account service, and requests to them are load balanced by the Ribbon client 50/50.

micro-details-1

Scenario 1

Hystrix is disabled for the Feign client (1), the auto-retry mechanism is disabled for the Ribbon client on the local instance (2) and on other instances (3). The Ribbon read timeout is shorter than the request’s maximum processing time (4). This scenario also occurs with the default Spring Cloud configuration without Hystrix. When you call the customer test method you sometimes receive the full response and sometimes a 500 HTTP error code (50/50).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 0 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 2

Hystrix is still disabled for the Feign client (1), the auto-retry mechanism is disabled for the Ribbon client on the local instance (2) but enabled once on other instances (3). You always receive a full response. If your request is received by the instance with the delayed response, it is timed out after 1 second and then Ribbon calls another instance – in this case the one without the delay. You could also set MaxAutoRetries to a positive value, but that gives us nothing in this sample.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 1 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 3

Here is a not very elegant solution to the problem: we set ReadTimeout to a value bigger than the delay inside the API method (5000 ms).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 10000

feign:
  hystrix:
    enabled: false

Generally the configuration from Scenarios 2 and 3 is correct: you always get the full response. But in some cases you will wait more than 1 second (Scenario 2) or more than 5 seconds (Scenario 3), and the delayed instance still receives 50% of the requests from the Ribbon client. Fortunately, there is Hystrix – the circuit breaker.

Scenario 4

Let’s enable Hystrix just by removing the feign property. There are no auto-retries for the Ribbon client (1) and its read timeout (2) is bigger than Hystrix’s timeout (3). 1000 ms is also the default value for the Hystrix timeoutInMilliseconds property. The Hystrix circuit breaker and fallback will work for the delayed instance of the account service. For the first few requests you receive the fallback response from Hystrix. Then the delayed instance is cut off from requests, and most of them are directed to the instance without the delay.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(1)
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 2000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000 #(3)

Scenario 5

This scenario is a more advanced development of Scenario 4. Now the Ribbon timeout (2) is lower than the Hystrix timeout (3), and the auto-retry mechanism is enabled for the local instance (1) and for other instances (4). The result is the same as for Scenarios 2 and 3 – you receive a full response, but Hystrix is enabled and it cuts off the delayed instance from future requests.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 3 #(1)
  MaxAutoRetriesNextServer: 1 #(4)
  ReadTimeout: 1000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000 #(3)

I could imagine a few other scenarios, but the idea was just to show the differences in circuit breaker and fallback behavior when modifying the configuration properties for Feign, Ribbon and Hystrix in application.yml.

Hystrix

Let’s take a closer look at the standard Hystrix circuit breaker usage described in Scenario 4. To enable Hystrix in your Spring Boot application you have to add the following dependencies to pom.xml. The second step is to add the @EnableCircuitBreaker annotation to the main application class, and also @EnableHystrixDashboard if you would like to have the UI dashboard available.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix-dashboard</artifactId>
</dependency>

The Hystrix fallback is set on the Feign client inside the customer service.

@FeignClient(value = "account-service", fallback = AccountFallback.class)
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

The fallback implementation is really simple. In this case I just return an empty list instead of the customer’s account list received from the account service.

@Component
public class AccountFallback implements AccountClient {

	@Override
	public List<Account> getAccounts(Integer customerId) {
		List<Account> acc = new ArrayList<Account>();
		return acc;
	}

}

Now we can perform some tests. Let’s start the discovery service, two instances of the account service on different ports (with the -DPORT VM argument during startup) and the customer service. The endpoint for tests is /customers/{id}. There is also a JUnit test class which sends multiple requests to this endpoint, available in the customer-service module as pl.piomin.microservices.customer.ApiTest.

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts =  accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
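A minimal sketch of such a test (only ApiTest is the real class name from the repository; the method body here is illustrative) could look like this:

public class ApiTest {

	@Test
	public void testCustomerWithAccounts() {
		RestTemplate template = new RestTemplate();
		for (int i = 0; i < 1000; i++) {
			// customer-service listens on port 3333 by default
			String customer = template.getForObject("http://localhost:3333/customers/{id}", String.class, 1);
			System.out.println(customer);
		}
	}

}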

I enabled the Hystrix dashboard in the account-service main class. If you would like to access it, open http://localhost:2222/hystrix in your web browser and then type in the Hystrix stream address from customer-service: http://localhost:3333/hystrix.stream. When I ran the test that sends 1000 requests to the customer service, about 20 (2%) of them were forwarded to the delayed instance of the account service, and the remainder to the instance without the delay. The Hystrix dashboard during that test is visible below. For more advanced Hystrix configuration refer to its documentation.

hystrix-1

Testing Java Microservices

While developing a new application we should never forget about testing. This seems particularly important when working with microservices, since microservices testing requires a different approach than designing tests for monolithic applications. As far as monolith testing is concerned, the main focus is put on unit testing and, in most cases, integration tests with the database layer. In the case of microservices, the most important tests concern the interactions between those microservices. Although every microservice is independently developed and released, a change in one of them can affect all the services interacting with it. Interaction between them is realized by messages, usually sent via REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we are testing the smallest pieces of code, for example a single method or component, and mocking every call to other methods or components. There are many popular frameworks supporting unit tests in Java, like JUnit and TestNG, and Mockito for mocking. The main task of this type of testing is to confirm that the implementation meets the requirements.

Integration tests – we are testing the interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. It means that we should design test scenarios which exercise all the microservices taking part in the process.

Contract tests – tests at the boundary of an external service verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.

In the figure below we can see the component diagram of one sample microservice (the customer service). The architecture is similar for all the other sample microservices described in this post. The customer service interacts with a Mongo database, where all customers are stored. Mapping between objects and the database is realized by Spring Data’s @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized by @FeignClient REST clients: the customer service collects all the customer’s accounts and products from external microservices. The @Repository and Feign clients are injected into the @Controller, which is exposed outside via a REST resource.

testingmicroservices1

In this article I’ll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

testingmicroservices2

Now, let’s take a look at the big picture. We have four microservices interacting with each other, as we see in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes, Eureka discovery needs to be disabled in all microservices.

testingmicroservices3

The sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (the spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple: it extends the MongoRepository CRUD component. The @Repository and Feign clients are injected into CustomerController.

public interface CustomerRepository extends MongoRepository<Customer, String> {

	public Customer findByPesel(String pesel);
	public Customer findById(String id);

}

Here’s the full controller code.

@RestController
public class CustomerController {

	@Autowired
	private AccountClient accountClient;
	@Autowired
	private ProductClient productClient;

	@Autowired
	CustomerRepository repository;

	protected Logger logger = Logger.getLogger(CustomerController.class.getName());

	@RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
	public Customer findById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = repository.findById(id);
		List<Account> accounts =  accountClient.getAccounts(id);
		logger.info(String.format("Customer.findById(): %s", accounts));
		customer.setAccounts(accounts);
		return customer;
	}

	@RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
	public Customer findWithProductsById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findWithProductsById(%s)", id));
		Customer customer = repository.findById(id);
		List<Product> products =  productClient.getProducts(id);
		logger.info(String.format("Customer.findWithProductsById(): %s", products));
		customer.setProducts(products);
		return customer;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.POST)
	public Customer add(@RequestBody Customer customer) {
		logger.info(String.format("Customer.add(%s)", customer));
		return repository.save(customer);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.PUT)
	public Customer update(@RequestBody Customer customer) {
		logger.info(String.format("Customer.update(%s)", customer));
		return repository.save(customer);
	}

}
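
The controller enriches the Customer entity with accounts and products fetched through the Feign clients. For completeness, here's a minimal sketch of the Account POJO; its field names are inferred from the contract JSON shown later in this post, and getters/setters are omitted.

public class Account {

	private String id;
	private String number;
	private int balance;
	private String customerId;

	// getters and setters omitted for brevity

}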

To replace the external Mongo database with an embedded in-memory instance during automated tests, we only have to add the following dependency to pom.xml.

<dependency>
	<groupId>de.flapdoodle.embed</groupId>
	<artifactId>de.flapdoodle.embed.mongo</artifactId>
	<scope>test</scope>
</dependency>

If we use different addresses or connection credentials, the application settings should also be overridden in src/test/resources. Here's the application.yml file used for testing. At the bottom there is the configuration for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017

logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false

The in-memory MongoDB instance is started automatically during a Spring Boot JUnit test.
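
Here's a minimal sketch of a test exercising the repository against the embedded database (the class name, the setPesel setter and the PESEL value are assumptions for illustration; static imports from JUnit are assumed).

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class CustomerRepositoryTest {

	@Autowired
	private CustomerRepository repository;

	@Test
	public void shouldFindByPesel() {
		// the data goes to the embedded MongoDB instance started by Spring Boot
		Customer customer = new Customer();
		customer.setPesel("12345678903");
		repository.save(customer);
		assertNotNull(repository.findByPesel("12345678903"));
	}

}

The next step is to add the Spring Cloud Contract dependencies.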

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract, we also have to add the following plugin to pom.xml (the versions of the dependencies above are typically managed by the spring-cloud-dependencies BOM).

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>1.1.0.RELEASE</version>
	<extensions>true</extensions>
	<configuration>
			<packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
	</configuration>
</plugin>

The packageWithBaseClasses property defines the package where the base classes extended by the generated test classes are stored. By convention, the base class name is built from the last two elements of the contract's folder path plus the Base suffix, so contracts stored under api/scenario1 are matched with ApiScenario1Base. Here's the base test class for account-service tests. In our sample architecture account-service is only a producer; it does not consume any other services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

As opposed to account-service, customer-service consumes some services for collecting the customer's accounts and products. That's why its base test class needs to define the stub artifacts. The ids property of @AutoConfigureStubRunner follows the groupId:artifactId:version:classifier:port pattern: + selects the newest available version of the stubs, and 2222 is the port on which the WireMock stub server is started.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts are implemented using a Groovy DSL. Here's a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'POST'
    url '/accounts'
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
  response {
    status 200
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
}

Test classes are generated under the target/generated-test-sources directory. Here's the generated class for the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

	@Test
	public void validate_1_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
			assertThatJson(parsedJson).field("balance").isEqualTo(1234);
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
	}

	@Test
	public void validate_2_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
			assertThatJson(parsedJson).field("balance").isEqualTo(4675);
	}

	@Test
	public void validate_3_getAccounts() throws Exception {
		// given:
			MockMvcRequestSpecification request = given();

		// when:
			ResponseOptions response = given().spec(request)
					.get("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
			assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
			assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
			assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
	}

}

In the generated class there are three JUnit tests, because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 directory, as we can see in the picture below. The number in every file's prefix defines the test order. The second scenario has only one definition file and is used by customer-service (the findById API method). The third scenario has four definition files and is used by transfer-service (the execute API method).

scenarios

As I mentioned before, interaction between microservices is realized with @FeignClient. WireMock, used by Spring Cloud Contract, records the requests and responses defined in scenario2 inside account-service. The recorded interaction is then served to @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
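
The exact scenario2 contract is defined in the account-service module; here's a hypothetical sketch of what such a contract could look like for the endpoint above (all concrete values are assumptions).

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'GET'
    url '/accounts/customer/123456789'
  }
  response {
    status 200
    body([
      [id: "1234567890", number: "12345678909", balance: 1234, customerId: "123456789"]
    ])
    headers {
      contentType('application/json')
    }
  }
}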

All the tests are generated and run during the Maven build, for example with the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract, you can read about them in the project's documentation.

Finally, we can define a Continuous Integration pipeline for our microservices. Each of them should be built independently. More about the Continuous Integration / Continuous Delivery environment can be read in one of my previous posts, How to setup Continuous Delivery environment. Here's a sample pipeline created with the Jenkins Pipeline Plugin for account-service. In the Checkout stage we are updating our working copy to the newest version of the source code from the repository. In the Build stage we start by reading the project version set inside pom.xml, and then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline step. The same pipelines can be configured for all other microservices. In the described sample all microservices are placed in the same Git repository with one Maven version, for simplicity. But we can imagine that every microservice lives in a different repository with an independent version in pom.xml. Tests will always be run with the newest version of the stubs, which is set with the + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true).

node {

    withMaven(maven: 'Maven') {

        stage ('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
        }

        stage ('Build') {
            def pom = readMavenPom file: 'pom.xml'
            def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
            env.pom_version = version
            print 'Build version: ' + version
            currentBuild.description = "v${version}"

            dir('account-service') {
                bat "mvn clean install -Dmaven.test.failure.ignore=true"
            }

            junit '**/target/surefire-reports/TEST-*.xml'
        }

    }

}

Here’s the pipeline visualization on the Jenkins Management Dashboard. Note that the bat step in the Build stage assumes a Windows build agent; on a Linux agent, the sh step should be used instead.

account-pipeline