JavaEE MicroProfile with KumuluzEE

Preface

Enterprise Java seems to be a step behind the others when it comes to microservices architecture. A few weeks ago I took part in Code Europe – a programming conference in Warsaw. One of the speakers was Ivar Grimstad, who talked about MicroProfile – an open initiative for optimizing Enterprise Java for a microservices architecture. The idea is very interesting, but at the moment it is rather at the beginning of its road.
However, while reading about the MicroProfile initiative I came across KumuluzEE – a Java EE microservices framework developed by a Slovenian company. The solution seemed interesting enough that I decided to take a closer look at it. Well, we can read on its web site that KumuluzEE is a Java Duke’s Choice Award winner, so there is still hope for Java EE and microservices 🙂

What’s KumuluzEE

Can KumuluzEE be a competitor to the Spring Cloud framework? It is certainly not as popular or as advanced in its microservices solutions as Spring Cloud, but it has basic modules for service registration, discovery, distributed configuration propagation, circuit breaking, metrics, and support for Docker and Kubernetes. It uses CDI on the JBoss Weld container for dependency injection and Jersey as a REST API provider. The modules for configuration and discovery are based on Consul or etcd and are still at an early stage of development (1.0.0-SNAPSHOT), but let’s try them out.

Preparation

I’ll show you a sample application which consists of two independent microservices: account-service and customer-service. Both of them expose a REST API, and one of the customer-service methods invokes a method from account-service. Every microservice registers itself in Consul and is able to fetch configuration properties from Consul. The sample application source code is available on GitHub. Before we begin, let’s start a Consul instance using a Docker container.

docker run -d --name consul -p 8500:8500 -p 8600:8600 consul

We should also add some KumuluzEE dependencies to Maven pom.xml.

<dependency>
	<groupId>com.kumuluz.ee</groupId>
	<artifactId>kumuluzee-core</artifactId>
</dependency>
<dependency>
	<groupId>com.kumuluz.ee</groupId>
	<artifactId>kumuluzee-servlet-jetty</artifactId>
</dependency>
<dependency>
	<groupId>com.kumuluz.ee</groupId>
	<artifactId>kumuluzee-jax-rs-jersey</artifactId>
</dependency>
<dependency>
	<groupId>com.kumuluz.ee</groupId>
	<artifactId>kumuluzee-cdi-weld</artifactId>
</dependency>

Service Registration

To enable service registration we should add one additional dependency to our pom.xml. I chose Consul as the registration and discovery server, which is provided by the kumuluzee-discovery-consul module, but you can also use etcd.

<dependency>
	<groupId>com.kumuluz.ee.discovery</groupId>
	<artifactId>kumuluzee-discovery-consul</artifactId>
	<version>1.0.0-SNAPSHOT</version>
</dependency>

Inside the application configuration file we should set the discovery properties and the Consul server URL. For me it is 192.168.99.100.

kumuluzee:
  service-name: account-service
  env: dev
  version: 1.0.0
  discovery:
    consul:
      agent: http://192.168.99.100:8500
      hosts: http://192.168.99.100:8500
    ttl: 20
    ping-interval: 15

Here’s the account microservice main class. As you have probably guessed, the @RegisterService annotation enables registration in the discovery server.

@RegisterService("account-service")
@ApplicationPath("v1")
public class AccountApplication extends Application {

}

We start the application by running java -cp target/classes;target/dependency/* com.kumuluz.ee.EeApplication. Remember to override the default port by setting the PORT environment property. I started two instances of the account microservice and one instance of the customer microservice.

kumuluzee-1

Service Discovery

The customer microservice exposes its own API, but it also invokes an API method from account-service, so it has to discover and connect to that service. The Maven dependencies and configuration settings are the same as for account-service. The only difference is the resource class. Here’s a fragment of CustomerResource, where the GET /customers/{id} endpoint calls the account-service API.

@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("customers")
@RequestScoped
public class CustomerResource {

	private List<Customer> customers;

	@Inject
	@DiscoverService(value = "account-service", version = "1.0.x", environment = "dev")
	private WebTarget target;

	...

	@GET
	@Path("{id}")
	@Log(value = LogParams.METRICS, methodCall = true)
	public Customer findById(@PathParam("id") Integer id) {
		Customer customer = customers.stream().filter(it -> it.getId().intValue() == id.intValue()).findFirst().get();
		WebTarget t = target.path("v1/accounts/customer/" + customer.getId());
		List<Account> accounts = t.request().buildGet().invoke(List.class);
		customer.setAccounts(accounts);
		return customer;
	}

}

There is one pretty cool thing in discovery with KumuluzEE. As you can see in @DiscoverService, we can specify the version and environment of the account-service instance. The version and environment of a microservice are read automatically from config.yml during registration in the discovery server, so we can maintain many versions of a single microservice and freely invoke them from other microservices. Requests are automatically load balanced between all microservice instances that match the conditions from the @DiscoverService annotation.

We can also monitor metrics such as response time by declaring @Log(value = LogParams.METRICS, methodCall = true) on an API method. Here’s a log fragment for account-service.

2017-07-28 13:57:01,114 TRACE ENTRY[ METHOD ] Entering method. {class=pl.piomin.services.kumuluz.account.resource.AccountResource, method=findByCustomer, parameters=[1]}
2017-07-28 13:57:01,118 TRACE EXIT[ METHOD ] Exiting method. {class=pl.piomin.services.kumuluz.account.resource.AccountResource, method=findByCustomer, parameters=[1], response-time=3, result=[pl.piomin.services.kumuluz.account.model.Account@1eb26fe3, pl.piomin.services.kumuluz.account.model.Account@2dda41c5]}
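
For reference, the account-service endpoint invoked above is not listed in this post. Based on the paths used by CustomerResource and the log entries, a sketch of it might look like the class below (the in-memory account list and the type of the customerId field are my assumptions).

@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("accounts")
@RequestScoped
public class AccountResource {

	private List<Account> accounts;

	...

	@GET
	@Path("customer/{customerId}")
	@Log(value = LogParams.METRICS, methodCall = true)
	public List<Account> findByCustomer(@PathParam("customerId") Integer customerId) {
		// Return all accounts assigned to the given customer.
		return accounts.stream().filter(it -> it.getCustomerId().intValue() == customerId.intValue())
				.collect(Collectors.toList());
	}

}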

Distributed configuration

To enable KumuluzEE Config with the Consul implementation, add the following dependency to pom.xml.

<dependency>
	<groupId>com.kumuluz.ee.config</groupId>
	<artifactId>kumuluzee-config-consul</artifactId>
	<version>1.0.0-SNAPSHOT</version>
</dependency>

I do not use a Consul agent running on localhost, so I need to override some properties in config.yml. I also defined one configuration property, blacklist.

kumuluzee:
  config:
    start-retry-delay-ms: 500
    max-retry-delay-ms: 900000
    consul:
      agent: http://192.168.99.100:8500

rest-config:
  blacklist:

Here’s the class that loads the configuration properties. Declaring @ConfigValue(watch = true) on a property enables dynamic updates on any change in the configuration source.

@ApplicationScoped
@ConfigBundle("rest-config")
public class AccountConfiguration {

	@ConfigValue(watch = true)
	private String blacklist;

	public String getBlacklist() {
		return blacklist;
	}

	public void setBlacklist(String blacklist) {
		this.blacklist = blacklist;
	}

}

We use the configuration property blacklist in the resource class to filter out all accounts with blacklisted ids.

@GET
@Log(value = LogParams.METRICS, methodCall = true)
public List<Account> findAll() {
	final String blacklist = ConfigurationUtil.getInstance().get("rest-config.blacklist").orElse("nope");
	final String[] ids = blacklist.split(",");
	final List<Integer> blacklistIds = Arrays.asList(ids).stream().map(it -> new Integer(it)).collect(Collectors.toList());
	return accounts.stream().filter(it -> !blacklistIds.contains(it.getId())).collect(Collectors.toList());
}
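
Since AccountConfiguration is an @ApplicationScoped CDI bean, the watched value could also be read through an injected instance instead of ConfigurationUtil. Here’s a minimal sketch of that alternative (the injection point is my assumption, not part of the original sample).

	@Inject
	private AccountConfiguration configuration;

	// Reads the dynamically updated blacklist through the watched @ConfigBundle bean.
	private List<Integer> blacklistedIds() {
		return Arrays.stream(configuration.getBlacklist().split(","))
				.map(Integer::valueOf).collect(Collectors.toList());
	}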

The configuration property should be defined in the Consul UI dashboard under the KEY/VALUE tab. KumuluzEE enforces a certain key name format; in this case it has to be environments/dev/services/account-service/1.0.0/config/rest-config/blacklist. You can update the property value and test the changes by invoking http://localhost:2222/v1/accounts.

kumuluzee-2

Final Words

Creating microservices with KumuluzEE is pretty easy. I showed you the main capabilities of this framework. KumuluzEE also has modules for circuit breaking with Hystrix, streaming with Apache Kafka and security with OAuth2/OpenID. I will keep a close eye on this library and I hope it will continue to be developed.

Code Quality with SonarQube

Source code quality analysis is an essential part of the Continuous Integration process. Together with automated tests, it is a key element in delivering reliable software without many bugs, security vulnerabilities or performance issues. Probably the best static code analyzer you can find on the market is SonarQube. It has support for more than 20 programming languages, it can be easily integrated with the most popular Continuous Integration engines like Jenkins or TeamCity, and it has many features and plugins which can be easily managed from an extensive web dashboard.

However, before we proceed to discuss the most powerful capabilities of this solution, it is worth asking: why do we do it? Would it be productive to force developers to focus on code quality? Most of us are programmers, and we know very well that everyone expects us to deliver code that meets business demands rather than code that just looks nice 🙂 After all, do we really want to break the build by violating a minor rule like maximum line length – rather a little pleasure. On the other hand, taking over source code from someone who was not paying attention to any good programming practice is not welcome either, if you know what I mean. But stay calm, SonarQube is the right solution for you. In this article I’ll show you that caring about high code quality can be good fun and, above all, that you can learn how to develop better code while other team members spend their time fixing their bugs 🙂

Enough talk, let’s get to action. I suggest you run a test instance of SonarQube using Docker. Here’s the SonarQube run command. Then you can log in to the web dashboard available at http://192.168.99.100:9000 with admin/admin credentials.

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube

You are signed in to the web dashboard, but there are no projects created yet. To perform source code scanning you just have to run one command, mvn sonar:sonar, if you are using Maven in the build process. Don’t forget to add the SonarQube server address to the settings.xml file, as shown in the fragment below.

<profile>
	<id>sonar</id>
	<activation>
		<activeByDefault>true</activeByDefault>
	</activation>
	<properties>
		<sonar.host.url>http://192.168.99.100:9000</sonar.host.url>
	</properties>
</profile>

When the SonarQube analysis finishes you will see a new project, named after the Maven artifact, with your code metrics and statistics. I created a sample Spring Boot application in which I deliberately made some of the most popular mistakes that affect code quality. The source code is available on GitHub. The module to analyse is named person-service. The code with many bugs and vulnerabilities is pushed to the v0.1 branch, while the master branch holds the latest version with the corrections based on the SonarQube analysis, which I’m going to describe in the next section of this article. OK, let’s start the analysis with the mvn command. The result can be a little surprising – the analysis result for version 0.1 is rather unsatisfying. Although I spent a lot of time making serious mistakes, SonarQube detected only a few bugs and code smells, and the quality gate status is ‘Passed’.

sonar-1

Let’s take a closer look at quality gates in SonarQube. As I mentioned before, we would not like to break the build by failing one, or a group of, not very important rules. We can achieve this by creating a quality gate – a set of requirements that tells us whether or not a new version of the project can go to deployment. There is a default quality gate for Java, but we can change its thresholds or create a new one. The default quality gate has thresholds set only for new code, so I decided to create one for my sample application with minimum test coverage set to 50 percent, unit test success detection, and ratings based on the full code. Now the scanning result looks a little different 🙂

sonar-3, sonar-2

To enable test coverage scanning in SonarQube we should add the JaCoCo plugin to the Maven pom.xml. During the Maven build mvn clean test -Dmaven.test.failure.ignore=true sonar:sonar the report will be automatically generated and uploaded to SonarQube.

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.9</version>
	<executions>
		<execution>
			<id>default-prepare-agent</id>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
		<execution>
			<id>default-report</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>report</goal>
			</goals>
		</execution>
	</executions>
</plugin>

The last change that has to be done before rescanning the application is installing some plugins and enabling rules that are disabled by default. The list of all active and inactive rules can be displayed in the Quality Profiles section. In the default profile for Java there are more than 400 rules available, with 271 active at the start. I suggest you install the FindBugs and Checkstyle plugins. Those plugins provide many additional rules for Java which can be activated for our profile. Now there are about 1.1k inactive rules in many categories. Which of them should be activated is up to you: you can activate them in the default profile, create your own profile, or use one of the predefined profiles which were automatically created by the plugins we installed before. In my opinion the best way to select the right rules is to create a simple project and check which rules are suitable for you. Then you can read the detailed rule description and disable it if needed. After activating some rules provided by the Checkstyle plugin I got a report with 5 bugs and 77 code smells. The most important errors are visible in the pictures below.

sonar-5, sonar-6, sonar-7

All issues reported by SonarQube can be easily reviewed in the UI dashboard of each project, in the Issues tab. We can also install the SonarLint plugin, which integrates with the most popular IDEs like Eclipse or IntelliJ, and all those issues will be displayed there as well. Now we can proceed to fixing the errors. All the changes I made to resolve the issues can be reviewed in the GitHub repository, from branch v0.1 to v0.6. I resolved all problems except some checked exception warnings, which I set to Resolved (Won’t fix). Those issues won’t be reported in the next scans.

sonar-7

Finally, my project looks as you can see in the picture below. All ratings have score ‘A’, test coverage is greater than 60% and the quality gate is ‘Passed’. The final person-service version is committed to the master branch.

sonar-8

As you can see, there are many rules which can be applied to your project during SonarQube scanning, but sometimes they are still not enough for your organization’s needs. In that case you may search for additional plugins or create your own plugin with the rules that meet your specific requirements. In my sample available on GitHub there is a module sonar-rules where I defined a rule checking whether all public classes have a Javadoc comment including the @author field. To create a SonarQube plugin, add the following fragment to your pom.xml and change the packaging type to sonar-plugin.

<plugin>
	<groupId>org.sonarsource.sonar-packaging-maven-plugin</groupId>
	<artifactId>sonar-packaging-maven-plugin</artifactId>
	<version>1.17</version>
	<extensions>true</extensions>
	<configuration>
		<pluginKey>piotjavacustom</pluginKey>
		<pluginName>PiotrCustomRules</pluginName>
		<pluginDescription>For test purposes</pluginDescription>
		<pluginClass>pl.piomin.sonar.plugin.CustomRulesPlugin</pluginClass>
		<sonarLintSupported>true</sonarLintSupported>
		<sonarQubeMinVersion>6.0</sonarQubeMinVersion>
	</configuration>
</plugin>

Here’s the class with the custom rule definition. First we have to get the scanned class node (Kind.CLASS), and then process the first comment (Kind.TRIVIA) in the class file. The rule parameters like name or priority are set inside the @Rule annotation.

@Rule(key = "CustomAuthorCommentCheck",
		name = "Javadoc comment should have @author name",
		description = "Javadoc comment should have @author name",
		priority = Priority.MAJOR,
		tags = {"style"})
public class CustomAuthorCommentCheck extends IssuableSubscriptionVisitor {

	private static final String MSG_NO_COMMENT = "There is no comment under class";
	private static final String MSG_NO_AUTHOR = "There is no author inside comment";

	private Tree actualTree = null;

	@Override
	public List<Kind> nodesToVisit() {
		return ImmutableList.of(Kind.TRIVIA, Kind.CLASS);
	}

	@Override
	public void visitTrivia(SyntaxTrivia syntaxTrivia) {
		String comment = syntaxTrivia.comment();
		if (syntaxTrivia.column() != 0)
			return;
		if (comment == null) {
			reportIssue(actualTree, MSG_NO_COMMENT);
			return;
		}
		if (!comment.contains("@author")) {
			reportIssue(actualTree, MSG_NO_AUTHOR);
			return;
		}
	}

	@Override
	public void visitNode(Tree tree) {
		if (tree.is(Kind.CLASS)) {
			actualTree = tree;
		}
	}

}

Before building the plugin and deploying it to the SonarQube server, it can be easily tested using JUnit. Inside the src/test/files directory we place the test data – Java files which are scanned during the JUnit tests. For the failure test we should also create the file CustomAuthorCommentCheck_java.json, containing the rule definition, in the /org/sonar/l10n/java/rules/squid/ directory.

@Test
public void testOk() {
	JavaCheckVerifier.verifyNoIssue("src/test/files/CustomAuthorCommentCheck.java", new CustomAuthorCommentCheck());
}

@Test
public void testFail() {
	JavaCheckVerifier.verify("src/test/files/CustomAuthorCommentCheckFail.java", new CustomAuthorCommentCheck());
}
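
The test data files are plain Java sources. JavaCheckVerifier expects the lines where an issue should be raised to be marked with a // Noncompliant comment, so the two files used above could look roughly like this (a sketch, not the actual files from the repository).

// src/test/files/CustomAuthorCommentCheck.java - no issue expected
/**
 * Compliant sample class.
 * @author piomin
 */
public class CustomAuthorCommentCheck {
}

// src/test/files/CustomAuthorCommentCheckFail.java - issue expected at the class declaration
/**
 * Comment without the author tag.
 */
public class CustomAuthorCommentCheckFail { // Noncompliant
}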

Finally, build the Maven project and copy the generated JAR artifact from the target directory into the $SONAR_HOME/extensions/plugins directory of the SonarQube Docker container. Then restart the container.

docker cp target/sonar-plugins-1.0-SNAPSHOT.jar sonarqube:/opt/sonarqube/extensions/plugins

After the SonarQube restart your plugin’s rules are visible in the Rules section.

sonar-4

The last thing to do is to run SonarQube scanning as part of the Continuous Integration process. SonarQube can be easily integrated with the most popular CI server – Jenkins. Here’s a fragment of a Jenkins pipeline where we perform the source code scanning and then wait for the quality gate result. If you are interested in more details about Jenkins pipelines, Continuous Integration and Delivery, read my previous post How to setup Continuous Delivery environment.

stage('SonarQube analysis') {
	withSonarQubeEnv('My SonarQube Server') {
		sh 'mvn clean package sonar:sonar'
	}
}
stage("Quality Gate") {
	timeout(time: 1, unit: 'HOURS') {
		def qg = waitForQualityGate()
		if (qg.status != 'OK') {
			error "Pipeline aborted due to quality gate failure: ${qg.status}"
		}
	}
}

Custom metrics visualization with Grafana and InfluxDB

If you need a solution for querying and visualizing time series and metrics, probably your first choice will be Grafana. Grafana is a visualization dashboard which can collect data from several different databases like MySQL, Elasticsearch and InfluxDB. At present it is becoming very popular to pair it with InfluxDB as a data source. InfluxDB is designed specifically for storing real-time metrics and events and is very fast and scalable for time-based data. Today I’m going to show an example Spring Boot application with metrics visualization based on Grafana and InfluxDB, and alerting via the Slack communicator.

Spring Boot Actuator exposes some endpoints useful for monitoring and interacting with an application. It also includes a metrics service with gauge and counter support: a gauge records a single value, while a counter records a value incremented or decremented across all previous steps. The full list of basic metrics is available in the Spring Boot documentation here; it includes, for example, free memory, heap usage, datasource pool usage and thread information. We can also define our own custom metrics. To allow exporting such values into InfluxDB we need to declare a bean annotated with @ExportMetricWriter. Spring Boot has no built-in metrics exporter for InfluxDB, so we have to add the influxdb-java library to the pom.xml dependencies and define the connection properties.

	@Bean
	@ExportMetricWriter
	GaugeWriter influxMetricsWriter() {
		InfluxDB influxDB = InfluxDBFactory.connect("http://192.168.99.100:8086", "root", "root");
		String dbName = "grafana";
		influxDB.setDatabase(dbName);
		influxDB.setRetentionPolicy("one_day");
		influxDB.enableBatch(10, 1000, TimeUnit.MILLISECONDS);

		return new GaugeWriter() {

			@Override
			public void set(Metric<?> value) {
				Point point = Point.measurement(value.getName()).time(value.getTimestamp().getTime(), TimeUnit.MILLISECONDS)
						.addField("value", value.getValue()).build();
				influxDB.write(point);
				logger.info("write(" + value.getName() + "): " + value.getValue());
			}
		};
	}

The metrics should be read from the Actuator endpoint, so we should also declare a MetricsEndpointMetricReader bean.

	@Bean
	public MetricsEndpointMetricReader metricsEndpointMetricReader(final MetricsEndpoint metricsEndpoint) {
		return new MetricsEndpointMetricReader(metricsEndpoint);
	}

We can customize the exporting process by declaring properties inside the application.yml file. In the fragment below there are two parameters: delay-millis, which sets the metrics export interval to 5 seconds, and includes, where we define which metrics should be exported.

spring:
  metrics:
    export:
      delay-millis: 5000
      includes: heap.used,heap.committed,mem,mem.free,threads,datasource.primary.active,datasource.primary.usage,gauge.response.persons,gauge.response.persons.id,gauge.response.persons.remove

To easily run Grafana and InfluxDB, let’s use Docker.

docker run -d --name grafana -p 3000:3000 grafana/grafana
docker run -d --name influxdb -p 8086:8086 influxdb

Grafana is available with the default security credentials admin/admin. The first step is to create the InfluxDB data source.

grafana-3
Now we can create our new dashboard and add some graphs. Before that, run the Spring Boot sample application so that some metrics data is exported into InfluxDB. Grafana has user-friendly support for InfluxDB queries: you can click through the entire configuration and get hints on the syntax. Of course there is also the possibility of writing text queries, but not all query language features are available there.

grafana-4

Here’s the picture with my Grafana dashboard for the metrics listed in the includes property. In the second picture below you can see an enlarged graph with the average REST method processing time.

grafana-1

grafana-2

We can always implement a custom service which generates metrics sent to InfluxDB. Spring Boot Actuator provides two classes for that purpose: CounterService and GaugeService. Below there is an example of GaugeService usage, where a random value between 0 and 100 is submitted in 100 ms intervals.

@Service
public class FirstService {

    private final GaugeService gaugeService;

    @Autowired
    public FirstService(GaugeService gaugeService) {
        this.gaugeService = gaugeService;
    }

    public void exampleMethod() {
    	Random r = new Random();
    	for (int i = 0; i < 1000000; i++) {
    		this.gaugeService.submit("firstservice", r.nextDouble()*100);
    		try {
			Thread.sleep(100);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
    }

}

The sample bean FirstService is started right after application startup.

@Component
public class Start implements ApplicationListener<ContextRefreshedEvent> {

	@Autowired
	private FirstService service1;

	@Override
	public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
		service1.exampleMethod();
	}

}
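
CounterService, the second class mentioned above, is used in a very similar way: instead of submitting a gauge value we increment or decrement a named counter, which is exported with the counter. prefix. Here’s a minimal sketch (this bean is not part of the sample application).

@Service
public class RequestCounterService {

	private final CounterService counterService;

	@Autowired
	public RequestCounterService(CounterService counterService) {
		this.counterService = counterService;
	}

	public void countRequest() {
		// Exported as the 'counter.requests.persons' metric.
		this.counterService.increment("requests.persons");
	}

}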

Now let’s configure alert notifications using the Grafana dashboard and Slack. This feature has been available since version 4.0. I’m going to define a threshold for the statistics sent by the FirstService bean. If you have already created a graph for gauge.firstservice (you need to add this metric name to the includes property inside application.yml), go to its edit section and then to the Alert tab. There you can define the alerting condition by selecting an aggregating function (for example avg, min, max), the evaluation interval and the threshold value. For my sample, visible in the picture below, I selected alerting when the maximum value is bigger than 95, with the condition evaluated in 5-minute intervals.

grafana-5

After creating the alert configuration we should define a notification channel. There are some interesting supported notification types like email, HipChat, webhook or Slack. When configuring the Slack notification we need to pass the recipient’s address or channel name and the incoming webhook URL. Then add the new notification to your alert in the Notifications section.

grafana-6

I created a dedicated channel #grafana for Grafana notifications on my Slack account and attached an incoming webhook to this channel by searching for it in Channel Settings -> Add app or integration.

grafana-7

Finally, run my sample application and don’t forget to log out from the Grafana dashboard if you would like to receive the alerts on Slack.

Serverless on AWS Lambda

Preface

Serverless is now one of the hottest trends in the IT world. A more accurate name for it is Function as a Service (FaaS). Have any of you ever tried to share your APIs deployed in the cloud? Before serverless, I had to create a virtual machine with Linux on the cloud provider’s infrastructure, and then deploy and run the application implemented in, for example, Node.js or Java. With serverless, you do not have to write any commands in Linux.

How is serverless different from another very popular topic – microservices? To illustrate the difference, serverless functions are often referred to as nanoservices. For example, if we would like to create a microservice that provides an API for CRUD operations on a database table, it would have several endpoints for searching (GET /{id}), updating (PUT), removing (DELETE), inserting (POST) and maybe a few more for searching with different input criteria. In a serverless architecture, each of those endpoints would be an independent function, created and deployed separately. While a microservice can be built on an on-premise architecture, for example with Spring Boot, serverless is closely tied to cloud infrastructure.

Implementing a custom function based on the cloud provider’s tools is really quick and easy. I’ll try to show it with sample functions deployed to the Amazon cloud using AWS Lambda. The sample application source code is available on GitHub.

How it works

Here’s the AWS Lambda description from the Amazon site.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

aws-lambda

AWS Lambda is a compute platform for many application scenarios. It supports applications written in Node.js, Java, C# and Python. There are also some other services available on the platform, like DynamoDB – a NoSQL database, Kinesis – a streaming service, CloudWatch – monitoring and logs, Redshift – a data warehouse solution, S3 – cloud storage, and API Gateway. Every event coming from those services can trigger a call of your Lambda function. You can also interact with those services using the AWS SDK.

Preparation

Let’s finish with the theory – all of us like the concrete stuff most 🙂 First of all, we need to set up an AWS account. AWS has a web management console available here, but there is also a command line client called AWS CLI, which can be downloaded here. There are also some other tools through which we can deploy our functions to AWS; I will tell you about them later. To be able to use any of them, including the command line client, we need to generate an access key. Go to the web console, select My Security Credentials on your profile, then select Continue to Security Credentials and expand Access Keys. Create your new access key and save it on disk. There are two fields: Access Key ID and Secret Access Key. If you would like to use the AWS CLI, first type aws configure and then provide those keys, the default region and the output format (for example JSON or text).

You can use the AWS CLI or even the web console to deploy your Lambda function to the cloud. However, I will present other (in my opinion better :)) solutions. If you are using Eclipse for your development, the best option is to download the AWS Toolkit plugin. With it I’m able to upload my function to AWS Lambda or even create or modify a table in Amazon DynamoDB. After installing the Eclipse plugin you need to provide the Access Key ID and Secret Access Key. Then the AWS Management perspective becomes available, where you can see all your AWS stuff, including Lambda functions, DynamoDB tables, identity management and other services like S3, SNS or SQS. You can create a special AWS Java Project or work with a standard Maven project. Just open the project context menu by right-clicking on the project, then select Amazon Web Services and Upload function to AWS Lambda…

aws-lambda-deploy-1

After selecting Upload function to AWS Lambda… you will see a deployment window. You can choose the region for your deployment (us-east-1 by default), the IAM role and, most importantly, the name of your Lambda function. We can create a new function or update an existing one.

Another interesting possibility for uploading a function to AWS Lambda is a Maven plugin. With lambda-maven-plugin we can define the security credentials and all the definitions of our functions in JSON format. Here’s the plugin declaration in pom.xml. The plugin can be invoked during the Maven build with mvn clean install lambda:deploy-lambda. The dependencies have to be packaged into the output JAR file – that’s why maven-shade-plugin is also used during the build.

<plugin>
	<groupId>com.github.seanroy</groupId>
	<artifactId>lambda-maven-plugin</artifactId>
	<version>2.2.1</version>
	<configuration>
		<accessKey>${aws.accessKey}</accessKey>
		<secretKey>${aws.secretKey}</secretKey>
		<functionCode>${project.build.directory}/${project.build.finalName}.jar</functionCode>
		<version>${project.version}</version>
		<lambdaRoleArn>arn:aws:iam::436521214155:role/lambda_basic_execution</lambdaRoleArn>
		<s3Bucket>lambda-function-bucket-us-east-1-1498055423860</s3Bucket>
		<publish>true</publish>
		<forceUpdate>true</forceUpdate>
		<lambdaFunctionsJSON>
			[
			{
			"functionName": "PostAccountFunction",
			"description": "POST account",
			"handler": "pl.piomin.services.aws.account.add.PostAccount",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 10
			},
			{
			"functionName": "GetAccountFunction",
			"description": "GET account",
			"handler": "pl.piomin.services.aws.account.find.GetAccount",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 30
			},
			{
			"functionName": "GetAccountsByCustomerIdFunction",
			"description": "GET accountsCustomerId",
			"handler": "pl.piomin.services.aws.account.find.GetAccountsByCustomerId",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 30
			}
			]
		</lambdaFunctionsJSON>
	</configuration>
</plugin>

Lambda functions implementation

I implemented the sample AWS Lambda functions in Java. Here’s the list of dependencies inside pom.xml.

<dependencies>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-events</artifactId>
		<version>1.3.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-core</artifactId>
		<version>1.1.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-log4j</artifactId>
		<version>1.0.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-java-sdk-s3</artifactId>
		<version>1.11.152</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-java-sdk-lambda</artifactId>
		<version>1.11.152</version>
	</dependency>
</dependencies>

Every function connects to Amazon DynamoDB. There are two tables created for this sample: account and customer. One customer can have more than one account, and this assignment is realized through the customerId field in the account table. The AWS library for DynamoDB has an ORM mapping mechanism. Here’s the Account entity definition. Using annotations we can declare the table name, hash key, index and table attributes.

@DynamoDBTable(tableName = "account")
public class Account implements Serializable {

	private static final long serialVersionUID = 8331074361667921244L;
	private String id;
	private String number;
	private String customerId;

	public Account() {

	}

	public Account(String id, String number, String customerId) {
		this.id = id;
		this.number = number;
		this.customerId = customerId;
	}

	@DynamoDBHashKey(attributeName = "id")
	@DynamoDBAutoGeneratedKey
	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	@DynamoDBAttribute(attributeName = "number")
	public String getNumber() {
		return number;
	}

	public void setNumber(String number) {
		this.number = number;
	}

	@DynamoDBIndexHashKey(attributeName = "customerId", globalSecondaryIndexName = "Customer-Index")
	public String getCustomerId() {
		return customerId;
	}

	public void setCustomerId(String customerId) {
		this.customerId = customerId;
	}

}

In the described sample application there are five Lambda functions:
PostAccountFunction – receives an Account object from the request and inserts it into the table
GetAccountFunction – finds an account by the hash key id attribute
GetAccountsByCustomerIdFunction – finds the list of accounts assigned to the given customerId
PostCustomerFunction – receives a Customer object from the request and inserts it into the table
GetCustomerFunction – finds a customer by the hash key id attribute

Every AWS Lambda function handler needs to implement the RequestHandler interface with its single handleRequest method. Here’s the PostAccount handler class. It connects to DynamoDB using the Amazon client and creates the ORM mapper DynamoDBMapper, which saves the input entity in the database.

public class PostAccount implements RequestHandler<Account, Account> {

	private DynamoDBMapper mapper;

	public PostAccount() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);
	}

	@Override
	public Account handleRequest(Account a, Context ctx) {
		LambdaLogger logger = ctx.getLogger();
		mapper.save(a);
		Account r = a;
		logger.log("Account: " + r.getId());
		return r;
	}

}

The GetCustomer function not only interacts with DynamoDB, but also invokes the GetAccountsByCustomerId function. Maybe this is not the best example of the need to call another function, because it could retrieve the data from the account table directly. But I wanted to separate the data layer from the function logic and just show how invoking another function works in AWS Lambda.

public class GetCustomer implements RequestHandler<Customer, Customer> {

	private DynamoDBMapper mapper;
	private AccountService accountService;

	public GetCustomer() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);

		accountService = LambdaInvokerFactory.builder()
				.lambdaClient(AWSLambdaClientBuilder.defaultClient())
				.build(AccountService.class);
	}

	@Override
	public Customer handleRequest(Customer customer, Context ctx) {
		LambdaLogger logger = ctx.getLogger();
		logger.log("Account: " + customer.getId());
		customer = mapper.load(Customer.class, customer.getId());
		List<Account> aa = accountService.getAccountsByCustomerId(new Account(customer.getId()));
		customer.setAccounts(aa);
		return customer;
	}
}

AccountService is an interface. It uses the @LambdaFunction annotation to declare the name of the function invoked in the cloud.

public interface AccountService {

	@LambdaFunction(functionName = "GetAccountsByCustomerIdFunction")
	List<Account> getAccountsByCustomerId(Account account);
}
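
The handler behind GetAccountsByCustomerIdFunction is not listed here. A sketch of it, querying the Customer-Index global secondary index through DynamoDBMapper, might look like the code below; the exact query expression is my assumption, and the input Account is expected to carry only the customerId value.

public class GetAccountsByCustomerId implements RequestHandler<Account, List<Account>> {

	private DynamoDBMapper mapper;

	public GetAccountsByCustomerId() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);
	}

	@Override
	public List<Account> handleRequest(Account a, Context ctx) {
		// Query the global secondary index declared on the customerId attribute.
		DynamoDBQueryExpression<Account> query = new DynamoDBQueryExpression<Account>()
				.withIndexName("Customer-Index")
				.withHashKeyValues(a)
				.withConsistentRead(false); // consistent reads are not supported on GSIs
		return mapper.query(Account.class, query);
	}

}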

API Configuration

I assume that you have already uploaded your Lambda functions. Now you can go to the AWS web console and see the full list of them in the AWS Lambda section. Every function can be tested by selecting it in the functions list and calling the Test action.

aws-lambda-3

If you didn’t configure role permissions, you probably got an error while trying to call your Lambda function. I attached the AmazonDynamoDBFullAccess policy to the main lambda_basic_execution role for the Amazon DynamoDB connection. Then I created a new inline policy enabling GetAccountsByCustomerIdFunction to be invoked from another Lambda function, as you can see in the figure below. If you retry your tests now, everything works fine.

aws-lambda-4

Well, now we are able to test our functions from the AWS Lambda web test console. But our main goal is to invoke them from an outside client, for example a REST client. Fortunately, there is a component called API Gateway which can be configured to proxy our HTTP requests from the gateway to the Lambda functions. Here’s a figure with our API configuration: for example, POST /customer is mapped to PostCustomerFunction, GET /customer/{id} is mapped to GetCustomerFunction, and so on.

aws-lambda-5

You can configure model definitions and set them as input or output types for the API.

{
  "title": "Account",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "number": {
      "type": "string"
    },
    "customerId": {
      "type": "string"
    }
  }
}

The configuration for GET requests is a little more complicated. We have to set a mapping from the path parameter to the JSON object which is the input of the Lambda function. Select the Integration Request element and then go to the Body Mapping Templates section.

aws-lambda-6

Our API can also be exported as a Swagger JSON definition. If you are not familiar with Swagger, take a look at my previous article Microservices API Documentation with Swagger2.

aws-lambda-7

Final words

In this article I described the steps for creating an API based on the AWS Lambda serverless solution. I showed the obvious advantages of this approach, such as no need for self-management of servers, the ability to easily deploy applications in the cloud, and configuration and monitoring fully based on the solutions provided by the AWS web console. You can easily extend my sample with some other services, for example with Kinesis to enable data stream processing. In my opinion, serverless is a perfect solution for exposing simple APIs in the cloud.

Generating large PDF files using JasperReports

During the last ‘Code Europe’ conference in Warsaw many topics were related to microservices architecture. Several times I heard the conclusion that the best candidate for separation from a monolith is the service that generates PDF reports, since it is usually quite independent from the other parts of the application. I can see a similar approach in my organization, where the first microservice running in production mode was the one that generates PDF reports. To my surprise, the vendor which developed that microservice had to increase the maximum heap size to 1 GB on each of its instances. This forced me to take a closer look at the PDF report generation process.
The most popular Java library for creating PDF files is JasperReports. During the generation process this library by default stores all objects in RAM. If such reports are large, this could be a problem – the problem my vendor encountered. Their solution, as I mentioned before, was to increase the maximum size of the Java heap 🙂

This time, unlike usual, I’m going to start with the test implementation. Here’s a simple JUnit test with 20 requests per second sent to the service endpoint.

public class JasperApplicationTest {

	protected Logger logger = Logger.getLogger(JasperApplicationTest.class.getName());
	TestRestTemplate template = new TestRestTemplate();

	@Test
	public void testGetReport() throws InterruptedException {
		List<HttpStatus> responses = new ArrayList<>();
		Random r = new Random();
		int i = 0;
		for (; i < 20; i++) {
			new Thread(new Runnable() {
				@Override
				public void run() {
					int age = r.nextInt(99);
					long start = System.currentTimeMillis();
					ResponseEntity<InputStreamResource> res = template.getForEntity("http://localhost:2222/pdf/{age}", InputStreamResource.class, age);
					logger.info("Response (" +  (System.currentTimeMillis()-start) + "): " + res.getStatusCode());
					responses.add(res.getStatusCode());
					try {
						Thread.sleep(50);
					} catch (InterruptedException e) {
						e.printStackTrace();
					}
				}
			}).start();
		}

		while (responses.size() != i) {
			Thread.sleep(500);
		}
		logger.info("Test finished");
	}
}

In my test scenario I inserted about 1M records into the person table. Everything worked fine while running the test. The generated files had a size of about 500 kB and 200 pages. All requests succeeded and each of them was processed in about 8 seconds. In comparison with a single request, which is processed in 4 seconds, this seems to be a good result. The situation with RAM is worse, as you can see in the figure below. After generating 20 PDF reports the allocated heap size increased to more than 1 GB, and the used heap size was about 550 MB. CPU usage during report generation also increased to 100%. I can easily imagine generating files bigger than 500 kB in production mode…

jasper-1

In this situation we have two options: we can always add more RAM or… look for another choice 🙂 The Jasper library comes with a solution – virtualizers. A virtualizer cuts the Jasper report print into chunks, saves them on the hard drive and/or compresses them. There are three types of virtualizers: JRFileVirtualizer, JRSwapFileVirtualizer and JRGzipVirtualizer. You can read more about them here. Now look at the figure below. Here’s the illustration of memory and CPU usage for the test with JRFileVirtualizer. It looks a little better than the previous figure, but it is still not impressive 🙂 However, requests with the same load as in the previous test take much longer – about 30 seconds. That’s not good news, but at least the heap allocation does not grow as fast as in the previous sample.

jasper-2

The same test has been performed for JRSwapFileVirtualizer. The requests were processed in about 10 seconds on average. The graph illustrating CPU and memory usage is more similar to the in-memory test than to the JRFileVirtualizer test.

jasper-3

To see the difference between those three scenarios we have to run our application with a maximum heap size set. For my tests I set -Xmx128m -Xms128m. For the tests with file virtualizers we receive HTTP responses with PDF reports, but for the in-memory test the sample application throws an exception: java.lang.OutOfMemoryError: GC overhead limit exceeded.

For testing purposes I created a Spring Boot application. The sample source code is available, as usual, on GitHub. Here’s the full list of Maven dependencies for that project.

<dependency>
	<groupId>net.sf.jasperreports</groupId>
	<artifactId>jasperreports</artifactId>
	<version>6.4.0</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

Here’s the application main class. There are @Bean declarations of the file virtualizers and of JasperReport, which is responsible for compiling the template from the .jrxml file. To run the application for testing purposes, type java -jar -Xms64m -Xmx128m -Ddirectory=C:\Users\minkowp\pdf sample-jasperreport-boot.jar.

@SpringBootApplication
public class JasperApplication {

	@Value("${directory}")
	private String directory;

	public static void main(String[] args) {
		SpringApplication.run(JasperApplication.class, args);
	}

	@Bean
	JasperReport report() throws JRException {
		JasperReport jr = null;
		File f = new File("personReport.jasper");
		if (f.exists()) {
			jr = (JasperReport) JRLoader.loadObject(f);
		} else {
			jr = JasperCompileManager.compileReport("src/main/resources/report.jrxml");
			JRSaver.saveObject(jr, "personReport.jasper");
		}
		return jr;
	}

	@Bean
	JRFileVirtualizer fileVirtualizer() {
		return new JRFileVirtualizer(100, directory);
	}

	@Bean
	JRSwapFileVirtualizer swapFileVirtualizer() {
		JRSwapFile sf = new JRSwapFile(directory, 1024, 100);
		return new JRSwapFileVirtualizer(20, sf, true);
	}

}
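
The third virtualizer type mentioned earlier, JRGzipVirtualizer, keeps the swapped report pages GZIP-compressed in memory instead of writing them to disk. It is not used in my tests, but a bean for it could be declared in the same way (a sketch):

	@Bean
	JRGzipVirtualizer gzipVirtualizer() {
		// Keep at most 50 report pages uncompressed; older pages are compressed in memory.
		return new JRGzipVirtualizer(50);
	}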

There are three endpoints exposed for the tests:
/pdf/{age} – in memory PDF generation
/pdf/fv/{age} – PDF generation with JRFileVirtualizer
/pdf/sfv/{age} – PDF generation with JRSwapFileVirtualizer

Here’s the method generating the PDF report. The report is generated by the fillReport static method from JasperFillManager, which takes three input parameters: a JasperReport instance encapsulating the compiled .jrxml template, a JDBC Connection object and a map of parameters. Then the report is exported and saved on disk as a PDF file, and the file is returned as an attachment in the response.

	private ResponseEntity<InputStreamResource> generateReport(String name, Map<String, Object> params) {
		FileInputStream st = null;
		Connection cc = null;
		try {
			cc = datasource.getConnection();
			JasperPrint p = JasperFillManager.fillReport(jasperReport, params, cc);
			JRPdfExporter exporter = new JRPdfExporter();
			SimpleOutputStreamExporterOutput c = new SimpleOutputStreamExporterOutput(name);
			exporter.setExporterInput(new SimpleExporterInput(p));
			exporter.setExporterOutput(c);
			exporter.exportReport();

			st = new FileInputStream(name);
			HttpHeaders responseHeaders = new HttpHeaders();
			responseHeaders.setContentType(MediaType.valueOf("application/pdf"));
			responseHeaders.setContentDispositionFormData("attachment", name);
			responseHeaders.setContentLength(st.available());
		    return new ResponseEntity<InputStreamResource>(new InputStreamResource(st), responseHeaders, HttpStatus.OK);
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			fv.cleanup();
			sfv.cleanup();
			if (cc != null)
				try {
					cc.close();
				} catch (SQLException e) {
					e.printStackTrace();
				}
		}
		return null;
	}

To enable a virtualizer during report generation we only have to put one additional parameter into the parameters map – the virtualizer instance.

	@Autowired
	JRFileVirtualizer fv;
	@Autowired
	JRSwapFileVirtualizer sfv;
	@Autowired
	DataSource datasource;
	@Autowired
	JasperReport jasperReport;

	@ResponseBody
	@RequestMapping(value = "/pdf/fv/{age}")
	public ResponseEntity<InputStreamResource> getReportFv(@PathVariable("age") int age) {
		logger.info("getReportFv(" + age + ")");
		Map<String, Object> m = new HashMap<>();
		m.put(JRParameter.REPORT_VIRTUALIZER, fv);
		m.put("age", age);
		String name = ++count + "personReport.pdf";
		return generateReport(name, m);
	}
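
The remaining endpoints differ only in the virtualizer (or the lack of it) put into the parameters map. For example, the swap-file variant might look like this (a sketch; the original handler is not listed here):

	@ResponseBody
	@RequestMapping(value = "/pdf/sfv/{age}")
	public ResponseEntity<InputStreamResource> getReportSfv(@PathVariable("age") int age) {
		logger.info("getReportSfv(" + age + ")");
		Map<String, Object> m = new HashMap<>();
		// Pass the JRSwapFileVirtualizer instance instead of JRFileVirtualizer.
		m.put(JRParameter.REPORT_VIRTUALIZER, sfv);
		m.put("age", age);
		String name = ++count + "personReport.pdf";
		return generateReport(name, m);
	}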

The template file report.jrxml is available in the /src/main/resources directory. Inside the queryString tag there is the SQL query, which takes the age parameter in the WHERE statement. There are also five columns declared, all taken from the SQL query result.

<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE jasperReport PUBLIC "//JasperReports//DTD Report Design//EN" "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">

<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports http://jasperreports.sourceforge.net/xsd/jasperreport.xsd"
              name="report2" pageWidth="595" pageHeight="842"
              columnWidth="555" leftMargin="20" rightMargin="20"
              topMargin="20" bottomMargin="20">
    <parameter name="age" class="java.lang.Integer"/>
    <queryString>
        <![CDATA[SELECT * FROM person WHERE age = $P{age}]]>
    </queryString>
    <field name="id" class="java.lang.Integer" />
    <field name="first_name" class="java.lang.String" />
    <field name="last_name" class="java.lang.String" />
    <field name="age" class="java.lang.Integer" />
    <field name="pesel" class="java.lang.String" />

    <detail>
        <band height="15">

            <textField>
                <reportElement x="0" y="0" width="50" height="15" />

                <textElement textAlignment="Right" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.Integer">
                    <![CDATA[$F{id}]]>
                </textFieldExpression>
            </textField>       

            <textField>
                <reportElement x="100" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{first_name}]]>
                </textFieldExpression>
            </textField> 

            <textField>
                <reportElement x="200" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{last_name}]]>
                </textFieldExpression>
            </textField>               

            <textField>
                <reportElement x="300" y="0" width="50" height="15"/>
                <textElement textAlignment="Right" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.Integer">
                    <![CDATA[$F{age}]]>
                </textFieldExpression>
            </textField>

           <textField>
                <reportElement x="380" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{pesel}]]>
                </textFieldExpression>
            </textField>         

        </band>
    </detail>

</jasperReport>

And the last thing we have to do is to properly set the database connection pool settings. A natural choice for a Spring Boot application is the Tomcat JDBC pool.

spring:
  application:
    name: jasper-service
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/datagrid?useSSL=false
    username: datagrid
    password: datagrid
    tomcat:
      initial-size: 20
      max-active: 30

Final words

In this article I showed you how to avoid out-of-memory errors while generating large PDF reports with JasperReports. I compared three solutions: in-memory generation and two methods based on cutting the Jasper print into pieces and saving them on the hard drive. For me, the most interesting was the solution based on a single swap file with JRSwapFileVirtualizer. It is a little slower than in-memory generation, but works faster than the corresponding tests with JRFileVirtualizer and, in contrast to in-memory generation, did not run into the out-of-memory error for files larger than 500 kB with 20 requests per second.

Testing Java Microservices

While developing a new application we should never forget about testing. This seems to be particularly important when working with microservices. Microservices testing requires a different approach than designing tests for monolithic applications. As far as monolith testing is concerned, the main focus is put on unit testing and, in most cases, also on integration tests with the database layer. In the case of microservices, the most important tests cover the interactions between those microservices. Although every microservice is developed and released independently, a change in one of them can affect all the services interacting with it. The interaction between them is realized by messages, usually sent via REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we test the smallest pieces of code, for example a single method or component, and mock every call to other methods or components. There are many popular frameworks supporting unit tests in Java, like JUnit and TestNG, and Mockito for mocking. The main task of this type of testing is to confirm that the implementation meets the requirements.

Integration tests – we test the interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. It means that we should design test scenarios which exercise all the microservices taking part in the process.

Contract tests – tests at the boundary of an external service verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.

In the figure below we can see the component diagram of one sample microservice (customer service). The architecture is similar for all the other sample microservices described in this post. The customer service interacts with a Mongo database, where all customers are stored. The mapping between objects and the database is realized with Spring Data’s @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized by @Feign REST clients: the customer service collects all of a customer’s accounts and products from external microservices. The @Repository and @Feign clients are injected into the @Controller, which is exposed outside via a REST resource.

testingmicroservices1

In this article I’ll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

testingmicroservices2

Now let’s take a look at the big picture. We have four microservices interacting with each other, as we can see in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes Eureka discovery needs to be disabled on all microservices.

testingmicroservices3

The sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (the spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple: it extends the MongoRepository CRUD component. The @Repository and @Feign clients are injected into CustomerController.

public interface CustomerRepository extends MongoRepository<Customer, String> {

	public Customer findByPesel(String pesel);
	public Customer findById(String id);

}
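
The Customer entity is a simple class mapped with Spring Data’s @Document. Here’s a sketch based only on the fields used by the repository and the controller; the collection name is my assumption, and the full source is in the GitHub repository.

@Document(collection = "customer")
public class Customer {

	@Id
	private String id;
	private String pesel;
	// Filled in at request time from account-service and product-service responses.
	private List<Account> accounts = new ArrayList<>();
	private List<Product> products = new ArrayList<>();

	// getters and setters omitted

}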

Here’s the full controller code.

@RestController
public class CustomerController {

	@Autowired
	private AccountClient accountClient;
	@Autowired
	private ProductClient productClient;

	@Autowired
	CustomerRepository repository;

	protected Logger logger = Logger.getLogger(CustomerController.class.getName());

	@RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
	public Customer findById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = repository.findById(id);
		List<Account> accounts =  accountClient.getAccounts(id);
		logger.info(String.format("Customer.findById(): %s", accounts));
		customer.setAccounts(accounts);
		return customer;
	}

	@RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
	public Customer findWithProductsById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findWithProductsById(%s)", id));
		Customer customer = repository.findById(id);
		List<Product> products =  productClient.getProducts(id);
		logger.info(String.format("Customer.findWithProductsById(): %s", products));
		customer.setProducts(products);
		return customer;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.POST)
	public Customer add(@RequestBody Customer customer) {
		logger.info(String.format("Customer.add(%s)", customer));
		return repository.save(customer);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.PUT)
	public Customer update(@RequestBody Customer customer) {
		logger.info(String.format("Customer.update(%s)", customer));
		return repository.save(customer);
	}

}

To replace the external Mongo database with an embedded in-memory instance during automated tests, we only have to add the following dependency to pom.xml.

<dependency>
	<groupId>de.flapdoodle.embed</groupId>
	<artifactId>de.flapdoodle.embed.mongo</artifactId>
	<scope>test</scope>
</dependency>

If we use different addresses or connection credentials, the application settings should also be overridden in src/test/resources. Here's the application.yml file used for testing. At the bottom there is the configuration for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017

logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false
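
With this configuration in place, a plain Spring Boot JUnit test can already act as a component test against the embedded database. Here's a minimal sketch of such a test; the test data and the Customer accessors are assumptions, not code from the sample repository.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class}, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerComponentTest {

	@Autowired
	private TestRestTemplate template;

	@Test
	public void testAddAndFindByPesel() {
		// store a new customer in the embedded Mongo instance
		Customer customer = new Customer();
		customer.setPesel("12345678909");
		template.postForObject("/customers", customer, Customer.class);
		// read it back through the repository-backed endpoint
		Customer found = template.getForObject("/customers/pesel/{pesel}", Customer.class, "12345678909");
		Assert.assertEquals("12345678909", found.getPesel());
	}

}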

The in-memory MongoDB instance is started automatically during a Spring Boot JUnit test. The next step is to add the Spring Cloud Contract dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract, we also have to add the following plugin to pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>1.1.0.RELEASE</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
	</configuration>
</plugin>

The packageWithBaseClasses property defines the package where the base classes extended by the generated test classes are stored. Here's the base test class for the account service tests. In our sample architecture the account service is only a producer; it does not consume any services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

As opposed to the account service, the customer service consumes other services to collect the customer's accounts and products. That's why the base test class for the customer service needs to define the stub artifacts' coordinates.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts are implemented using a Groovy DSL. Here's a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'POST'
    url '/accounts'
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
  response {
    status 200
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
}

Test classes are generated under the target/generated-test-sources directory. Here's the class generated for the scenario containing the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

	@Test
	public void validate_1_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
			assertThatJson(parsedJson).field("balance").isEqualTo(1234);
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
	}

	@Test
	public void validate_2_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
			assertThatJson(parsedJson).field("balance").isEqualTo(4675);
	}

	@Test
	public void validate_3_getAccounts() throws Exception {
		// given:
			MockMvcRequestSpecification request = given();

		// when:
			ResponseOptions response = given().spec(request)
					.get("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
			assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
			assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
			assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
	}

}

In the generated class there are three JUnit tests, because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 directory, as we can see in the picture below. The number prefix in every file name defines the test order. The second scenario has only one definition file and is used in the customer service (the findById API method). The third scenario has four definition files and is used in the transfer service (the execute API method).

[Figure: contract definition files grouped into scenario directories]
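
The layout looks roughly like this; the individual file names below are assumptions, only the numeric prefixes that drive the ordering matter.

src/main/resources/contracts/
  scenario1/
    1_postAccount.groovy
    2_postAccount.groovy
    3_getAccounts.groovy
  scenario2/
    1_getAccountsByCustomer.groovy
  scenario3/
    (four definition files for the transfer service)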

As I mentioned before, interaction between microservices is realized by @FeignClient. WireMock, used by Spring Cloud Contract, records the requests and responses defined in scenario2 inside the account service. The recorded interactions are then used by @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
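
For illustration, a contract for that endpoint could look like the sketch below; the actual scenario2 definition lives in the account service sources, and the sample values here are assumed.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'GET'
    url '/accounts/customer/123456789'
  }
  response {
    status 200
    // a JSON array with the accounts of the given customer
    body([[
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ]])
    headers {
      contentType('application/json')
    }
  }
}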

All the tests are generated and run during the Maven build, for example with the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract, you can read about them here.

Finally, we can define a Continuous Integration pipeline for our microservices. Each of them should be built independently. More about the Continuous Integration / Continuous Delivery environment can be read in one of my previous posts, How to setup Continuous Delivery environment. Here's a sample pipeline created with the Jenkins Pipeline Plugin for the account service. In the Checkout stage we update our source code working copy to the newest version from the repository. In the Build stage we start by reading the project version set inside pom.xml, then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline method. The same pipelines can be configured for all other microservices. In the described sample all microservices are placed in the same Git repository with one Maven version for simplicity. But we can imagine that every microservice lives in a different repository with an independent version in its pom.xml. Tests will always be run with the newest version of the stubs, which is set with + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true).

node {

    withMaven(maven: 'Maven') {

        stage ('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
        }

        stage ('Build') {
            def pom = readMavenPom file: 'pom.xml'
            def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
            env.pom_version = version
            print 'Build version: ' + version
            currentBuild.description = "v${version}"

            dir('account-service') {
                bat "mvn clean install -Dmaven.test.failure.ignore=true"
            }

            junit '**/target/surefire-reports/TEST-*.xml'
        }

    }

}

Here's the pipeline visualization on the Jenkins Management Dashboard.

[Figure: account-service pipeline visualization in Jenkins]

 

Microservices Continuous Delivery with Docker and Jenkins

Docker, microservices and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automation of the testing, building and deployment process seems particularly important. Docker is an excellent solution for microservices, because it can create and run isolated containers with a service. Today, I'm going to show you how to create a basic continuous delivery pipeline for sample microservices using the most popular software automation tool, Jenkins.

Sample Microservices

Before I get into the main topic of the article, a few words about the structure and tools used to create the sample microservices. The sample application consists of two microservices communicating with each other (account, customer), a discovery server (Eureka) and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks. Its source code is available on GitHub. Spring Cloud has support for microservice discovery and gateways out of the box; we only have to define the right dependencies inside the Maven project configuration file (pom.xml). The picture illustrating the adopted solution architecture is visible below. The customer and account REST API services, the discovery server and the gateway all run inside separate Docker containers. The gateway is the entry point to the microservices system. It interacts with all other services. It proxies requests to the selected microservices, looking up their addresses in the discovery service. If more than one instance of the account or customer microservice exists, the requests are load balanced with Ribbon and a Feign client. The account and customer services register themselves with the discovery server after startup. There is also the possibility of interaction between them, for example if we would like to find and return all of a customer's account details.

[Figure: sample microservices architecture]

I wouldn't like to go into the details of implementing those microservices with the Spring Boot and Spring Cloud frameworks. If you are interested in a detailed description of the sample application development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix and Eureka. In that post I described how to implement service discovery, distributed tracing, load balancing, logging trace ID propagation and an API gateway for microservices with those solutions.
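
As an illustration of the gateway's role, a Zuul route for the account service could be configured with a fragment along these lines; the route name and path are assumptions, since with Eureka enabled Zuul can also create such routes automatically.

zuul:
  routes:
    account:
      path: /account/**
      serviceId: account-service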

Dockerfiles

Each service in the sample source code has a Dockerfile with its Docker image build definition. It's really simple. Here's the Dockerfile for the account service. We use openjdk as the base image. The JAR file from target is added to the image and then run using the java -jar command. The service runs on port 2222, which is exposed outside.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222
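
To try the image outside Jenkins, you could build and run it manually along these lines; the image tag is just an example matching the local registry convention used later in this article.

mvn clean install
docker build -t localhost:5000/account-service .
docker run -d --name account -p 2222:2222 localhost:5000/account-service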

We also had to set the main class in the JAR manifest. We achieve it using the spring-boot-maven-plugin in the module's pom.xml. The fragment is visible below. We also set the build finalName to cut the version number off the target JAR file name. The Dockerfile and Maven build definition are pretty similar for all other microservices.

<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Jenkins pipelines

We use the Pipeline Plugin for building continuous delivery for our microservices. In addition to the standard set of plugins on Jenkins, we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.

[Figure: the four microservice pipelines on the Jenkins dashboard]

Here's the pipeline definition, written in Groovy, for the discovery service. We have five stages of execution. Inside the Checkout stage we pull changes from the project's remote Git repository. Then the project is built with the mvn clean install command and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service Dockerfile and push that image to the local registry. In the fourth stage we run the built image with the default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means that the source pipeline finishes without waiting for the account pipeline execution to end.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}").run('-p 8761:8761 -h discovery --name discovery')
        }

        stage ('Final') {
            build job: 'account-service-pipeline', wait: false
        }      

    }

}

The account pipeline is very similar. The main difference is inside the fourth stage, where the account service container is linked to the discovery container. We need to link those containers, because account-service registers itself in the discovery server and must be able to connect to it using the hostname.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/account-service:${env.version}").run('-p 2222:2222 -h account --name account --link discovery')
        }

        stage ('Final') {
            build job: 'customer-service-pipeline', wait: false
        }      

    }

}
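
For that link to work, the account service configuration presumably points at the discovery host by its hostname. A fragment along these lines would do it; the values are assumed, not taken from the sample repository.

eureka:
  client:
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/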

Similar pipelines are also defined for the customer and gateway services. They are available in the main project as a Jenkinsfile inside each microservice's directory. Every image built during pipeline execution is also pushed to the local Docker registry. To enable the local registry on our host, we need to pull and run the Docker registry image and use the registry address as an image name prefix while pulling or pushing. The local registry is exposed on its default port, 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.

docker run -d --name registry -p 5000:5000 registry
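
After the pipelines have run, the catalog call should return something like the output below; the repository names depend on which images have actually been pushed.

$ curl http://localhost:5000/v2/_catalog
{"repositories":["account-service","customer-service","discovery-service","gateway-service"]}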

Testing

You should launch the build on discovery-service-pipeline. This pipeline will not only run the build for the discovery service but also start the next pipeline build (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all pipelines finish, you can check the list of running Docker containers by calling the docker ps command. You should see five containers: the local registry and our four microservices. You can also check the logs of each container by running the docker logs command, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts or, via the Zuul gateway, http://localhost:8765/account/account.

CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
fa3b9e408bb4        localhost:5000/gateway-service:1.0-SNAPSHOT     "java -jar /gatewa..."   About an hour ago   Up About an hour    0.0.0.0:8765->8765/tcp   gateway
cc9e2b44fe44        localhost:5000/customer-service:1.0-SNAPSHOT    "java -jar /custom..."   About an hour ago   Up About an hour    0.0.0.0:3333->3333/tcp   customer
49657f4531de        localhost:5000/account-service:1.0-SNAPSHOT     "java -jar /accoun..."   About an hour ago   Up About an hour    0.0.0.0:2222->2222/tcp   account
fe07b8dfe96c        localhost:5000/discovery-service:1.0-SNAPSHOT   "java -jar /discov..."   About an hour ago   Up About an hour    0.0.0.0:8761->8761/tcp   discovery
f9a7691ddbba        registry

Conclusion

I have presented a basic sample of a Continuous Delivery environment for microservices using Docker and Jenkins. You can easily spot the limitations of the presented solution: for example, we had to link the Docker containers with each other to enable communication between them, and all of the tools and microservices run on the same machine. For a more advanced sample we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers to simulate multiple Docker machines. I hope this article is a fine introduction to microservices Continuous Delivery and helps you understand the basics of the idea. I think you can expect more advanced articles about this subject in the near future.