Serverless on AWS Lambda

Preface

Serverless is now one of the hottest trends in the IT world. A more accurate name for it is Function as a Service (FaaS). Have you ever tried to expose your APIs in the cloud? Before serverless, I had to create a Linux virtual machine on the cloud provider’s infrastructure, and then deploy and run an application implemented in, for example, Node.js or Java. With serverless, you do not have to run a single Linux command.

How is serverless different from another very popular topic – microservices? To illustrate the difference, serverless is often referred to as nanoservices. For example, if we would like to create a microservice that provides an API for CRUD operations on a database table, our API would have several endpoints for searching (GET /{id}), updating (PUT), removing (DELETE), inserting (POST), and maybe a few more for searching with different input criteria. In a serverless architecture, all of those endpoints would be independent functions, created and deployed separately. While a microservice can be built on an on-premise architecture, for example with Spring Boot, serverless is closely tied to cloud infrastructure.

Implementing a custom function based on the cloud provider’s tools is really quick and easy. I’ll try to show it with sample functions deployed on Amazon AWS using AWS Lambda. The sample application source code is available on GitHub.

How it works

Here’s the AWS Lambda description from Amazon’s site.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

aws-lambda

AWS Lambda is a compute platform for many application scenarios. It supports applications written in Node.js, Java, C# and Python. Alongside it, a number of other AWS services are available, such as DynamoDB (NoSQL database), Kinesis (streaming), CloudWatch (monitoring and logs), Redshift (data warehouse), S3 (cloud storage) and API Gateway. Events coming from those services can trigger your Lambda function. You can also interact with those services using the AWS SDK.

Preparation

Enough theory – like most of us, I prefer concrete examples 🙂 First of all, we need to set up an AWS account. AWS has a web management console available here, and there is also a command line client called AWS CLI, which can be downloaded here. There are some other tools for publishing our functions to AWS as well; I will tell you about them later. To be able to use them, including the command line client, we need to generate an access key. Go to the web console, select My Security Credentials on your profile, then select Continue to Security Credentials and expand Access Keys. Create your new access key and save it on disk. It consists of two fields: Access Key ID and Secret Access Key. If you would like to use the AWS CLI, first type aws configure and then provide those keys, the default region and the output format (for example JSON or text).

You can use the AWS CLI or even the web console to deploy your Lambda function to the cloud. However, I will present other (in my opinion better :)) solutions. If you are using Eclipse for development, the best option is to download the AWS Toolkit plugin. With it, I’m able to upload my function to AWS Lambda or even create and modify tables in Amazon DynamoDB. After installing the Eclipse plugin you need to provide the Access Key ID and Secret Access Key. The AWS Management perspective becomes available, where you can see all your AWS resources, including Lambda functions, DynamoDB tables, identity management and other services like S3, SNS or SQS. You can create a dedicated AWS Java Project or work with a standard Maven project. Just open the project context menu by right-clicking the project and select Amazon Web Services and Upload function to AWS Lambda.

aws-lambda-deploy-1

After selecting Upload function to AWS Lambda… you should see the window visible below. You can choose the region for your deployment (us-east-1 by default), the IAM role and, most importantly, the name of your Lambda function. We can create a new function or update an existing one.

Another interesting option for uploading functions to AWS Lambda is a Maven plugin. With lambda-maven-plugin we can define security credentials and all the definitions of our functions in JSON format. Here’s the plugin declaration in pom.xml. The plugin can be invoked during the Maven build with mvn clean install lambda:deploy-lambda. Dependencies should be packaged into the output JAR file – that’s why maven-shade-plugin is used during the build (a sample configuration is shown after the plugin declaration below).

<plugin>
	<groupId>com.github.seanroy</groupId>
	<artifactId>lambda-maven-plugin</artifactId>
	<version>2.2.1</version>
	<configuration>
		<accessKey>${aws.accessKey}</accessKey>
		<secretKey>${aws.secretKey}</secretKey>
		<functionCode>${project.build.directory}/${project.build.finalName}.jar</functionCode>
		<version>${project.version}</version>
		<lambdaRoleArn>arn:aws:iam::436521214155:role/lambda_basic_execution</lambdaRoleArn>
		<s3Bucket>lambda-function-bucket-us-east-1-1498055423860</s3Bucket>
		<publish>true</publish>
		<forceUpdate>true</forceUpdate>
		<lambdaFunctionsJSON>
			[
			{
			"functionName": "PostAccountFunction",
			"description": "POST account",
			"handler": "pl.piomin.services.aws.account.add.PostAccount",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 10
			},
			{
			"functionName": "GetAccountFunction",
			"description": "GET account",
			"handler": "pl.piomin.services.aws.account.find.GetAccount",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 30
			},
			{
			"functionName": "GetAccountsByCustomerIdFunction",
			"description": "GET accountsCustomerId",
			"handler": "pl.piomin.services.aws.account.find.GetAccountsByCustomerId",
			"timeout": 30,
			"memorySize": 256,
			"keepAlive": 30
			}
			]
		</lambdaFunctionsJSON>
	</configuration>
</plugin>
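
As mentioned above, the deployment package must contain all runtime dependencies. A typical maven-shade-plugin configuration for producing such an uber-JAR might look like the snippet below – this is a sketch, the exact version and settings used in the sample project may differ.

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-shade-plugin</artifactId>
	<version>3.0.0</version>
	<executions>
		<execution>
			<phase>package</phase>
			<goals>
				<goal>shade</goal>
			</goals>
		</execution>
	</executions>
</plugin>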

Lambda functions implementation

I implemented the sample AWS Lambda functions in Java. Here’s the list of dependencies inside pom.xml.

<dependencies>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-events</artifactId>
		<version>1.3.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-core</artifactId>
		<version>1.1.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-lambda-java-log4j</artifactId>
		<version>1.0.0</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-java-sdk-s3</artifactId>
		<version>1.11.152</version>
	</dependency>
	<dependency>
		<groupId>com.amazonaws</groupId>
		<artifactId>aws-java-sdk-lambda</artifactId>
		<version>1.11.152</version>
	</dependency>
</dependencies>

Every function connects to Amazon DynamoDB. There are two tables created for this sample: account and customer. One customer can have more than one account, and this relation is realized through the customerId field in the account table. The AWS library for DynamoDB provides ORM mapping mechanisms. Here’s the Account entity definition. Using annotations we can declare the table name, hash key, index and table attributes.

@DynamoDBTable(tableName = "account")
public class Account implements Serializable {

	private static final long serialVersionUID = 8331074361667921244L;
	private String id;
	private String number;
	private String customerId;

	public Account() {

	}

	public Account(String id, String number, String customerId) {
		this.id = id;
		this.number = number;
		this.customerId = customerId;
	}

	@DynamoDBHashKey(attributeName = "id")
	@DynamoDBAutoGeneratedKey
	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	@DynamoDBAttribute(attributeName = "number")
	public String getNumber() {
		return number;
	}

	public void setNumber(String number) {
		this.number = number;
	}

	@DynamoDBIndexHashKey(attributeName = "customerId", globalSecondaryIndexName = "Customer-Index")
	public String getCustomerId() {
		return customerId;
	}

	public void setCustomerId(String customerId) {
		this.customerId = customerId;
	}

}
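
The Customer entity used by the customer-related functions is mapped in the same way. The repository version is not reproduced here, but a minimal sketch could look like this (the name attribute is my assumption; the accounts list is filled at runtime by GetCustomer and is not persisted):

@DynamoDBTable(tableName = "customer")
public class Customer implements Serializable {

	private static final long serialVersionUID = 1L;
	private String id;
	private String name;
	private List<Account> accounts;

	@DynamoDBHashKey(attributeName = "id")
	@DynamoDBAutoGeneratedKey
	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	@DynamoDBAttribute(attributeName = "name")
	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	// not stored in DynamoDB, filled at runtime by the GetCustomer function
	@DynamoDBIgnore
	public List<Account> getAccounts() {
		return accounts;
	}

	public void setAccounts(List<Account> accounts) {
		this.accounts = accounts;
	}

}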

In the described sample application there are five Lambda functions:
PostAccountFunction – receives an Account object from the request and inserts it into the table
GetAccountFunction – finds an account by the hash key id attribute
GetAccountsByCustomerIdFunction – finds the list of accounts for the input customerId
PostCustomerFunction – receives a Customer object from the request and inserts it into the table
GetCustomerFunction – finds a customer by the hash key id attribute

Every AWS Lambda function handler needs to implement the RequestHandler interface with its single handleRequest method. Here’s the PostAccount handler class. It connects to DynamoDB using the Amazon client and creates the ORM mapper DynamoDBMapper, which saves the input entity in the database.

public class PostAccount implements RequestHandler<Account, Account> {

	private DynamoDBMapper mapper;

	public PostAccount() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);
	}

	@Override
	public Account handleRequest(Account a, Context ctx) {
		LambdaLogger logger = ctx.getLogger();
		mapper.save(a);
		Account r = a;
		logger.log("Account: " + r.getId());
		return r;
	}

}
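
The read-side handlers are built the same way, but use the mapper for loading instead of saving. For example, GetAccountsByCustomerId can query the Customer-Index global secondary index declared on the entity; a simplified sketch (the actual class in the repository may differ) looks like this:

public class GetAccountsByCustomerId implements RequestHandler<Account, List<Account>> {

	private DynamoDBMapper mapper;

	public GetAccountsByCustomerId() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);
	}

	@Override
	public List<Account> handleRequest(Account a, Context ctx) {
		// query by the global secondary index on customerId
		Account key = new Account();
		key.setCustomerId(a.getCustomerId());
		DynamoDBQueryExpression<Account> query = new DynamoDBQueryExpression<Account>()
				.withIndexName("Customer-Index")
				.withHashKeyValues(key)
				.withConsistentRead(false); // GSI queries must not request consistent reads
		return mapper.query(Account.class, query);
	}

}

The single-item GetAccount handler is even simpler – it just returns mapper.load(Account.class, a.getId()).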

The GetCustomer function not only interacts with DynamoDB, but also invokes the GetAccountsByCustomerId function. This may not be the best example of the need to call another function, because it could retrieve the data from the account table directly. But I wanted to separate the data layer from the function logic and just show how invoking another function works in AWS Lambda.

public class GetCustomer implements RequestHandler<Customer, Customer> {

	private DynamoDBMapper mapper;
	private AccountService accountService;

	public GetCustomer() {
		AmazonDynamoDBClient client = new AmazonDynamoDBClient();
		client.setRegion(Region.getRegion(Regions.US_EAST_1));
		mapper = new DynamoDBMapper(client);

		accountService = LambdaInvokerFactory.builder()
				.lambdaClient(AWSLambdaClientBuilder.defaultClient())
				.build(AccountService.class);
	}

	@Override
	public Customer handleRequest(Customer customer, Context ctx) {
		LambdaLogger logger = ctx.getLogger();
		logger.log("Account: " + customer.getId());
		customer = mapper.load(Customer.class, customer.getId());
		List<Account> aa = accountService.getAccountsByCustomerId(new Account(customer.getId()));
		customer.setAccounts(aa);
		return customer;
	}
}

AccountService is an interface. It uses the @LambdaFunction annotation to declare the name of the function invoked in the cloud.

public interface AccountService {

	@LambdaFunction(functionName = "GetAccountsByCustomerIdFunction")
	List<Account> getAccountsByCustomerId(Account account);
}

API Configuration

I assume that you have already uploaded your Lambda functions. Now you can go to the AWS web console and see the full list of them in the AWS Lambda section. Every function can be tested by selecting an item in the functions list and calling the Test function action.

aws-lambda-3

If you didn’t configure role permissions, you probably got an error while trying to call your Lambda function. I attached the AmazonDynamoDBFullAccess policy to the main lambda_basic_execution role for the Amazon DynamoDB connection. Then I created a new inline policy to allow invoking GetAccountsByCustomerIdFunction from another Lambda function, as you can see in the figure below. If you retry your tests now, everything should work fine.

aws-lambda-4
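
The inline policy is a standard IAM document. A minimal version allowing function-to-function invocation could look roughly like this – the resource is left wide open here for brevity; in a real setup you would restrict it to the ARN of the specific function:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "*"
    }
  ]
}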

Well, now we are able to test our functions from the AWS Lambda web test console. But our main goal is to invoke them from an outside client, for example a REST client. Fortunately, there is a component called API Gateway which can be configured to proxy our HTTP requests to the Lambda functions. Here’s a figure with our API configuration: for example, POST /customer is mapped to PostCustomerFunction, GET /customer/{id} is mapped to GetCustomerFunction, etc.

aws-lambda-5

You can configure model definitions and set them as input or output types for the API.

{
  "title": "Account",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "number": {
      "type": "string"
    },
    "customerId": {
      "type": "string"
    }
  }
}

For GET requests the configuration is a little more complicated. We have to map the path parameter into the JSON object that the Lambda function takes as input. Select the Integration Request element and then go to the Body Mapping Templates section.

aws-lambda-6
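
The mapping template itself is a small snippet that copies the path parameter into the request body passed to the function. Assuming the path parameter is called id, it could look like this:

{
  "id": "$input.params('id')"
}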

Our API can also be exported as a Swagger JSON definition. If you are not familiar with Swagger, take a look at my previous article Microservices API Documentation with Swagger2.

aws-lambda-7

Final words

In this article I walked through the steps for creating an API based on the AWS Lambda serverless solution. I showed the obvious advantages of this approach, such as no need to manage servers yourself, the ability to easily deploy applications in the cloud, and configuration and monitoring fully based on the tools provided by the AWS web console. You can easily extend my sample with other services, for example with Kinesis to enable data stream processing. In my opinion, serverless is a perfect solution for exposing simple APIs in the cloud.

Generating large PDF files using JasperReports

During the last Code Europe conference in Warsaw there were many talks related to microservices architecture. Several times I heard the conclusion that the best candidate for extraction from a monolith is the service that generates PDF reports, since it is usually quite independent from the other parts of the application. I can see a similar approach in my organization, where the first microservice running in production was the one that generates PDF reports. To my surprise, the vendor that developed that microservice had to increase the maximum heap size to 1GB on each of its instances. This forced me to take a closer look at the PDF report generation process.
The most popular Java library for creating PDF files is JasperReports. During the generation process, this library by default stores all objects in RAM. If the reports are large, this could be the problem my vendor encountered. Their solution, as I mentioned before, was to increase the maximum size of the Java heap 🙂

This time, unlike usual, I’m going to start with the test implementation. Here’s a simple JUnit test sending 20 requests per second to the service endpoint.

public class JasperApplicationTest {

	protected Logger logger = Logger.getLogger(JasperApplicationTest.class.getName());
	TestRestTemplate template = new TestRestTemplate();

	@Test
	public void testGetReport() throws InterruptedException {
		List<HttpStatus> responses = Collections.synchronizedList(new ArrayList<>()); // synchronized: responses are added from multiple threads
		Random r = new Random();
		int i = 0;
		for (; i < 20; i++) {
			new Thread(new Runnable() {
				@Override
				public void run() {
					int age = r.nextInt(99);
					long start = System.currentTimeMillis();
					ResponseEntity<InputStreamResource> res = template.getForEntity("http://localhost:2222/pdf/{age}", InputStreamResource.class, age);
					logger.info("Response (" +  (System.currentTimeMillis()-start) + "): " + res.getStatusCode());
					responses.add(res.getStatusCode());
					try {
						Thread.sleep(50);
					} catch (InterruptedException e) {
						e.printStackTrace();
					}
				}
			}).start();
		}

		while (responses.size() != i) {
			Thread.sleep(500);
		}
		logger.info("Test finished");
	}
}

In my test scenario I inserted about 1M records into the person table. Everything worked fine while running the test. The generated files were about 500 kB in size and had 200 pages. All requests succeeded, and each of them was processed in about 8 seconds. Compared with a single request, which was processed in 4 seconds, this seems to be a good result. The situation with RAM is worse, as you can see in the figure below. After generating 20 PDF reports the allocated heap size increased to more than 1GB and the used heap size was about 550MB. CPU usage during report generation also reached 100%. I can easily imagine generating files bigger than 500 kB in production…

jasper-1

In this situation we have two options. We can always add more RAM or … look for another approach 🙂 The Jasper library comes with a solution – virtualizers. A virtualizer cuts the JasperPrint object into chunks, saves them on the hard drive and/or compresses them. There are three types of virtualizers: JRFileVirtualizer, JRSwapFileVirtualizer and JRGzipVirtualizer. You can read more about them here. Now, look at the figure below. It illustrates memory and CPU usage for the test with JRFileVirtualizer. It looks a little better than the previous figure, but it does not knock us over 🙂 However, requests under the same load as in the previous test take much longer – about 30 seconds. That’s not good news, but at least the allocated heap size does not grow as fast as in the previous sample.

jasper-2

The same test was performed for JRSwapFileVirtualizer. Requests were processed in around 10 seconds on average. The graph illustrating CPU and memory usage is more similar to the in-memory test than to the JRFileVirtualizer test.

jasper-3

To see the difference between those three scenarios we have to run our application with a limited maximum heap size. For my tests I set -Xmx128m -Xms128m. For the tests with file virtualizers we receive HTTP responses with PDF reports, but for the in-memory tests the sample application throws java.lang.OutOfMemoryError: GC overhead limit exceeded.

For testing purposes I created a Spring Boot application. The sample source code is, as usual, available on GitHub. Here’s the full list of Maven dependencies for that project.

<dependency>
	<groupId>net.sf.jasperreports</groupId>
	<artifactId>jasperreports</artifactId>
	<version>6.4.0</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

Here’s the application main class. There are @Bean declarations for the file virtualizers and for JasperReport, which is responsible for compiling the template from the .jrxml file. To run the application for testing purposes, type java -jar -Xms64m -Xmx128m -Ddirectory=C:\Users\minkowp\pdf sample-jasperreport-boot.jar.

@SpringBootApplication
public class JasperApplication {

	@Value("${directory}")
	private String directory;

	public static void main(String[] args) {
		SpringApplication.run(JasperApplication.class, args);
	}

	@Bean
	JasperReport report() throws JRException {
		JasperReport jr = null;
		File f = new File("personReport.jasper");
		if (f.exists()) {
			jr = (JasperReport) JRLoader.loadObject(f);
		} else {
			jr = JasperCompileManager.compileReport("src/main/resources/report.jrxml");
			JRSaver.saveObject(jr, "personReport.jasper");
		}
		return jr;
	}

	@Bean
	JRFileVirtualizer fileVirtualizer() {
		return new JRFileVirtualizer(100, directory);
	}

	@Bean
	JRSwapFileVirtualizer swapFileVirtualizer() {
		JRSwapFile sf = new JRSwapFile(directory, 1024, 100);
		return new JRSwapFileVirtualizer(20, sf, true);
	}

}
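
The third virtualizer mentioned earlier, JRGzipVirtualizer, keeps pages in memory but compresses them with GZIP instead of swapping to disk. It was not part of my tests, but enabling it would be analogous – a hypothetical bean declaration:

	@Bean
	JRGzipVirtualizer gzipVirtualizer() {
		// keep at most 50 uncompressed pages in memory, compress the rest
		return new JRGzipVirtualizer(50);
	}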

There are three endpoints exposed for the tests:
/pdf/{age} – in memory PDF generation
/pdf/fv/{age} – PDF generation with JRFileVirtualizer
/pdf/sfv/{age} – PDF generation with JRSwapFileVirtualizer

Here’s the method generating the PDF report. The report is filled in the fillReport static method from JasperFillManager. It takes three parameters as input: the JasperReport, which encapsulates the compiled .jrxml template, a JDBC connection object and a map of parameters. The report is then exported and saved on disk as a PDF file. The file is returned as an attachment in the response.

	private ResponseEntity<InputStreamResource> generateReport(String name, Map<String, Object> params) {
		FileInputStream st = null;
		Connection cc = null;
		try {
			cc = datasource.getConnection();
			JasperPrint p = JasperFillManager.fillReport(jasperReport, params, cc);
			JRPdfExporter exporter = new JRPdfExporter();
			SimpleOutputStreamExporterOutput c = new SimpleOutputStreamExporterOutput(name);
			exporter.setExporterInput(new SimpleExporterInput(p));
			exporter.setExporterOutput(c);
			exporter.exportReport();

			st = new FileInputStream(name);
			HttpHeaders responseHeaders = new HttpHeaders();
			responseHeaders.setContentType(MediaType.valueOf("application/pdf"));
			responseHeaders.setContentDispositionFormData("attachment", name);
			responseHeaders.setContentLength(st.available());
		    return new ResponseEntity<InputStreamResource>(new InputStreamResource(st), responseHeaders, HttpStatus.OK);
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			fv.cleanup();
			sfv.cleanup();
			if (cc != null)
				try {
					cc.close();
				} catch (SQLException e) {
					e.printStackTrace();
				}
		}
		return null;
	}

To enable a virtualizer during report generation, we only have to pass one additional parameter in the map of parameters – the virtualizer instance.

	@Autowired
	JRFileVirtualizer fv;
	@Autowired
	JRSwapFileVirtualizer sfv;
	@Autowired
	DataSource datasource;
	@Autowired
	JasperReport jasperReport;

	@ResponseBody
	@RequestMapping(value = "/pdf/fv/{age}")
	public ResponseEntity<InputStreamResource> getReportFv(@PathVariable("age") int age) {
		logger.info("getReportFv(" + age + ")");
		Map<String, Object> m = new HashMap<>();
		m.put(JRParameter.REPORT_VIRTUALIZER, fv);
		m.put("age", age);
		String name = ++count + "personReport.pdf";
		return generateReport(name, m);
	}
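
For comparison, the plain in-memory endpoint /pdf/{age} differs only in that no virtualizer is put into the parameter map – roughly:

	@ResponseBody
	@RequestMapping(value = "/pdf/{age}")
	public ResponseEntity<InputStreamResource> getReport(@PathVariable("age") int age) {
		logger.info("getReport(" + age + ")");
		Map<String, Object> m = new HashMap<>();
		m.put("age", age);
		String name = ++count + "personReport.pdf";
		return generateReport(name, m);
	}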

The template file report.jrxml is available in the /src/main/resources directory. Inside the queryString tag there is an SQL query which takes the age parameter in the WHERE clause. There are also five fields declared, all taken from the SQL query result.

<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE jasperReport PUBLIC "//JasperReports//DTD Report Design//EN"    "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">

<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"               xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports    http://jasperreports.sourceforge.net/xsd/jasperreport.xsd"               name="report2" pageWidth="595" pageHeight="842"                columnWidth="555" leftMargin="20" rightMargin="20"               topMargin="20" bottomMargin="20">
    <parameter name="age" class="java.lang.Integer"/>
    <queryString>
        <![CDATA[SELECT * FROM person WHERE age = $P{age}]]>
    </queryString>
    <field name="id" class="java.lang.Integer" />
    <field name="first_name" class="java.lang.String" />
    <field name="last_name" class="java.lang.String" />
    <field name="age" class="java.lang.Integer" />
    <field name="pesel" class="java.lang.String" />

    <detail>
        <band height="15">

            <textField>
                <reportElement x="0" y="0" width="50" height="15" />

                <textElement textAlignment="Right" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.Integer">
                    <![CDATA[$F{id}]]>
                </textFieldExpression>
            </textField>       

            <textField>
                <reportElement x="100" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{first_name}]]>
                </textFieldExpression>
            </textField> 

            <textField>
                <reportElement x="200" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{last_name}]]>
                </textFieldExpression>
            </textField>               

            <textField>
                <reportElement x="300" y="0" width="50" height="15"/>
                <textElement textAlignment="Right" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.Integer">
                    <![CDATA[$F{age}]]>
                </textFieldExpression>
            </textField>

           <textField>
                <reportElement x="380" y="0" width="80" height="15" />

                <textElement textAlignment="Left" verticalAlignment="Middle"/>

                <textFieldExpression class="java.lang.String">
                    <![CDATA[$F{pesel}]]>
                </textFieldExpression>
            </textField>         

        </band>
    </detail>

</jasperReport>

The last thing we have to do is set the database connection pool properly. A natural choice for a Spring Boot application is the Tomcat JDBC pool.

spring:
  application:
    name: jasper-service
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/datagrid?useSSL=false
    username: datagrid
    password: datagrid
    tomcat:
      initial-size: 20
      max-active: 30

Final words

In this article I showed you how to avoid out-of-memory errors while generating large PDF reports with JasperReports. I compared three solutions: in-memory generation and two methods based on cutting the JasperPrint into chunks and saving them on the hard drive. For me, the most interesting was the solution based on a single swap file with JRSwapFileVirtualizer. It is a little slower than in-memory generation but faster than the JRFileVirtualizer tests, and unlike in-memory generation it did not run out of memory for files larger than 500 kB with 20 requests per second.

Testing Java Microservices

While developing a new application we should never forget about testing. This seems particularly important when working with microservices. Testing microservices requires a different approach than designing tests for monolithic applications. With a monolith the main focus is on unit testing and, in most cases, integration tests with the database layer. In the case of microservices, the most important tests seem to be those of the interactions between the microservices. Although every microservice is independently developed and released, a change in one of them can affect all the services interacting with it. Interaction between them is realized by messages, usually sent via REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we test the smallest pieces of code, for example a single method or component, and mock every call to other methods or components. There are many popular frameworks supporting unit tests in Java, like JUnit, TestNG and Mockito for mocking. The main task of this type of testing is to confirm that the implementation meets the requirements.

Integration tests – we test interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. This means we should design test scenarios which exercise all the microservices taking part in the process.

Contract tests – tests at the boundary of an external service, verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.

In the figure below we can see the component diagram of one sample microservice (the customer service). The architecture is similar for all the other sample microservices described in this post. The customer service interacts with a Mongo database and stores all customers there. Mapping between objects and the database is realized with Spring Data’s @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized by Feign REST clients. The customer service collects all of a customer’s accounts and products from external microservices. The @Repository and Feign clients are injected into the @Controller, which is exposed to the outside via a REST resource.

testingmicroservices1

In this article I’ll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

testingmicroservices2

Now, let’s take a look at the big picture. We have four microservices interacting with each other as shown in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes, Eureka discovery needs to be disabled on all microservices.

testingmicroservices3

The sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (the spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple. It extends the MongoRepository CRUD component. The @Repository and Feign clients are injected into CustomerController.

public interface CustomerRepository extends MongoRepository<Customer, String> {

	public Customer findByPesel(String pesel);
	public Customer findById(String id);

}
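
The Customer document itself is a plain POJO mapped with Spring Data annotations. It is not reproduced in the post, but based on how it is used in the repository and controller a minimal sketch might look like this (the exact field list is an assumption):

@Document(collection = "customer")
public class Customer {

	@Id
	private String id;
	private String pesel;
	private String name;

	// filled at runtime from other microservices, not persisted in Mongo
	@Transient
	private List<Account> accounts;
	@Transient
	private List<Product> products;

	public String getId() {
		return id;
	}

	public void setAccounts(List<Account> accounts) {
		this.accounts = accounts;
	}

	public void setProducts(List<Product> products) {
		this.products = products;
	}

	// remaining getters and setters omitted

}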

Here’s the full controller code.

@RestController
public class CustomerController {

	@Autowired
	private AccountClient accountClient;
	@Autowired
	private ProductClient productClient;

	@Autowired
	CustomerRepository repository;

	protected Logger logger = Logger.getLogger(CustomerController.class.getName());

	@RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
	public Customer findById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = repository.findById(id);
		List<Account> accounts =  accountClient.getAccounts(id);
		logger.info(String.format("Customer.findById(): %s", accounts));
		customer.setAccounts(accounts);
		return customer;
	}

	@RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
	public Customer findWithProductsById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findWithProductsById(%s)", id));
		Customer customer = repository.findById(id);
		List<Product> products =  productClient.getProducts(id);
		logger.info(String.format("Customer.findWithProductsById(): %s", products));
		customer.setProducts(products);
		return customer;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.POST)
	public Customer add(@RequestBody Customer customer) {
		logger.info(String.format("Customer.add(%s)", customer));
		return repository.save(customer);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.PUT)
	public Customer update(@RequestBody Customer customer) {
		logger.info(String.format("Customer.update(%s)", customer));
		return repository.save(customer);
	}

}

To replace the external Mongo database with an embedded in-memory instance during automated tests, we only have to add the following dependency to pom.xml.

<dependency>
	<groupId>de.flapdoodle.embed</groupId>
	<artifactId>de.flapdoodle.embed.mongo</artifactId>
	<scope>test</scope>
</dependency>

If we are using different addresses and connection credentials, the application settings should also be overridden in src/test/resources. Here’s the application.yml file for testing. At the bottom there is a setting for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017
logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false

The in-memory MongoDB instance is started automatically during the Spring Boot JUnit test. The next step is to add the Spring Cloud Contract dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract, we also have to add the following plugin to pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>1.1.0.RELEASE</version>
	<extensions>true</extensions>
	<configuration>
			<packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
	</configuration>
</plugin>

The packageWithBaseClasses property defines the package where the base classes extended by the generated test classes are stored. Here’s the base test class for the account service tests. In our sample architecture the account service is only a producer; it does not consume any other services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

As opposed to the account service, the customer service consumes other services to collect the customer’s accounts and products. That’s why the base test class for the customer service needs to define the stub artifact coordinates.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts can be implemented using the Groovy language. Here’s a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'POST'
    url '/accounts'
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
  response {
    status 200
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
}

Test classes are generated under the target/generated-test-sources directory. Here’s the generated class for the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

	@Test
	public void validate_1_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
			assertThatJson(parsedJson).field("balance").isEqualTo(1234);
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
	}

	@Test
	public void validate_2_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
			assertThatJson(parsedJson).field("balance").isEqualTo(4675);
	}

	@Test
	public void validate_3_getAccounts() throws Exception {
		// given:
			MockMvcRequestSpecification request = given();

		// when:
			ResponseOptions response = given().spec(request)
					.get("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
			assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
			assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
			assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
	}

}

In the generated class there are three JUnit tests because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 directory, as we can see in the picture below. The number in each file’s prefix defines the test order. The second scenario has only one definition file and is also used in the customer service (the findById API method). The third scenario has four definition files and is used in the transfer service (the execute API method).

scenarios

As I mentioned before, interaction between microservices is realized with @FeignClient. WireMock, used by Spring Cloud Contract, records the requests/responses defined in scenario2 inside the account service. The recorded interaction is then used by the @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
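
The ProductClient injected into the controller is defined analogously; a sketch might look like this (the exact request path is my assumption):

@FeignClient("product-service")
public interface ProductClient {

	@RequestMapping(method = RequestMethod.GET, value = "/products/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Product> getProducts(@PathVariable("customerId") String customerId);

}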

All the tests are generated and run during the Maven build, for example with the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract, you can read about them here.

Finally, we can define a Continuous Integration pipeline for our microservices. Each of them should be built independently. More about the Continuous Integration / Continuous Delivery environment can be read in one of my previous posts, How to setup Continuous Delivery environment. Here's a sample pipeline created with the Jenkins Pipeline Plugin for the account service. In the Checkout stage we update our working copy to the newest version of the source code from the repository. In the Build stage we start by reading the project version set inside pom.xml, then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline step. The same pipelines can be configured for all the other microservices. In the described sample all microservices are placed in the same Git repository with one Maven version, for simplicity. But we can imagine every microservice living in a different repository with an independent version in its pom.xml. Tests will always be run with the newest version of the stubs, which is set with the + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)

node {

    withMaven(maven: 'Maven') {

        stage ('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
        }

        stage ('Build') {
            def pom = readMavenPom file: 'pom.xml'
            def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
            env.pom_version = version
            print 'Build version: ' + version
            currentBuild.description = "v${version}"

            dir('account-service') {
                bat "mvn clean install -Dmaven.test.failure.ignore=true"
            }

            junit '**/target/surefire-reports/TEST-*.xml'
        }

    }

}

Here’s the pipeline visualization on the Jenkins dashboard.

account-pipeline

 

Microservices Continuous Delivery with Docker and Jenkins

Docker, microservices and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automation of the testing, building and deployment process seems particularly important. Docker is an excellent solution for microservices, because it can create and run isolated containers with a service inside. Today, I’m going to show you how to create a basic Continuous Delivery pipeline for sample microservices using the most popular software automation tool – Jenkins.

Sample Microservices

Before I get into the main topic of the article, I’ll say a few words about the structure and tools used to create the sample microservices. The sample application consists of two microservices communicating with each other (account, customer), a discovery server (Eureka) and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks. Its source code is available on GitHub. Spring Cloud supports microservice discovery and gateways out of the box – we only have to define the right dependencies inside the Maven project configuration file (pom.xml). The picture illustrating the adopted architecture is visible below. The customer and account REST API services, the discovery server and the gateway all run inside separate Docker containers. The gateway is the entry point to the microservices system. It interacts with all other services, proxying requests to the selected microservices and looking up their addresses in the discovery service. If more than one instance of the account or customer microservice exists, requests are load balanced with Ribbon and Feign. The account and customer services register themselves with the discovery server after startup. There is also interaction between them, for example when we would like to find and return all of a customer’s account details.

[figure: sample microservices architecture]

I don’t want to go into the details of implementing those microservices with the Spring Boot and Spring Cloud frameworks. If you are interested in a detailed description of the sample application development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix and Eureka. In that blog post I described how to implement service discovery, distributed tracing, load balancing, logging trace ID propagation and an API gateway for microservices with those solutions.

Dockerfiles

Each service in the sample source code has a Dockerfile with its Docker image build definition. It’s really simple. Here’s the Dockerfile for the account service. We use openjdk as the base image. The JAR file from the target directory is added to the image and then run using the java -jar command. The service runs on port 2222, which is exposed outside.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

We also had to set the main class in the JAR manifest. We achieve it using spring-boot-maven-plugin in the module’s pom.xml. The fragment is visible below. We also set the build finalName to cut the version number off the target JAR file name. The Dockerfile and Maven build definition are pretty similar for all the other microservices.

<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Jenkins pipelines

We use the Pipeline Plugin for building Continuous Delivery for our microservices. In addition to the standard plugin set on Jenkins, we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.

[figure: Jenkins pipelines]

Here’s the pipeline definition, written in Groovy, for the discovery service. We have 5 stages of execution. Inside the Checkout stage we pull changes from the project’s remote Git repository. Then the project is built with the mvn clean install command and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service Dockerfile and then push that image to the local registry. In the fourth step we run the built image with its default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means that the source pipeline finishes and won’t wait for the account pipeline execution to finish.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}").run('-p 8761:8761 -h discovery --name discovery')
        }

        stage ('Final') {
            build job: 'account-service-pipeline', wait: false
        }      

    }

}

The account pipeline is very similar. The main difference is inside the fourth stage, where the account service container is linked to the discovery container. We need to link those containers, because account-service registers itself in the discovery server and must be able to connect to it using its hostname.

node {

    withMaven(maven:'maven') {

        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git', credentialsId: 'github-piomin', branch: 'master'
        }

        stage('Build') {
            sh 'mvn clean install'

            def pom = readMavenPom file:'pom.xml'
            print pom.version
            env.version = pom.version
        }

        stage('Image') {
            dir ('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }

        stage ('Run') {
            docker.image("localhost:5000/account-service:${env.version}").run('-p 2222:2222 -h account --name account --link discovery')
        }

        stage ('Final') {
            build job: 'customer-service-pipeline', wait: false
        }      

    }

}

Similar pipelines are also defined for the customer and gateway services. They are available in each microservice’s directory in the main project as a Jenkinsfile. Every image built during pipeline execution is also pushed to the local Docker registry. To enable a local registry on our host we need to pull and run the Docker registry image and use the registry address as an image name prefix while pulling or pushing. The local registry is exposed on its default port 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.

docker run -d --name registry -p 5000:5000 registry

Testing

You should launch the build on discovery-service-pipeline. This pipeline will not only run the build for the discovery service but also start the next pipeline build (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all pipelines finish, you can check the list of running Docker containers with the docker ps command. You should see 5 containers: the local registry and our four microservices. You can also check the logs of each container by running docker logs, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts or, via the Zuul gateway, http://localhost:8765/account/account.

CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
fa3b9e408bb4        localhost:5000/gateway-service:1.0-SNAPSHOT     "java -jar /gatewa..."   About an hour ago   Up About an hour    0.0.0.0:8765->8765/tcp   gateway
cc9e2b44fe44        localhost:5000/customer-service:1.0-SNAPSHOT    "java -jar /custom..."   About an hour ago   Up About an hour    0.0.0.0:3333->3333/tcp   customer
49657f4531de        localhost:5000/account-service:1.0-SNAPSHOT     "java -jar /accoun..."   About an hour ago   Up About an hour    0.0.0.0:2222->2222/tcp   account
fe07b8dfe96c        localhost:5000/discovery-service:1.0-SNAPSHOT   "java -jar /discov..."   About an hour ago   Up About an hour    0.0.0.0:8761->8761/tcp   discovery
f9a7691ddbba        registry

Conclusion

I have presented a basic sample of a Continuous Delivery environment for microservices using Docker and Jenkins. You can easily spot the limitations of the presented solution: for example, we had to link Docker containers with each other to enable communication between them, and all of the tools and microservices are running on the same machine. For a more advanced sample we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers for simulating multiple Docker machines. I hope this article is a fine introduction to Continuous Delivery for microservices and helps you understand the basics of the idea. You can expect more advanced articles about this subject in the near future.

Apache Karaf Microservices

Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed.

Apache Karaf can be run as a standalone container and provides some enterprise-ready features like a shell console, remote access, hot deployment and dynamic configuration. It can be a perfect solution for microservices. The idea of microservices on Apache Karaf was introduced a few years ago: “What I am promoting is the idea of µServices, the concepts of an OSGi service as a design primitive.” – Peter Kriens, March 2010.

Karaf on Docker

First, we need to run a Docker container with Apache Karaf. Surprisingly, there is no official repository with such an image. I found a Karaf image on Docker Hub here. Unfortunately, it does not expose port 8181 – the default Karaf web port. We will use this image to create our own with port 8181 available outside. Here’s our Dockerfile.

FROM java:8-jdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

ENV KARAF_VERSION=4.0.8

RUN wget http://www-us.apache.org/dist/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /opt/karaf; \
    tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz; \
    rm apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /deploy; \
    sed -i 's/^\(felix\.fileinstall\.dir\s*=\s*\).*$/\1\/deploy/' /opt/karaf/etc/org.apache.felix.fileinstall-deploy.cfg

VOLUME ["/deploy"]
EXPOSE 1099 8101 8181 44444
ENTRYPOINT ["/opt/karaf/bin/karaf"]

Then, by running the Docker commands below, we build our image from the Dockerfile and start a new Karaf container.

docker build -t karaf-api .
docker run -d --name karaf -p 1099:1099 -p 8101:8101 -p 8181:8181 -p 44444:44444 karaf-api

Now we can log in to the new Docker container (1). Karaf is installed in the /opt/karaf directory. We run the client by calling ./client in the /opt/karaf/bin directory (2). Then we install the Apache Felix web console, which is available by default on port 8181 (3). You can check it out by opening http://192.168.99.100:8181/system/console in a web browser. The default username and password is karaf. In the web console you can see the full list of features installed on our OSGi container. You can also display that list in the Karaf console using the feature:list command (4). After the web console installation, you decide whether you prefer the Karaf command line or the Apache Felix console for further actions. For our sample application we need to add some OSGi repositories and features. First, we add the Apache CXF framework repository (5) and its features for HTTP and RESTful web services (6). Then we add the repository for the Jackson framework (7) and some Jackson and Jetty server features (8).

docker exec -i -t karaf /bin/bash (1)
cd /opt/karaf/bin
./client (2)
karaf@root()> feature:install webconsole (3)
karaf@root()> feature:list (4)
karaf@root()> feature:repo-add cxf 3.1.10 (5)
karaf@root()> feature:install http cxf-jaxrs cxf (6)
karaf@root()> feature:repo-add mvn:org.code-house.jackson/features/2.7.6/xml/features (7)
karaf@root()> feature:install jackson-jaxrs-json-provider jetty (8)

Microservices

Our environment has been configured. Now we can take a brief look at the sample application. It’s really simple. It has only three modules: account-cxf, customer-cxf and sample-api. In the sample-api module we have the base service interfaces and model objects. In account-cxf and customer-cxf there are the service implementations and OSGi service declarations in Blueprint files. The sample application source code is available on GitHub. Here’s the account service implementation class and its interface below.

public class AccountServiceImpl implements AccountService {

	private List<Account> accounts;

	public AccountServiceImpl() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, "1234567890", 12345, 1));
		accounts.add(new Account(2, "1234567891", 6543, 2));
		accounts.add(new Account(3, "1234567892", 45646, 3));
	}

	public Account findById(Integer id) {
		return accounts.stream().filter(a -> a.getId().equals(id)).findFirst().get();
	}

	public List<Account> findAll() {
		return accounts;
	}

	public Account add(Account account) {
		accounts.add(account);
		account.setId(accounts.size());
		return account;
	}

	@Override
	public List<Account> findAllByCustomerId(Integer customerId) {
		return accounts.stream().filter(a -> a.getCustomerId().equals(customerId)).collect(Collectors.toList());
	}

}

The AccountService interface is located in the sample-api module. We use JAX-RS annotations to declare the REST endpoints.

public interface AccountService {

	@GET
	@Path("/{id}")
	@Produces("application/json")
	public Account findById(@PathParam("id") Integer id);

	@GET
	@Path("/")
	@Produces("application/json")
	public List<Account> findAll();

	@GET
	@Path("/customer/{customerId}")
	@Produces("application/json")
	public List<Account> findAllByCustomerId(@PathParam("customerId") Integer customerId);

	@POST
	@Path("/")
	@Consumes("application/json")
	@Produces("application/json")
	public Account add(Account account);

}

Here you can see the OSGi service declarations in the blueprint.xml file. We have declared an accountServiceImpl bean and set that bean as the service bean for the JAX-RS endpoint. The endpoint uses JacksonJsonProvider as its data format provider. There is also an important OSGi service declaration exposing the AccountService interface and referencing AccountServiceImpl. This service will be available to other microservices deployed on the Karaf container, for example customer-cxf.

    <cxf:bus id="accountRestBus">
    </cxf:bus>

    <bean id="accountServiceImpl" class="pl.piomin.services.cxf.account.service.AccountServiceImpl"/>
    <service ref="accountServiceImpl" interface="pl.piomin.services.cxf.api.AccountService" />

    <jaxrs:server address="/account" id="accountService">
        <jaxrs:serviceBeans>
            <ref component-id="accountServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
        <jaxrs:providers>
        	<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider"/>
        </jaxrs:providers>
    </jaxrs:server>

Now, let's take a look at the customer-cxf microservice. Here's the OSGi blueprint of that service. The JAX-RS server declaration is pretty similar to the one for account-cxf. There is only one addition in comparison with the previously presented blueprint: a reference to AccountService, which is injected into CustomerServiceImpl.

	<reference id="accountService" 		interface="pl.piomin.services.cxf.api.AccountService" />

	<bean id="customerServiceImpl" 		class="pl.piomin.services.cxf.customer.service.CustomerServiceImpl">
		<property name="accountService" ref="accountService" />
	</bean>

	<jaxrs:server address="/customer" id="customerService">
		<jaxrs:serviceBeans>
			<ref component-id="customerServiceImpl" />
		</jaxrs:serviceBeans>
		<jaxrs:features>
			<cxf:logging />
		</jaxrs:features>
		<jaxrs:providers>
			<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
		</jaxrs:providers>
	</jaxrs:server>

CustomerService uses the OSGi reference to AccountService in its findById method to collect all accounts belonging to the customer with the specified id path parameter, and it also exposes some other operations.

public class CustomerServiceImpl implements CustomerService {

	private AccountService accountService;

	private List<Customer> customers;

	public CustomerServiceImpl() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "XXX", "1234567890"));
		customers.add(new Customer(2, "YYY", "1234567891"));
		customers.add(new Customer(3, "ZZZ", "1234567892"));
	}

	@Override
	public Customer findById(Integer id) {
		Customer c = customers.stream().filter(a -> a.getId().equals(id)).findFirst().get();
		c.setAccounts(accountService.findAllByCustomerId(id));
		return c;
	}

	@Override
	public List<Customer> findAll() {
		return customers;
	}

	@Override
	public Customer add(Customer customer) {
		customers.add(customer);
		customer.setId(customers.size());
		return customer;
	}

	public AccountService getAccountService() {
		return accountService;
	}

	public void setAccountService(AccountService accountService) {
		this.accountService = accountService;
	}

}

Each service has packaging type bundle inside its pom.xml and uses maven-bundle-plugin during the build process. After running mvn clean install on the root project, all bundles are generated in the modules' target directories. You can install them using the Apache Felix web console or the Karaf command line client, in the following order: sample-api, account-cxf, customer-cxf. An example using the Karaf console is shown below.
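
Here's a minimal installation example using the Karaf console. The Maven group id and version used here are only assumptions for illustration; check the pom.xml files in the GitHub repository for the actual coordinates.

karaf@root()> bundle:install -s mvn:pl.piomin.services/sample-api/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services/account-cxf/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services/customer-cxf/1.0-SNAPSHOT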

Testing

Finally, you can see the list of CXF endpoints available on Karaf by opening http://192.168.99.100:8181/cxf in your web browser. Call http://192.168.99.100:8181/cxf/customer/1 to test the findById method of CustomerService. You should see JSON with customer data and all accounts collected from the account microservice.

Conclusion

Treat this post as a short introduction to the microservices concept on the Apache Karaf OSGi container. I showed you how to use CXF endpoints on the Karaf container as a kind of service gateway, and OSGi services for inter-service communication between the deployed microservices. Instead of an OSGi reference we could use a JAX-RS proxy client for connecting to the account service from the customer service. You can find some basic examples of that concept on the web. There are also more advanced solutions available for service registration and discovery on Karaf, for example remote service calls with Apache ZooKeeper. I think we will take a closer look at them in subsequent posts.

Microservices security with Oauth2

Preface

One of the most important aspects to consider when exposing a publicly accessible API consisting of many microservices is security. Spring has some interesting features and frameworks which make configuring security for our microservices easier. In this article I'm going to show you how to use Spring Cloud and OAuth2 to provide token-based access security behind an API gateway.

Theory

The OAuth2 standard is currently used by all the major websites that allow you to access their resources through a shared API. It is an open authorization standard that allows users to share private resources stored on one site with another site without having to hand over their credentials. These are the basic terms related to OAuth2.

  • Resource Owner – controls access to the resource
  • Resource Server – server that stores the owner’s resources that can be shared using special token
  • Authorization Server – manages the allocation of keys, tokens and other temporary resource access codes. It also has to ensure that access is granted to the relevant person
  • Access Token – the key that allows access to a resource
  • Authorization Grant – grants permission for access. There are different ways to confirm access: authorization code, implicit, resource owner password credentials, and client credentials

You can read more about this standard here and in this digitalocean article. The flow of this protocol has three main steps. In the beginning an authorization request is sent to the Resource Owner. After the response from the Resource Owner, we send an authorization grant request to the Authorization Server and receive an access token. Finally, we send this access token to the Resource Server, and if it is valid, the API serves the resource to the application.

Our solution

The picture below shows the architecture of our sample. We have an API gateway (Zuul) which proxies our requests to the authorization server and to two instances of the account microservice. The authorization server is a kind of infrastructure service which provides the OAuth2 security mechanisms. We also have a discovery service (Eureka) where all of our microservices are registered.

sec-micro

Gateway

For our sample we won't provide any security on the API gateway. It just has to proxy requests from clients to the authorization server and the account microservices. In the Zuul gateway configuration visible below, we set the sensitiveHeaders property to an empty value to enable forwarding of the Authorization HTTP header. By default Zuul strips that header while forwarding requests to the target API, which would break the authorization required by our services behind the gateway.

zuul:
  routes:
    uaa:
      path: /uaa/**
      sensitiveHeaders:
      serviceId: auth-server
    account:
      path: /account/**
      sensitiveHeaders:
      serviceId: account-service

The main class inside the gateway source code is very simple. It only has to enable the Zuul proxy feature and the discovery client for collecting services from the Eureka registry.

@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
public class GatewayServer {

	public static void main(String[] args) {
		SpringApplication.run(GatewayServer.class, args);
	}

}

Authorization Server

Our authorization server is as simple as possible. It is based on the default Spring Security configuration. Client authorization details are stored in an in-memory repository. Of course, in production you would use other implementations instead of the in-memory repository, like a JDBC datasource and token store (a sketch of such a setup is shown after the configuration). You can read more about Spring authorization mechanisms in the Spring Security Reference and the Spring Boot Security documentation. Here's a fragment of the configuration from application.yml. We provide user basic authentication data and the security credentials for the /token endpoint: client-id and client-secret. The user credentials are the normal Spring Security user details.

security:
  user:
    name: root
    password: password
  oauth2:
    client:
      client-id: acme
      client-secret: secret
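
For production use, a rough sketch of a JDBC-backed configuration could look like the class below. This is only an illustration and not part of the sample application; it assumes a configured dataSource bean and the standard Spring Security OAuth2 schema (oauth_client_details, oauth_access_token and so on).

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;
import org.springframework.security.oauth2.provider.token.TokenStore;
import org.springframework.security.oauth2.provider.token.store.JdbcTokenStore;

@Configuration
@EnableAuthorizationServer
public class OAuth2ServerConfig extends AuthorizationServerConfigurerAdapter {

	@Autowired
	private DataSource dataSource; // assumed to be configured elsewhere

	@Bean
	public TokenStore tokenStore() {
		// persists access and refresh tokens in the oauth_access_token/oauth_refresh_token tables
		return new JdbcTokenStore(dataSource);
	}

	@Override
	public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
		// reads client-id/client-secret pairs from the oauth_client_details table
		clients.jdbc(dataSource);
	}

	@Override
	public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
		endpoints.tokenStore(tokenStore());
	}

}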

Here's the main class of our authorization server, annotated with @EnableAuthorizationServer. We also exposed one REST endpoint with user authentication details for the account service and enabled Eureka registration and discovery for clients.

@SpringBootApplication
@EnableAuthorizationServer
@EnableDiscoveryClient
@EnableResourceServer
@RestController
public class AuthServer {

	public static void main(String[] args) {
		SpringApplication.run(AuthServer.class, args);
	}

	@RequestMapping("/user")
	public Principal user(Principal user) {
		return user;
	}

}

Application – account microservice

Our sample microservice has only one endpoint for @GET requests, which always returns the same account. In the main class, the resource server and Eureka discovery are enabled. The service configuration is trivial. Sample application source code is available on GitHub. A sketch of such an endpoint is shown after the configuration below.

@SpringBootApplication
@EnableDiscoveryClient
@EnableResourceServer
public class AccountService {

	public static void main(String[] args) {
		SpringApplication.run(AccountService.class, args);
	}

}

security:
  user:
    name: root
    password: password
  oauth2:
    resource:
      loadBalanced: true
      userInfoUri: http://localhost:9999/user
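
The GET endpoint itself is not listed in the article, so here is only a minimal sketch of how it could look. The Account DTO and the /{id} path are assumptions made for illustration (the gateway route adds the /account prefix and strips it before forwarding); see the GitHub repository for the real implementation.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AccountController {

	@GetMapping("/{id}")
	public Account findById(@PathVariable("id") String id) {
		// always returns the same, hard-coded account regardless of the requested id
		return new Account(id, "123456789", 1234);
	}

	// hypothetical DTO used only for this sketch
	public static class Account {
		private String id;
		private String number;
		private int amount;

		public Account(String id, String number, int amount) {
			this.id = id;
			this.number = number;
			this.amount = amount;
		}

		public String getId() { return id; }
		public String getNumber() { return number; }
		public int getAmount() { return amount; }
	}

}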

Testing

We only need a web browser and a REST client (for example the Chrome Advanced REST Client) to test our solution. Let's start by sending an authorization request to the resource owner. We can call the OAuth2 authorize endpoint via the Zuul gateway in a web browser.

http://localhost:8765/uaa/oauth/authorize?response_type=token&client_id=acme&redirect_uri=http://example.com&scope=openid&state=48532

After sending this request we should see the page below. Select Approve and click Authorize to request an access token from the authorization server. If the application identity is authenticated and the authorization grant is valid, an access token is returned to the application in the HTTP response.

oauth2

http://example.com/#access_token=b1acaa35-1ebd-4995-987d-56ee1c0619e5&token_type=bearer&state=48532&expires_in=43199

The final step is to call the account endpoint using the access token. We have to put it into the Authorization header as a bearer token. In the sample application the logging level for security operations is set to TRACE, so you can easily find out what happened if something goes wrong. An equivalent command-line request is shown below the screenshot.

call
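
If you prefer the command line, an equivalent request could look like the example below. It reuses the access token returned earlier and assumes an account endpoint like the /{id} sketch above, reachable through the gateway route /account.

curl -H "Authorization: Bearer b1acaa35-1ebd-4995-987d-56ee1c0619e5" http://localhost:8765/account/1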

Conclusion

To be honest, I'm not very familiar with security issues in applications, so one very important thing for me is the simplicity of the security solution I decided to use. In Spring Security we have almost all the needed mechanisms out of the box. It also provides components which can easily be extended for more advanced requirements. You should treat this article as a brief introduction to more advanced solutions based on the Spring Cloud and Spring Security projects.

Reactive microservices with Spring 5

The Spring team has announced support for the reactive programming model in the 5.0 release. The new Spring version will probably be released in March. Fortunately, milestone and snapshot versions with these changes are already available in the public Spring repositories. There is a new Spring Web Reactive project with support for reactive @Controller classes and also a new WebClient with client-side reactive support. Today I'm going to take a closer look at the solutions suggested by the Spring team.

Following the Spring WebFlux documentation, the Spring Framework uses Reactor internally for its own reactive support. Reactor is a Reactive Streams implementation that further extends the basic Reactive Streams Publisher contract with the Flux and Mono composable API types to provide declarative operations on data sequences of 0..N and 0..1. On the server side, Spring supports annotation-based and functional programming models. The annotation model uses @Controller and the other annotations also supported by Spring MVC. A reactive controller is very similar to a standard REST controller for synchronous services, except that it operates on Flux, Mono and Publisher objects. Today I'm going to show you how to develop simple reactive microservices using the annotation model and the reactive MongoDB module. Sample application source code is available on GitHub.
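
Before diving into the sample, here's a minimal, standalone sketch illustrating the difference between Flux and Mono. It is not part of the sample application and only assumes reactor-core on the classpath.

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorIntro {

	public static void main(String[] args) {
		// Flux represents an asynchronous sequence of 0..N elements
		Flux<Integer> numbers = Flux.just(1, 2, 3).map(n -> n * 2);
		// Mono represents an asynchronous sequence of 0..1 elements
		Mono<Integer> sum = numbers.reduce(0, Integer::sum);
		// prints 12 once the pipeline completes
		sum.subscribe(System.out::println);
	}

}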

For our example we need to use snapshots of Spring Boot 2.0.0 and Spring Web Reactive 0.1.0. Here are the relevant fragments of the parent pom.xml and a single microservice's pom.xml below. In our microservices we use Netty instead of the default Tomcat server.

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.0.BUILD-SNAPSHOT</version>
	</parent>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot.experimental</groupId>
				<artifactId>spring-boot-dependencies-web-reactive</artifactId>
				<version>0.1.0.BUILD-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot.experimental</groupId>
			<artifactId>spring-boot-starter-web-reactive</artifactId>
			<exclusions>
				<exclusion>
					<groupId>org.springframework.boot</groupId>
					<artifactId>spring-boot-starter-tomcat</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.ipc</groupId>
			<artifactId>reactor-netty</artifactId>
		</dependency>
		<dependency>
			<groupId>io.netty</groupId>
			<artifactId>netty-all</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>common</artifactId>
			<version>${project.version}</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.addons</groupId>
			<artifactId>reactor-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

We have two microservices: account-service and customer-service. Each of them has its own MongoDB database and exposes a simple reactive API for searching and saving data. Additionally, customer-service interacts with account-service to fetch all of a customer's accounts and return them from a customer-service method. Here's our account controller code.

@RestController
public class AccountController {

	@Autowired
	private AccountRepository repository;

	@GetMapping(value = "/account/customer/{customer}")
	public Flux<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		return repository.findByCustomerId(customerId)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account")
	public Flux<Account> findAll() {
		return repository.findAll().map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account/{id}")
	public Mono<Account> findById(@PathVariable("id") Integer id) {
		return repository.findById(id)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@PostMapping("/person")
	public Mono<Account> create(@RequestBody Publisher<Account> accountStream) {
		return repository
				.save(Mono.from(accountStream)
						.map(a -> new pl.piomin.services.account.model.Account(a.getNumber(), a.getCustomerId(),
								a.getAmount())))
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

}

In all API methods we also perform mapping from the Account entity (a MongoDB @Document) to the Account DTO available in our common module. Here's the account repository class. It uses ReactiveMongoTemplate for interacting with Mongo collections; a sketch of the document class itself follows the repository code.

@Repository
public class AccountRepository {

	@Autowired
	private ReactiveMongoTemplate template;

	public Mono<Account> findById(Integer id) {
		return template.findById(id, Account.class);
	}

	public Flux<Account> findAll() {
		return template.findAll(Account.class);
	}

	public Flux<Account> findByCustomerId(Integer customerId) {
		return template.find(query(where("customerId").is(customerId)), Account.class);
	}

	public Mono<Account> save(Mono<Account> account) {
		return template.insert(account);
	}

}
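
For reference, the Account document used by the controller and repository above could look roughly like the sketch below. The field list and constructor are inferred from the calls in AccountController; check the GitHub repository for the actual class.

package pl.piomin.services.account.model;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Account {

	@Id
	private Integer id;
	private Integer customerId;
	private String number;
	private int amount; // the amount type is an assumption

	public Account() {
	}

	public Account(String number, Integer customerId, int amount) {
		this.number = number;
		this.customerId = customerId;
		this.amount = amount;
	}

	public Integer getId() { return id; }
	public Integer getCustomerId() { return customerId; }
	public String getNumber() { return number; }
	public int getAmount() { return amount; }

}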

In our Spring Boot main or @Configuration class we should declare spring beans for MongoDB with connection settings.

@SpringBootApplication
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	public @Bean MongoClient mongoClient() {
		return MongoClients.create("mongodb://192.168.99.100");
	}

	public @Bean ReactiveMongoTemplate reactiveMongoTemplate() {
		return new ReactiveMongoTemplate(mongoClient(), "account");
	}

}

I used a MongoDB Docker container while working on this sample.

docker run -d --name mongo -p 27017:27017 mongo

In the customer service we call the /account/customer/{customer} endpoint of the account service. I declared a WebClient @Bean in our main class.

	public @Bean WebClient webClient() {
		return WebClient.builder().clientConnector(new ReactorClientHttpConnector()).baseUrl("http://localhost:2222").build();
	}

Here's a fragment of the customer controller. The @Autowired WebClient calls the account service after the customer has been fetched from MongoDB. A sketch of the repository's findByPesel method follows the fragment.

	@Autowired
	private WebClient webClient;

	@GetMapping(value = "/customer/accounts/{pesel}")
	public Mono<Customer> findByPeselWithAccounts(@PathVariable("pesel") String pesel) {
		return repository.findByPesel(pesel)
				.flatMap(customer -> webClient.get()
						.uri("/account/customer/{customer}", customer.getId())
						.accept(MediaType.APPLICATION_JSON)
						.exchange()
						.flatMap(response -> response.bodyToFlux(Account.class)))
				.collectList()
				.map(l -> new Customer(pesel, l));
	}
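
The findByPesel method used above is not shown in the fragment. A minimal sketch, assuming the same ReactiveMongoTemplate approach (and the same static imports of query and where) as in AccountRepository, could look like this; the Customer document is assumed to have a pesel field.

	public Mono<Customer> findByPesel(String pesel) {
		// finds a single customer document by its pesel field
		return template.findOne(query(where("pesel").is(pesel)), Customer.class);
	}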

We can test the GET calls using a web browser or a REST client. With POST it's not so simple. Here are two simple test cases for adding a new customer and getting a customer with accounts. The getCustomerAccounts test needs the account service running on port 2222.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerTest {

	private static final Logger logger = Logger.getLogger("CustomerTest");

	private WebClient webClient;

	@LocalServerPort
	private int port;

	@Before
	public void setup() {
		this.webClient = WebClient.create("http://localhost:" + this.port);
	}

	@Test
	public void getCustomerAccounts() {
		Customer customer = this.webClient.get().uri("/customer/accounts/234543647565")
				.accept(MediaType.APPLICATION_JSON).exchange().then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

	@Test
	public void addCustomer() {
		Customer customer = new Customer(null, "Adam", "Kowalski", "123456787654");
		customer = webClient.post().uri("/customer").accept(MediaType.APPLICATION_JSON)
				.exchange(BodyInserters.fromObject(customer)).then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

}

Conclusion

The Spring initiative to support reactive programming seems promising, but it is at an early stage of development right now. There is no ability to use it with popular projects from Spring Cloud like Eureka, Ribbon or Hystrix; when I tried to add these dependencies to pom.xml, my service failed to start. I hope that in the near future functionalities like service discovery and load balancing will also be available for reactive microservices, the same as for synchronous REST microservices. Spring also has support for the reactive model in the Spring Cloud Stream project, which is more stable than the WebFlux framework. I'll try to use it in the future.