Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles in which I used Docker containers for running third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if our application integrates with databases, message brokers or other popular tools. If you are planning to build such integration tests, you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx or HashiCorp’s Vault. Today I’m going to show you a more advanced sample of JUnit tests that use Testcontainers to verify the integration between a Spring Boot/Spring Cloud application, a Postgres database and Vault. For the purposes of this example we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case.
There, I described how to use a very interesting Vault feature called secret engines for generating database user credentials dynamically. I used the Spring Cloud Vault module in my Spring Boot application to automatically integrate with that feature of Vault. The implemented mechanism is pretty easy. The application calls the Vault secret engine before it tries to connect to the Postgres database on startup. Vault is integrated with Postgres via the secret engine, which allows it to create a user with sufficient privileges on Postgres. Then, the generated credentials are automatically injected into the auto-configured Spring Boot properties used for connecting to the database: spring.datasource.username and spring.datasource.password. The following picture illustrates the described solution.

testcontainers-1.png

Ok, we know how it works, so now the question is how to test it automatically. With Testcontainers it is possible with just a few lines of code.

1. Building application

Let’s begin with a short intro to the application code. It is very simple. Here’s the list of dependencies required for building an application that exposes a REST API and integrates with Postgres and Vault.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The application connects to Postgres, enables integration with Vault via Spring Cloud Vault, and automatically creates/updates tables on startup.

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres
  jpa.hibernate.ddl-auto: update

It exposes a single endpoint. The following method is responsible for handling incoming requests. It just inserts a record into the database and returns a response with the app name, version and the id of the inserted record.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
	
	@Autowired
	Optional<BuildProperties> buildProperties;
	@Autowired
	CallmeRepository repository;
	
	@GetMapping("/message/{message}")
	public String ping(@PathVariable("message") String message) {
		Callme c = repository.save(new Callme(message, new Date()));
		if (buildProperties.isPresent()) {
			BuildProperties infoProperties = buildProperties.get();
			LOGGER.info("Ping: name={}, version={}", infoProperties.getName(), infoProperties.getVersion());
			return infoProperties.getName() + ":" + infoProperties.getVersion() + ":" + c.getId();
		} else {
			return "callme-service:"  + c.getId();
		}
	}
	
}
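
The controller above references the Callme entity and the CallmeRepository, which are not listed in this article. Here’s a minimal sketch of how they might look – the field names, the id type and the annotations are my assumptions, derived only from the constructor and the getId() call used in the controller.

@Entity
public class Callme {

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Integer id; // returned in the REST response
	private String message;
	@Temporal(TemporalType.TIMESTAMP)
	private Date createdAt;

	public Callme() {
	}

	public Callme(String message, Date createdAt) {
		this.message = message;
		this.createdAt = createdAt;
	}

	public Integer getId() {
		return id;
	}

}

public interface CallmeRepository extends CrudRepository<Callme, Integer> {
}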

2. Enabling Testcontainers

To enable Testcontainers for our project, we need to include some dependencies in our Maven pom.xml. There are dedicated modules for Postgres and Vault. We also include the Spring Boot Test dependency, because we would like to test the whole Spring Boot app.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>vault</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>postgresql</artifactId>
	<version>1.10.5</version>
	<scope>test</scope>
</dependency>

3. Running Vault test container

The Testcontainers framework supports JUnit 4/JUnit 5 and Spock. The Vault container can be started before tests if it is annotated with @Rule or @ClassRule. By default, it runs Vault in version 0.7, but we can override it with the newest version, which is 1.0.2. We may also set a root token, which is then required by Spring Cloud Vault for integrating with Vault.

@ClassRule
public static VaultContainer vaultContainer = new VaultContainer<>("vault:1.0.2")
	.withVaultToken("123456")
	.withVaultPort(8200);

That root token is then passed to the Spring Boot application by overriding the spring.cloud.vault.token property on the test class.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, properties = {
    "spring.cloud.vault.token=123456"
})
public class CallmeTest { ... }

4. Running Postgres test container

As an alternative to @ClassRule, we can manually start the container in a @BeforeClass or @Before method in the test. With this approach you will also have to stop it manually in an @AfterClass or @After method. We start the Postgres container manually, because by default it is exposed on a dynamically generated port, which needs to be set for the Spring Boot application before starting the test. The listen port is returned by the getFirstMappedPort method invoked on PostgreSQLContainer.

private static PostgreSQLContainer postgresContainer = new PostgreSQLContainer()
	.withDatabaseName("postgres")
	.withUsername("postgres")
	.withPassword("postgres123");
	
@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	// ...
}

@AfterClass
public static void shutdown() {
	postgresContainer.stop();
}

5. Integrating Vault and Postgres containers

Once we have successfully started both the Vault and Postgres containers, we need to integrate them via the Vault secret engine. First, we need to enable the database secret engine in Vault. After that, we must configure the connection to Postgres. The last step is to configure a role. A role is a logical name that maps to a policy used for generating those credentials. All these actions may be performed using Vault commands. You can launch a command on the Vault container using the execInContainer method. The Vault configuration commands should be executed just after the Postgres container startup.

@BeforeClass
public static void init() throws IOException, InterruptedException {
	postgresContainer.start();
	int port = postgresContainer.getFirstMappedPort();
	System.setProperty("spring.datasource.url", String.format("jdbc:postgresql://192.168.99.100:%d/postgres", postgresContainer.getFirstMappedPort()));
	vaultContainer.execInContainer("vault", "secrets", "enable", "database");
	String url = String.format("connection_url=postgresql://{{username}}:{{password}}@192.168.99.100:%d?sslmode=disable", port);
	vaultContainer.execInContainer("vault", "write", "database/config/postgres", "plugin_name=postgresql-database-plugin", "allowed_roles=default", url, "username=postgres", "password=postgres123");
	vaultContainer.execInContainer("vault", "write", "database/roles/default", "db_name=postgres",
		"creation_statements=CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";",
		"default_ttl=1h", "max_ttl=24h");
}

6. Running application tests

Finally, we may run the application tests. We just call the single endpoint exposed by the app using TestRestTemplate and verify the output.

@Autowired
TestRestTemplate template;

@Test
public void test() {
	String res = template.getForObject("/callme/message/{message}", String.class, "Test");
	Assert.assertNotNull(res);
	Assert.assertTrue(res.endsWith("1"));
}

If you are interested in what exactly happens during the test, you can set a breakpoint inside the test method and execute the docker ps command manually.

testcontainers-2
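
Alternatively, instead of pausing on a breakpoint, you can log some basic information about the started containers directly from the test using the Testcontainers API. Here’s a small sketch of such logging, which could be placed at the beginning of the test method (it assumes a LOGGER is defined in the test class).

// hypothetical logging of container details from inside the test
LOGGER.info("Vault container: id={}, mapped port={}", vaultContainer.getContainerId(),
		vaultContainer.getMappedPort(8200));
LOGGER.info("Postgres container: id={}, jdbc url={}", postgresContainer.getContainerId(),
		postgresContainer.getJdbcUrl());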


RabbitMQ Cluster with Consul and Vault

Almost two years ago I wrote an article about RabbitMQ clustering, RabbitMQ in cluster. It was one of the first posts on my blog, and it’s really hard to believe it has been two years since I started this blog. Anyway, one of the questions about the topic described in the mentioned article inspired me to return to that subject one more time. That question concerned the approach to setting up the cluster. This approach assumes that we manually attach new nodes to the cluster by executing the command rabbitmqctl join_cluster with the cluster name as a parameter. If I remember correctly, it was the only available method of creating a cluster at that time. Today we have more choices, which illustrates the evolution of RabbitMQ over the last two years. A RabbitMQ cluster can be formed in a number of ways:

  • Manually with rabbitmqctl (as described in my article RabbitMQ in cluster)
  • Declaratively by listing cluster nodes in config file
  • Using DNS-based discovery
  • Using AWS (EC2) instance discovery via a dedicated plugin
  • Using Kubernetes discovery via a dedicated plugin
  • Using Consul discovery via a dedicated plugin
  • Using etcd-based discovery via a dedicated plugin

Today, I’m going to show you how to create a RabbitMQ cluster using service discovery based on HashiCorp’s Consul. Additionally, we will include Vault in our architecture in order to use its interesting feature called secrets engine for managing the credentials used for accessing RabbitMQ. We will set up this sample on the local machine using Docker images of RabbitMQ, Consul and Vault. Finally, we will test our solution using a simple Spring Boot application that sends messages to the cluster and listens for incoming ones. That application is available in the GitHub repository sample-haclustered-rabbitmq-service in the consul branch.

Architecture

We use Vault as a credentials manager when an application tries to authenticate against a RabbitMQ node or a user tries to log in to the RabbitMQ web admin console. Each RabbitMQ node registers itself in Consul after startup and retrieves the list of nodes running inside the cluster. Vault is integrated with RabbitMQ using a dedicated secrets engine. Here’s the architecture of our sample solution.

rabbit-consul-logo

1. Configure RabbitMQ Consul plugin

The integration between RabbitMQ and Consul is realized via the rabbitmq-peer-discovery-consul plugin. This plugin is not enabled by default in the official RabbitMQ Docker image. So, the first step is to build our own Docker image, based on the official RabbitMQ image, that enables the required plugin. By default, the main RabbitMQ configuration file is available under the path /etc/rabbitmq/rabbitmq.conf inside the Docker container. To override it, we just use the COPY statement as shown below. The following Dockerfile definition takes RabbitMQ with the management web console as the base image and enables the rabbitmq_peer_discovery_consul plugin.

FROM rabbitmq:3.7.8-management
COPY rabbitmq.conf /etc/rabbitmq
RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul

Now, let’s take a closer look at our plugin configuration settings. Because I run Docker on Windows, Consul is not available under the default localhost address, but on 192.168.99.100. So, first we need to set that IP address using the property cluster_formation.consul.host. We also need to set Consul as the peer discovery implementation by setting the name of the plugin in the property cluster_formation.peer_discovery_backend. Finally, we have to set two additional properties to make it work in our local Docker environment. They are related to the address of the RabbitMQ node sent to Consul during the registration process. It is important to compute it properly, and not to send, for example, localhost. After setting the property cluster_formation.consul.svc_addr_use_nodename to false, the node will register itself using its host name instead of its node name. We can set the host name for a container in its run command. Here’s the full RabbitMQ configuration file used in the demo for this article.

loopback_users.guest = false
listeners.tcp.default = 5672
hipe_compile = false
management.listener.port = 15672
management.listener.ssl = false
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_consul
cluster_formation.consul.host = 192.168.99.100
cluster_formation.consul.svc_addr_auto = true
cluster_formation.consul.svc_addr_use_nodename = false

After saving the configuration visible above in the rabbitmq.conf file, we can proceed to building our custom RabbitMQ Docker image. This image is available in my Docker Hub repository under the alias piomin/rabbitmq, but you can also build it yourself from the Dockerfile by executing the following command.

$ docker build -t piomin/rabbitmq:1.0 .
Sending build context to Docker daemon  3.072kB
Step 1 : FROM rabbitmq:3.7.8-management
 ---> d69a5113ceae
Step 2 : COPY rabbitmq.conf /etc/rabbitmq
 ---> aa306ef88085
Removing intermediate container fda0e21178f9
Step 3 : RUN rabbitmq-plugins enable --offline rabbitmq_peer_discovery_consul
 ---> Running in 0892a42bffef
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@fda0e21178f9...
The following plugins have been enabled:
  rabbitmq_peer_discovery_common
  rabbitmq_peer_discovery_consul

set 5 plugins.
Offline change; changes will take effect at broker restart.
 ---> cfe73f9d9904
Removing intermediate container 0892a42bffef
Successfully built cfe73f9d9904

2. Running RabbitMQ cluster on Docker

In the previous step, we successfully created a Docker image of RabbitMQ configured to run in cluster mode using Consul discovery. Before running this image, we need to start an instance of Consul. Here’s the command that starts a Docker container with Consul and exposes it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

We will also create a Docker network to enable communication between containers by hostname. It is required in this scenario, because each RabbitMQ container registers itself using the container hostname.

$ docker network create rabbitmq

Now, we can run our three clustered RabbitMQ containers. We will set a unique hostname for every container (using the -h option) and attach all of them to the same Docker network. We also have to set the RABBITMQ_ERLANG_COOKIE environment variable to the same value for all containers.

$ docker run -d --name rabbit1 -h rabbit1 --network rabbitmq -p 30000:5672 -p 30010:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit2 -h rabbit2 --network rabbitmq -p 30001:5672 -p 30011:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0
$ docker run -d --name rabbit3 -h rabbit3 --network rabbitmq -p 30002:5672 -p 30012:15672 -e RABBITMQ_ERLANG_COOKIE='rabbitmq' piomin/rabbitmq:1.0

After running all three instances of RabbitMQ, we can first take a look at the Consul web console. You should see a new service there called rabbitmq. This value is the default cluster name set by the RabbitMQ Consul plugin. We can override it inside rabbitmq.conf using the cluster_formation.consul.svc property.

rabbit-consul-1

We can check whether the cluster has started successfully using the RabbitMQ web management console. Every node exposes it. I just had to override the default port 15672 to avoid port conflicts between the three running instances.

rabbit-consul-10

3. Integrating RabbitMQ with Vault

In the two previous steps, we successfully ran a cluster of three RabbitMQ nodes based on Consul discovery. Now, we will include Vault in our sample system to dynamically generate user credentials. Let’s begin with running Vault on Docker. You can find detailed information about it in my previous article Secure Spring Cloud Microservices with Vault and Nomad. We will run Vault in development mode using the following command.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

You can copy the root token from the container logs using the docker logs -f vault command. Then you have to log in to the Vault web console, available at http://192.168.99.100:8200, using this token and enable the RabbitMQ secrets engine as shown below.

rabbit-consul-2

And confirm.

rabbit-consul-3

You can easily run Vault commands using the terminal provided by the web admin console, or do the same thing using the HTTP API. The first command visible below is used for writing connection details. We just need to pass the RabbitMQ address and admin user credentials. The provided configuration points to the first RabbitMQ node, but the changes are then replicated to the whole cluster.

$ vault write rabbitmq/config/connection connection_uri="http://192.168.99.100:30010" username="guest" password="guest"

The next step is to configure a role that maps a name in Vault to virtual host permissions.

$ vault write rabbitmq/roles/default vhosts='{"/":{"write": ".*", "read": ".*"}}'

We can test our newly created configuration by running the command vault read rabbitmq/creds/default as shown below.

rabbit-consul-4

4. Sample application

Our sample application is pretty simple. It consists of two modules. The first of them, sender, is responsible for sending messages to RabbitMQ, while the second, listener, receives incoming messages. Both of them are Spring Boot applications that integrate with RabbitMQ and Vault using the following dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-rabbitmq</artifactId>
	<version>2.0.2.RELEASE</version>
</dependency>

We need to provide some configuration settings in the bootstrap.yml file to integrate our application with Vault. First, we need to enable that integration by setting the property spring.cloud.vault.rabbitmq.enabled to true. Of course, the Vault address and root token are required. It is also important to set the property spring.cloud.vault.rabbitmq.role with the name of the Vault role configured in step 3. Spring Cloud Vault injects the username and password generated by Vault into the application properties spring.rabbitmq.username and spring.rabbitmq.password, so the only thing we need to configure in the bootstrap.yml file is the list of available cluster nodes.

spring:
  rabbitmq:
    addresses: 192.168.99.100:30000,192.168.99.100:30001,192.168.99.100:30002
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: s.7DaENeiqLmsU5ZhEybBCRJhp
      rabbitmq:
        enabled: true
        role: default
        backend: rabbitmq

For test purposes, you should enable highly available (mirrored) queues on RabbitMQ. For instructions on how to configure them using policies, you can refer to my article RabbitMQ in cluster. The application works at the level of exchanges. The auto-configured connection factory is injected into the application and set on the RabbitTemplate bean.

@SpringBootApplication
public class Sender {
	
	private static final Logger LOGGER = LoggerFactory.getLogger("Sender");
	
	@Autowired
	RabbitTemplate template;

	public static void main(String[] args) {
		SpringApplication.run(Sender.class, args);
	}

	@PostConstruct
	public void send() {
		for (int i = 0; i < 1000; i++) {
			int id = new Random().nextInt(100000);
			template.convertAndSend(new Order(id, "TEST"+id, OrderType.values()[(id%2)]));
		}
		LOGGER.info("Sending completed.");
	}
    
    @Bean
    public RabbitTemplate template(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setExchange("ex.example");
        return rabbitTemplate;
    }
    
}
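
Both applications exchange Order objects, and that class is not listed in this article. Here’s a minimal sketch of how it might look – the field names and the OrderType values are my assumptions; only the (id, name, type) constructor is visible in the sender code above. The default SimpleMessageConverter uses Java serialization, so the class implements Serializable.

public class Order implements Serializable {

	private int id;
	private String name;
	private OrderType type;

	public Order() {
	}

	public Order(int id, String name, OrderType type) {
		this.id = id;
		this.name = name;
		this.type = type;
	}

	@Override
	public String toString() {
		return "Order [id=" + id + ", name=" + name + ", type=" + type + "]";
	}

}

public enum OrderType {
	NEW, COMPLETED
}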

Our listener app is connected only to the third node of the cluster (spring.rabbitmq.addresses=192.168.99.100:30002). However, the test queue is mirrored across all clustered nodes, so it is able to receive messages sent by the sender app. You can easily test it using my sample applications.

@SpringBootApplication
@EnableRabbit
public class Listener {

	private static final  Logger LOGGER = LoggerFactory.getLogger("Listener");

	private Long timestamp;

	public static void main(String[] args) {
		SpringApplication.run(Listener.class, args);
	}

	@RabbitListener(queues = "q.example")
	public void onMessage(Order order) {
		if (timestamp == null)
			timestamp = System.currentTimeMillis();
		LOGGER.info((System.currentTimeMillis() - timestamp) + " : " + order.toString());
	}

	@Bean
	public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
		SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
		factory.setConnectionFactory(connectionFactory);
		factory.setConcurrentConsumers(10);
		factory.setMaxConcurrentConsumers(20);
		return factory;
	}
	
}
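
The sample assumes that the exchange ex.example used by the sender is bound to the queue q.example consumed by the listener. That setup is not shown in the listings above – it can be created manually in the RabbitMQ management console, or declared in the application with a configuration similar to the sketch below (the fanout exchange type is my assumption; queue mirroring is still configured with a RabbitMQ policy, as mentioned earlier).

@Configuration
public class RabbitConfig {

	// Declares the exchange used by the sender, the queue consumed by the listener
	// and a binding between them. The names are taken from the code above.
	@Bean
	public FanoutExchange exchange() {
		return new FanoutExchange("ex.example");
	}

	@Bean
	public Queue queue() {
		return new Queue("q.example");
	}

	@Bean
	public Binding binding(Queue queue, FanoutExchange exchange) {
		return BindingBuilder.bind(queue).to(exchange);
	}

}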

Secure Spring Cloud Microservices with Vault and Nomad

One of the significant topics related to microservices security is managing and protecting sensitive data like tokens, passwords or certificates used by your application. As a developer, you probably often implement software that connects to external databases, message brokers or just other applications. How do you store the credentials used by your application? To be honest, most of the software code I have seen in my life just stored sensitive data as plain text in the configuration files. Thanks to that, I was always able to retrieve the credentials to every database I needed just by looking at the application source code. Of course, we can always encrypt sensitive data, but if we are working with many microservices having separate databases, it may not be a very comfortable solution.

Today I’m going to show you how to integrate your Spring Boot application with HashiCorp’s Vault in order to store your sensitive data properly. The first good news is that you don’t have to create any keys or certificates for encryption and decryption, because Vault will do it for you. In a few places in this article I’ll refer to my previous article about HashiCorp’s solutions, Deploying Spring Cloud Microservices on HashiCorp’s Nomad. Now, as then, I also deploy my sample applications on Nomad to take advantage of the built-in integration between those two very interesting HashiCorp tools. We will also use another HashiCorp solution for service discovery in inter-service communication – Consul. It’s also worth mentioning that Spring Cloud provides a dedicated project for integration with Vault – Spring Cloud Vault.

Architecture

The sample presented in this article consists of two applications deployed on HashiCorp’s Nomad: callme-service and caller-service. Microservice caller-service is calling an endpoint exposed by callme-service. Inter-service communication is performed using the name of the target application registered in the Consul server. Microservice callme-service stores the history of all interactions triggered by caller-service in a database. The database credentials are stored in Vault. Nomad is integrated with Vault and stores the root token, which is not visible to the applications. The architecture of the described solution is shown in the following picture.

vault-1

The current sample is pretty similar to the sample presented in my article Deploying Spring Cloud Microservices on Hashicorp’s Nomad. It is also available in the same GitHub repository sample-nomad-java-service, but in a different branch – vault. The current sample adds an integration with PostgreSQL and the Vault server for managing database credentials.

1. Running Vault

We will run Vault inside a Docker container in development mode. A server in development mode does not require any further setup – it is ready to use right after startup. It provides in-memory encrypted storage and an unsecured (HTTP) connection, which is not a problem for demo purposes. We can override the default server IP address and listening port by setting the environment property VAULT_DEV_LISTEN_ADDRESS, but we won’t do that. After startup, our instance of Vault is available on port 8200. We can use the admin web console, which in my case is available at http://192.168.99.100:8200. The current version of Vault is 1.0.0.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

It is possible to log in using different methods, but the most suitable way for us is with a token. To do that, we have to display the container logs using the command docker logs vault, and then copy the Root Token as shown below.

vault-1

Now you can login to Vault web console.

vault-2

2. Integration with Postgres database

In Vault we can create a secrets engine that connects to other services and generates dynamic credentials on demand. Secrets engines are mounted under a path. There is a dedicated engine for various databases, for example PostgreSQL. Before activating such an engine, we should run an instance of the Postgres database. This time we will also use a Docker container. It is possible to set the database login and password using environment variables.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=postgres123456 -e POSTGRES_USER=postgres postgres

After starting the database, we may proceed to the engine configuration in the Vault web console. First, let’s create our first secrets engine. We may choose between several different engine types. The right choice for now is Databases.

vault-3

You can apply a new configuration to Vault using the vault command or via the HTTP API. The Vault web console provides a terminal for running CLI commands, but it can be problematic in some cases. For example, I had a problem with escaping strings in some SQL commands, and therefore I had to add them using the HTTP API. No matter which method you use, the next steps are the same. Following the Vault documentation, we first need to configure the plugin for the PostgreSQL database and then provide connection settings and credentials.

$ vault write database/config/postgres plugin_name=postgresql-database-plugin allowed_roles="default" connection_url="postgresql://{{username}}:{{password}}@192.168.99.100:5432?sslmode=disable" username="postgres" password="postgres123456"

Alternatively, you can perform the same action using the HTTP API. To authenticate against Vault, we need to add the X-Vault-Token header with the root token. I have disabled SSL for the connection with Postgres by setting sslmode=disable. There is only one role allowed to use this plugin: default. Now, let’s configure that role.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"plugin_name": "postgresql-database-plugin","allowed_roles": "default","connection_url": "postgresql://{{username}}:{{password}}@localhost:5432?sslmode=disable","username": "postgres","password": "postgres123456"}' http://192.168.99.100:8200/v1/database/config/postgres

The role can be created either with the CLI or with the HTTP API. The name of the role should be the same as the name passed in the allowed_roles field in the previous step. We also have to set the target database name and the SQL statement that creates a user with privileges.

$ vault write database/roles/default db_name=postgres creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" default_ttl="1h" max_ttl="24h"

Alternatively you can call the following HTTP API endpoint.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"db_name":"postgres", "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";"]}' http://192.168.99.100:8200/v1/database/roles/default

And that’s all. Now, we can test our configuration using the command vault read database/creds/default (with the role’s name at the end) as shown below. You can log in to the database using the returned credentials. By default, they are valid for one hour.

vault-5

3. Enabling Spring Cloud Vault

We have successfully configured the secrets engine that is responsible for creating users on Postgres. Now, we can proceed to development and integrate our application with Vault. Fortunately, there is the Spring Cloud Vault project, which provides out-of-the-box integration with Vault database secrets engines. The only thing we have to do is to include Spring Cloud Vault in our project and provide some configuration settings. Let’s start by setting the Spring Cloud Release Train. We use the newest stable version, Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We have to include two dependencies in our pom.xml. The starter spring-cloud-starter-vault-config is responsible for loading configuration from Vault, while spring-cloud-vault-config-databases is responsible for integration with the database secrets engines.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The sample application also connects to a Postgres database, so we will include the following dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The only thing we have to do is to configure the integration with Vault via Spring Cloud Vault. The following configuration settings should be placed in bootstrap.yml (not application.yml). Because we run our application on the Nomad server, we use the port number dynamically set by Nomad, available under the environment property NOMAD_HOST_PORT_http, and the secret token from Vault, available under the environment property VAULT_TOKEN.

server:
  port: ${NOMAD_HOST_PORT_http:8091}

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres

The important part of the configuration visible above is under the property spring.cloud.vault.postgresql. Following the Spring Cloud documentation: “Username and password are stored in spring.datasource.username and spring.datasource.password so using Spring Boot will pick up the generated credentials for your DataSource without further configuration”. Spring Cloud Vault connects to Vault, and then uses the role default (previously created on Vault) to generate new database credentials. Those credentials are injected into the spring.datasource properties. Then, the application connects to the database using the injected credentials. Everything works fine.
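
If you would like to see the generated credentials in action, a quick, purely illustrative way is to log the resolved property on startup – for example with a CommandLineRunner like the sketch below (this class is hypothetical and not part of the original sample).

@Component
public class DatasourceCredentialsLogger implements CommandLineRunner {

	private static final Logger LOGGER = LoggerFactory.getLogger(DatasourceCredentialsLogger.class);

	// The value is injected by Spring Cloud Vault and changes on every application start.
	@Value("${spring.datasource.username}")
	private String username;

	@Override
	public void run(String... args) {
		LOGGER.info("Database user generated by Vault: {}", username);
	}

}

Now, let’s try to run our applications on Nomad.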

4. Deploying apps on Nomad

Before starting the Nomad node, we should also run Consul using its Docker container. Here’s the Docker command that starts a single-node Consul instance.

$ docker run -d --name consul -p 8500:8500 consul

After that, we can configure the connection settings to Consul and Vault in the Nomad configuration. I have created the file nomad.conf. Nomad authenticates itself against Vault using the root token. The connection with Consul is not secured. Sometimes it is also required to set the network interface name and the total CPU on the machine for the Nomad client. Most clients are able to determine it automatically, but it did not work for me.

client {
  network_interface = "Połączenie lokalne 4"
  cpu_total_compute = 10400
}

consul {
  address = "192.168.99.100:8500"
}

vault {
  enabled = true
  address = "http://192.168.99.100:8200"
  token = "s.6jhQ1WdcYrxpZmpa0RNd0LMw"
}

Let’s run Nomad in development mode, passing the configuration file location.

$ nomad agent -dev -config=nomad.conf

If everything works fine, you should see a log similar to the one below on startup.

vault-6

Once we have successfully started the Nomad agent integrated with Consul and Vault, we can proceed to application deployment. First, build the whole project with the mvn clean install command. The next step is to prepare Nomad’s job descriptor file. For more details about the Nomad deployment process and its descriptor file, you can refer to my previous article about it (mentioned in the preface of this article). The descriptor file is available in the application’s GitHub repository under the path callme-service/job.nomad for callme-service, and caller-service/job.nomad for caller-service.

job "callme-service" {
	datacenters = ["dc1"]
	type = "service"
	group "callme" {
		count = 2
		task "api" {
			driver = "java"
			config {
				jar_path    = "C:\\Users\\minkowp\\git\\sample-nomad-java-services-idea\\callme-service\\target\\callme-service-1.0.0-SNAPSHOT.jar"
				jvm_options = ["-Xmx256m", "-Xms128m"]
			}
			resources {
				cpu    = 500 # MHz
				memory = 300 # MB
				network {
					port "http" {}
				}
			}
			service {
				name = "callme-service"
				port = "http"
			}
			vault {
				policies = ["nomad"]
			}
		}
		restart {
			attempts = 1
		}
	}
}

You will have to change the value of the jar_path property to the path of your application binaries. Before applying this deployment to Nomad, we have to add some additional configuration on Vault. When adding the integration with Vault, we have to pass the names of the policies used for checking permissions. I set the policy with the name nomad, which now has to be created in Vault. Our application requires permission for reading the paths /secret/* and /database/* as shown below.

vault-7

Finally, we can deploy our application callme-service on Nomad by executing the following command.

$ nomad job run job.nomad

A similar descriptor file is available for caller-service, so we can also deploy it. All the microservices have been registered in Consul as shown below.

vault-8

Here is the list of registered instances of caller-service. As you can see in the picture below, it is available under port 25816.

vault-9

You can also take a look at the Nomad jobs view.

vault-10

Running Jenkins Server with Configuration-As-Code

Some days ago I came across a newly created Jenkins plugin called Configuration as Code (JcasC). This plugin allows you to define the Jenkins configuration in a format very popular these days – YAML notation. It is interesting that such a plugin had not been created before, but better late than never. Of course, we could have used some other Jenkins plugins for that, like the Job DSL Plugin, but it is based on the Groovy language.
If you have any experience with Jenkins, you probably know how many plugins and other configuration settings it requires in order to work as the main CI server in your organization. With the JcasC plugin, you can store such a configuration in human-readable, declarative YAML files. In this article, I’m going to show you how to create and run Jenkins with configuration as code, letting you build a Java application using tools like declarative pipelines, Git and Maven. I’ll also show how to manage sensitive data using the Vault server.

1. Using Vault server

We will begin with running the Vault server on the local machine. The easiest way to do that is with a Docker image. By default, the official Vault image is started in development mode. The following command runs an in-memory server, which listens on the address 0.0.0.0:8200.

docker run -d --name=vault --cap-add=IPC_LOCK -p 8200:8200 vault:0.9.0

There is one thing that should be clarified here. I do not run the newest version of Vault, because it forces us to call endpoints from version 2 of the KV (Key-Value Secrets Engine) HTTP API, which is used for manipulating secrets. This version, in turn, is not supported by the JcasC plugin, which can communicate only with endpoints from version 1 of the KV HTTP API. It does not apply to older versions of Vault, for example 0.9.0, which allows calling KV in version 1. After running the container, we should obtain the token used for authentication against Vault from the console logs. To do that, just run the command docker logs vault and find the following fragment in the logs.

vault-token

Now, using this authentication token, we may add the credentials required for accessing the Jenkins web dashboard and our account on the Git repository host. The Jenkins account will be identified by the rootPassword key, while the GitHub account by the githubPassword key.

$ curl -H "X-Vault-Token:  5bcab13b-6cf5-2f58-8b37-34dca31bebde" --request POST -d '{"rootPassword":"your_root_password", "githubPassword":"your_github_password"}' https://192.168.99.100:8200/v1/secret/jenkins

To check whether the parameters have been saved in Vault, just call the GET method on the same context path.

$ curl -H "X-Vault-Token:  5bcab13b-6cf5-2f58-8b37-34dca31bebde" https://192.168.99.100:8200/v1/secret/jenkins

2. Building Jenkins image

The same as for the Vault server, we also run Jenkins in a Docker container. However, we need to add some configuration settings to the official Jenkins image before running it. The JcasC plugin requires setting an environment variable that points to the location of the YAML configuration files. This variable can point to the following:

  • Path to a folder containing a set of config files
  • A full path to a single file
  • A URL pointing to a file served on the web, or for example your internal configuration server

The next step is to set the configuration settings required for establishing a connection to the Vault server. We have to pass the authentication token, the path of the created key and the URL of the running server. All these configuration settings are set as environment variables and may be overridden on container startup. The same rule applies to the location of the JcasC configuration file. The following Dockerfile definition extends the Jenkins base image and adds all the parameters required for running it with the JcasC plugin and with secrets taken from Vault.

FROM jenkins/jenkins:lts
ENV CASC_JENKINS_CONFIG="/var/jenkins_home/jenkins.yml"
ENV CASC_VAULT_TOKEN=5bcab13b-6cf5-2f58-8b37-34dca31bebde
ENV CASC_VAULT_PATH=/secret/jenkins
ENV CASC_VAULT_URL=http://192.168.99.100:8200
COPY jenkins.yml ${CASC_JENKINS_CONFIG}
USER jenkins
RUN /usr/local/bin/install-plugins.sh configuration-as-code configuration-as-code-support git workflow-cps-global-lib

Now, let’s build the Docker image using the Dockerfile visible above. Alternatively, you can just pull the image stored in my Docker Hub repository.

$ docker build -t piomin/jenkins-casc:1.0 .

Finally, you can run the container based on the built image with the following command. Of course, before that we need to prepare the YAML configuration file for JcasC plugin.

$ docker run -d --name jenkins-casc -p 8080:8080 -p 50000:50000 piomin/jenkins-casc:1.0

3. Preparing configuration

The JcasC plugin provides many configuration settings that allow you to configure the various components of your Jenkins master installation. However, I will limit myself to defining the basic configuration used for building my sample Java application. We need the following Jenkins components to be configured after startup:

  1. A set of Jenkins plugins allowing us to create a declarative pipeline that checks out source code from a Git repository, builds it using Maven, and records JUnit test results
  2. A basic security realm containing credentials for a single Jenkins user. The user password is read from the rootPassword property stored on the Vault server
  3. The JDK location directory
  4. Maven installation settings – Maven is not installed by default in Jenkins, so we have to set the required version and tool name
  5. Credentials for accessing the Git repository containing the application source code

The following configuration file covers all the points listed above (the numbered comments correspond to the list items).
plugins: # (1)
  required:
    git: 3.9.1
    pipeline-model-definition: 1.3.2
    pipeline-stage-step: 2.3
    pipeline-maven: 3.5.12
    workflow-aggregator: 2.5
    junit: 1.25
  sites:
  - id: "default"
    url: "https://updates.jenkins.io/update-center.json"
jenkins:
  agentProtocols:
  - "JNLP4-connect"
  - "Ping"
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: false
  disableRememberMe: false
  mode: NORMAL
  numExecutors: 2
  primaryView:
    all:
      name: "all"
  quietPeriod: 5
  scmCheckoutRetryCount: 0
  securityRealm: # (2)
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
      - id: "piomin"
        password: ${rootPassword}
  slaveAgentPort: 50000
  views:
  - all:
      name: "all"
tool:
  git:
    installations:
    - home: "git"
      name: "Default"
  jdk: # (3)
    installations:
    - home: "/docker-java-home"
      name: "jdk"
  maven: # (4)
    installations:
    - name: "maven"
      properties:
      - installSource:
          installers:
          - maven:
              id: "3.5.4"
credentials: # (5)
  system:
    domainCredentials:
      - domain :
          name: "github.com"
          description: "GitHub"
        credentials:
          - usernamePassword:
              scope: SYSTEM
              id: github-piomin
              username: piomin
              password: ${githubPassword}

4. Exporting configuration

After running Jenkins with the JcasC plugin installed, you can easily export the current configuration to a file. First, navigate to the section Manage Jenkins -> Configuration as Code.

jcasc-1

Then, after clicking the Export Configuration button, a YAML file with the Jenkins configuration will be downloaded to your machine. But, following the comment visible below, you cannot rely on that file, because this feature is still not stable. For my configuration, it didn’t export the Maven tool settings or the list of Jenkins plugins. However, the JcasC plugin is still under active development, so I hope that feature will work successfully soon.

jcasc-2

5. Running sample pipeline

Finally, you can create and run a pipeline for your sample application. Here’s the definition of my pipeline.

pipeline {
    agent any
    tools {
        maven 'maven'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
		stage('Test') {
            steps {
                dir('example-service') {
                    sh 'mvn clean test'
                }
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean install'
                }
            }
        }
    }
    post {
        always {
            junit '**/target/reports/**/*.xml'
        }
    }
}

Summary

The idea behind the Jenkins Configuration as Code plugin is a step in the right direction. I’ll be following the development of this product with great interest. There are still some features that need to be added to make it more useful, and some bugs that need to be fixed. But after that, I will definitely consider using this plugin for maintaining the current Jenkins master server inside my organization.