Apache Ignite Cluster together with Spring Boot

I have already introduced Apache Ignite in one of my previous articles, In-memory data grid with Apache Ignite. Apache Ignite can be easily launched locally together with a Spring Boot application. The only thing we have to do is to include the artifact org.apache.ignite:ignite-spring-data in the project dependencies and then declare an Ignite instance @Bean. A sample @Bean declaration is visible below.

@Bean
public Ignite igniteInstance() {
	IgniteConfiguration cfg = new IgniteConfiguration();
	cfg.setIgniteInstanceName("ignite-cluster-node");
	// declare two caches with SQL-indexed key and value types
	CacheConfiguration ccfg1 = new CacheConfiguration("PersonCache");
	ccfg1.setIndexedTypes(Long.class, Person.class);
	CacheConfiguration ccfg2 = new CacheConfiguration("ContactCache");
	ccfg2.setIndexedTypes(Long.class, Contact.class);
	cfg.setCacheConfiguration(ccfg1, ccfg2);
	// redirect Ignite's internal logging to SLF4J
	IgniteLogger log = new Slf4jLogger();
	cfg.setGridLogger(log);
	return Ignition.start(cfg);
}

In this article I would like to show you a slightly more advanced sample, where we will start multiple Ignite nodes inside a cluster, Ignite's web console for monitoring the cluster, and Ignite's web agent for providing communication between the cluster's nodes and the web console. Let's begin by looking at the picture with the architecture of our sample solution.

ignite-2-1

We have three nodes which are part of the cluster. If you take a careful look at the picture illustrating the architecture, you have probably noticed that two nodes are labeled Server Node and one is labeled Client Node. By default, all Ignite nodes are started as server nodes; client mode needs to be explicitly enabled. Server nodes participate in caching, compute execution and stream processing, while client nodes provide the ability to connect to the servers remotely. At the same time, client nodes allow using the whole set of Ignite APIs, including near caching, transactions, compute and streaming.

Here's the Ignite client instance @Bean declaration.

@Bean
public Ignite igniteInstance() {
	IgniteConfiguration cfg = new IgniteConfiguration();
	cfg.setIgniteInstanceName("ignite-cluster-node");
	cfg.setClientMode(true); // join the cluster as a client node – no cache data is stored here
	CacheConfiguration ccfg1 = new CacheConfiguration("PersonCache");
	ccfg1.setIndexedTypes(Long.class, Person.class);
	CacheConfiguration ccfg2 = new CacheConfiguration("ContactCache");
	ccfg2.setIndexedTypes(Long.class, Contact.class);
	cfg.setCacheConfiguration(ccfg1, ccfg2);
	return Ignition.start(cfg);
}

The fact is that we don't have to do anything more to make our nodes work together within the cluster. Every new node is automatically detected by all other cluster nodes using multicast communication. When starting our sample application, we only have to guarantee that each instance's server listens on a different port by overriding the server.port Spring Boot property. Here's the command that starts the sample application, which is available on GitHub (https://github.com/piomin/sample-ignite-jpa.git) under branch cluster (https://github.com/piomin/sample-ignite-jpa/tree/cluster). Each node exposes the same REST API, which may be easily tested using Swagger2 just by opening its dashboard available under the address http://localhost:port/swagger-ui.html.

java -jar -Dserver.port=8901 -Xms512m -Xmx1024m -XX:+UseG1GC -XX:+DisableExplicitGC -XX:MaxDirectMemorySize=256m target/ignite-rest-service-1.0-SNAPSHOT.jar
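
Under the hood, the automatic node detection mentioned above is handled by Ignite's TcpDiscoverySpi, which uses a multicast-based IP finder by default. If you ever need to tune it, here's a minimal sketch of making that default explicit inside the igniteInstance() bean (the multicast group below is an arbitrary example value of mine, not taken from the sample project):

// Sketch: configure the default multicast-based discovery explicitly
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setMulticastGroup("228.10.10.157"); // example group, not from the sample project
discoverySpi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discoverySpi);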

If you have successfully started a new node, you should see information similar to the following in your application logs.

>>> +----------------------------------------------------------------------+
>>> Ignite ver. 2.4.0#20180305-sha1:aa342270b13cc1f4713382a8eb23b2eb7edaa3a5
>>> +----------------------------------------------------------------------+
>>> OS name: Windows 10 10.0 amd64
>>> CPU(s): 4
>>> Heap: 1.0GB
>>> VM name: 14132@piomin
>>> Ignite instance name: ignite-cluster-node
>>> Local node [ID=9DB1296A-7EEC-4564-BAAD-14E5D4A3A08D, order=2, clientMode=false]
>>> Local node addresses: [piomin/0:0:0:0:0:0:0:1, piomin/127.0.0.1, piomin/192.168.1.102, piomin/192.168.116.1, /192.168.226.1, /192.168.99.1]
>>> Local ports: TCP:8082 TCP:10801 TCP:11212 TCP:47101 UDP:47400 TCP:47501

Let's move back for a moment to the source code of our sample application. I assume you have already cloned the repository from GitHub. There are two Maven modules available. The module ignite-rest-service is responsible for starting an Ignite cluster node in server mode, while ignite-client-service starts a node in client mode. Because we run only a single instance of the client node, we do not override its default port set inside the application.yml file. You can build the project using the mvn clean install command and then start it with java -jar, or just run the main class IgniteClientApplication from your IDE.

There is also a JUnit test class inside the module ignite-client-service, which defines one test responsible for calling HTTP endpoints (POST /person, POST /contact) that put data into Ignite's cache. This test performs two operations: it puts some data into the Ignite in-memory cluster by calling endpoints exposed by the client node, and then checks if that data has been propagated through the cluster by calling the GET /person/{id}/withContacts endpoint exposed by a randomly selected server node.

public class TestCluster {

	TestRestTemplate template = new TestRestTemplate();
	Random r = new Random();
	int[] clusterPorts = new int[] {8901, 8902};

	@Test
	public void testCluster() throws InterruptedException {
		for (int i=0; i<1000; i++) {
			// insert a person with two contacts through the client node (port 8090)
			Person p = template.postForObject("http://localhost:8090/person", createPerson(), Person.class);
			Assert.notNull(p, "Create person failed");
			Contact c1 = template.postForObject("http://localhost:8090/contact", createContact(p.getId(), 0), Contact.class);
			Assert.notNull(c1, "Create contact failed");
			Contact c2 = template.postForObject("http://localhost:8090/contact", createContact(p.getId(), 1), Contact.class);
			Assert.notNull(c2, "Create contact failed");
			Thread.sleep(10);
			// read the data back from a randomly selected server node
			Person result = template.getForObject("http://localhost:{port}/person/{id}/withContacts", Person.class, clusterPorts[r.nextInt(2)], p.getId());
			Assert.notNull(result, "Person not found");
			Assert.notEmpty(result.getContacts(), "Contacts not found");
		}
		}
	}

	private Contact createContact(Long personId, int index) {
		...
	}

	private Person createPerson() {
		...
	}

}

Before running any tests, we should launch two additional elements that are part of our architecture: Ignite's web console and the web agent. The most suitable way to run Ignite's web console on the local machine is through its Docker image apacheignite/web-console-standalone. Here's the Docker command that starts Ignite's web console and exposes it on port 80. Because I run Docker on Windows, it is available under the default VM address http://192.168.99.100/.

docker run -d -p 80:80 -p 3001:3001 -v /var/data:/var/lib/mongodb --name ignite-web-console apacheignite/web-console-standalone

In order to access it, you should first register a user. Although a mail server is not available on the Docker container, you will be logged in after registration anyway. You can configure your cluster using Ignite's web console and also run some SQL queries on that cluster. Of course, we still need to connect our cluster, consisting of three nodes, to the web console instance started as a Docker container. To achieve this you have to download the web agent. It is probably not very intuitive, but you have to click the Start Demo button, which is located in the right corner of Ignite's web console. Then you will be redirected to the download page, where you can accept the download of the ignite-web-agent-2.4.0.zip file, which contains all the libraries and configuration needed to start the web agent locally.

ignite-2-2

After downloading and unpacking the web agent, go to its main directory and change the server-uri property to http://192.168.99.100 inside the default.properties file. Then you may run the ignite-web-agent.bat script (or .sh if you are testing it on Linux), which starts the web agent. Unfortunately, that's not all that has to be done. Every server node's application should include the artifact ignite-rest-http in order to be able to communicate with the agent. It is responsible for exposing the HTTP endpoint that is accessed by the web agent. It is based on the Jetty server, which causes some problems in conjunction with Spring Boot, because Spring Boot manages default versions of the Jetty libraries used in the project. The problem is that ignite-rest-http requires older versions of those libraries, so we also have to override some default managed versions in the pom.xml file, according to the sample visible below.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-http</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-server</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-io</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-continuation</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-util</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-xml</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
	</dependencies>
</dependencyManagement>

After implementing the changes described above, we may finally proceed to running all the elements of our sample system. If you start the Ignite web agent locally, it should automatically detect all running cluster nodes. Here's a screen with the logs displayed by the agent after startup.

ignite-2-3

At the same time, you should see that a new cluster has been detected by the Ignite web console.

ignite-2-4

You can configure a new or currently existing cluster using the web console, or just run a test query on the selected managed cluster. You have to include the cache name as a prefix to the table name when defining a query.

ignite-2-5
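
The same cache-name prefix applies when running cross-cache SQL from application code. Here's a minimal sketch using Ignite's SqlFieldsQuery API (assuming access to the started Ignite instance; the variable names are illustrative):

// Sketch: cross-cache join run against PersonCache; ContactCache is
// referenced through its quoted cache name, just like in the web console.
IgniteCache<Long, Person> cache = ignite.cache("PersonCache");
SqlFieldsQuery query = new SqlFieldsQuery(
		"SELECT p.firstName, c.location FROM Person p " +
		"JOIN \"ContactCache\".Contact c ON p.id=c.personId");
List<List<?>> rows = cache.query(query).getAll();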

Similar queries have to be declared inside a repository interface. Here are additional methods used for finding entities stored in PersonCache. If you would like to include results stored in another cache, you have to explicitly declare its name together with the table name.

@RepositoryConfig(cacheName = "PersonCache")
public interface PersonRepository extends IgniteRepository {

	List findByFirstNameAndLastName(String firstName, String lastName);

	@Query("SELECT p.id, p.firstName, p.lastName, c.id, c.type, c.location FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.id=?")
	List<List> findByIdWithContacts(Long id);

	@Query("SELECT c.* FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List selectContacts(String firstName, String lastName);

	@Query("SELECT p.id, p.firstName, p.lastName, c.id, c.type, c.location FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List<List> selectContacts2(String firstName, String lastName);
}

We are nearing the end. Now, let's run our JUnit test TestCluster in order to generate some test data and put it into the clustered cache. You can monitor the size of a cache using the web console. All you have to do is run a SELECT COUNT(*) query and set graph mode as the default mode for displaying results. The chart visible below illustrates the number of entities stored inside Ignite's cluster, sampled at 5-second intervals.

ignite-2-6
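
If you prefer to check the cache size programmatically rather than in the console, a short sketch (assuming access to the Ignite instance) could look like this:

// Sketch: count entries in PersonCache via SQL or via the cache size API
long sqlCount = (Long) ignite.cache("PersonCache")
		.query(new SqlFieldsQuery("SELECT COUNT(*) FROM Person"))
		.getAll().get(0).get(0);
int apiCount = ignite.cache("PersonCache").size(CachePeekMode.PRIMARY);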


In-memory data grid with Apache Ignite

Apache Ignite is a relatively new solution, but one quickly increasing in popularity. It is hard to assign it to a single category of database engines, because it has characteristics typical of several of them. Its primary purpose is to be an in-memory data grid and key-value store. It also has some common RDBMS features, like support for SQL queries and ACID transactions. But that's not to say it is a fully SQL-compliant and transactional database: it does not support foreign key constraints, and transactions are available only at the key-value level. Despite that, Apache Ignite seems to be a very interesting solution.

Apache Ignite may be easily started as a node embedded in a Spring Boot application. The simplest way to achieve that is by using the Spring Data Ignite library. Apache Ignite implements the Spring Data CrudRepository interface, which supports basic CRUD operations, and also provides access to the Apache Ignite SQL Grid using the unified Spring Data interfaces. Although Ignite has support for distributed, ACID and SQL-compliant disk store persistence, here we will design a solution which stores in-memory cache objects in a MySQL database. The architecture of the presented solution is visible in the figure below, and as you can see, it is very simple. The application puts data into the in-memory cache on Apache Ignite. Apache Ignite automatically synchronizes these changes with the database in an asynchronous, background task. The way the application reads data should also not surprise you: if an entity is not cached, it is read from the database and put into the cache for future use.

ignite
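
From the application's point of view, the cache behaves like a plain key-value store – the database synchronization is transparent. Here's a hypothetical sketch of what that means in code, assuming the cache configuration presented later in this article:

// Sketch: with write-behind and read-through enabled, cache operations
// transparently synchronize with the underlying MySQL tables.
IgniteCache<Long, Person> cache = ignite.cache("PersonCache");
cache.put(1L, person);         // write-behind: persisted to MySQL asynchronously
Person loaded = cache.get(2L); // read-through: loaded from MySQL if not cached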

I'm going to guide you through the process of developing the sample application. The result of this development is available on GitHub. I have found a few examples on the web, but they covered only the basics. I'll show you how to configure Apache Ignite to write objects from the cache to the database and how to create some more complex cross-cache join queries. Let's begin by running the database.

1. Setup MySQL database

The best way to start a MySQL database locally is, of course, with a Docker container. With the command below, on Docker for Windows the MySQL database is available at 192.168.99.100:33306.

docker run -d --name mysql -e MYSQL_DATABASE=ignite -e MYSQL_USER=ignite -e MYSQL_PASSWORD=ignite123 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -p 33306:3306 mysql

The next step is to create the tables used by the application entities to store the data: PERSON and CONTACT. Those two tables are in a 1…N relation, where table CONTACT holds a foreign key referencing the PERSON id.

CREATE TABLE `person` (
  `id` int(11) NOT NULL,
  `first_name` varchar(45) DEFAULT NULL,
  `last_name` varchar(45) DEFAULT NULL,
  `gender` varchar(10) DEFAULT NULL,
  `country` varchar(10) DEFAULT NULL,
  `city` varchar(20) DEFAULT NULL,
  `address` varchar(45) DEFAULT NULL,
  `birth_date` date DEFAULT NULL,
  PRIMARY KEY (`id`)
);

CREATE TABLE `contact` (
  `id` int(11) NOT NULL,
  `location` varchar(45) DEFAULT NULL,
  `contact_type` varchar(10) DEFAULT NULL,
  `person_id` int(11) NOT NULL,
  PRIMARY KEY (`id`)
);

ALTER TABLE `ignite`.`contact` ADD INDEX `person_fk_idx` (`person_id` ASC);
ALTER TABLE `ignite`.`contact`
ADD CONSTRAINT `person_fk` FOREIGN KEY (`person_id`) REFERENCES `ignite`.`person` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

2. Maven configuration

The easiest way to start working with Apache Ignite's Spring Data repositories is by adding the following Maven dependency to the application's pom.xml file. All the other Ignite dependencies will be included automatically. We also need the MySQL JDBC driver and Spring JDBC dependencies to configure the database connection. They are required because we are embedding Apache Ignite in the application, and it has to establish a connection with MySQL in order to synchronize the cache with the database tables.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <scope>runtime</scope>
</dependency>
<dependency>
   <groupId>org.apache.ignite</groupId>
   <artifactId>ignite-spring-data</artifactId>
   <version>${ignite.version}</version>
</dependency>

3. Configure Ignite node

Using the IgniteConfiguration class we are able to configure all available settings of the Ignite node. The most important thing here is the cache configuration (1). We should add the primary key and entity classes as indexed types (2). Then we have to enable exporting cache updates to the database (3) and reading data not found in the cache from the database (4). The interaction between the Ignite node and MySQL may be configured using the CacheJdbcPojoStoreFactory class (5). We should pass it the DataSource @Bean (6), the dialect (7) and the mapping between object fields and table columns (8).

@Bean
public Ignite igniteInstance() {
   IgniteConfiguration cfg = new IgniteConfiguration();
   cfg.setIgniteInstanceName("ignite-1");
   cfg.setPeerClassLoadingEnabled(true);

   CacheConfiguration<Long, Contact> ccfg2 = new CacheConfiguration<>("ContactCache"); // (1)
   ccfg2.setIndexedTypes(Long.class, Contact.class); // (2)
   ccfg2.setWriteBehindEnabled(true);
   ccfg2.setWriteThrough(true); // (3)
   ccfg2.setReadThrough(true); // (4)
   CacheJdbcPojoStoreFactory<Long, Contact> f2 = new CacheJdbcPojoStoreFactory<>(); // (5)
   f2.setDataSource(datasource); // (6)
   f2.setDialect(new MySQLDialect()); // (7)
   JdbcType jdbcContactType = new JdbcType(); // (8)
   jdbcContactType.setCacheName("ContactCache");
   jdbcContactType.setKeyType(Long.class);
   jdbcContactType.setValueType(Contact.class);
   jdbcContactType.setDatabaseTable("contact");
   jdbcContactType.setDatabaseSchema("ignite");
   jdbcContactType.setKeyFields(new JdbcTypeField(Types.INTEGER, "id", Long.class, "id"));
   jdbcContactType.setValueFields(new JdbcTypeField(Types.VARCHAR, "contact_type", ContactType.class, "type"), new JdbcTypeField(Types.VARCHAR, "location", String.class, "location"), new JdbcTypeField(Types.INTEGER, "person_id", Long.class, "personId"));
   f2.setTypes(jdbcContactType);
   ccfg2.setCacheStoreFactory(f2);

   CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("PersonCache");
   ccfg.setIndexedTypes(Long.class, Person.class);
   ccfg.setWriteBehindEnabled(true);
   ccfg.setReadThrough(true);
   ccfg.setWriteThrough(true);
   CacheJdbcPojoStoreFactory<Long, Person> f = new CacheJdbcPojoStoreFactory<>();
   f.setDataSource(datasource);
   f.setDialect(new MySQLDialect());
   JdbcType jdbcType = new JdbcType();
   jdbcType.setCacheName("PersonCache");
   jdbcType.setKeyType(Long.class);
   jdbcType.setValueType(Person.class);
   jdbcType.setDatabaseTable("person");
   jdbcType.setDatabaseSchema("ignite");
   jdbcType.setKeyFields(new JdbcTypeField(Types.INTEGER, "id", Long.class, "id"));
   jdbcType.setValueFields(new JdbcTypeField(Types.VARCHAR, "first_name", String.class, "firstName"), new JdbcTypeField(Types.VARCHAR, "last_name", String.class, "lastName"), new JdbcTypeField(Types.VARCHAR, "gender", Gender.class, "gender"), new JdbcTypeField(Types.VARCHAR, "country", String.class, "country"), new JdbcTypeField(Types.VARCHAR, "city", String.class, "city"), new JdbcTypeField(Types.VARCHAR, "address", String.class, "address"), new JdbcTypeField(Types.DATE, "birth_date", Date.class, "birthDate"));
   f.setTypes(jdbcType);
   ccfg.setCacheStoreFactory(f);

   cfg.setCacheConfiguration(ccfg, ccfg2);
   return Ignition.start(cfg);
}

Here's the Spring datasource configuration for MySQL running as a Docker container.

spring:
  datasource:
    name: mysqlds
    url: jdbc:mysql://192.168.99.100:33306/ignite?useSSL=false
    username: ignite
    password: ignite123

On that occasion it should be mentioned that Apache Ignite still has some deficiencies. For example, it maps an Enum to an integer using its ordinal value, even though VARCHAR is configured as the JDBC type. When reading such a row from the database, the value is not mapped properly to the Enum in the object – you would get null in this field of the response.

new JdbcTypeField(Types.VARCHAR, "contact_type", ContactType.class, "type")
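
A possible workaround – my own suggestion, not something offered by Ignite itself – is to persist the field as a plain String (mapped with String.class in the JdbcTypeField) and convert it to the enum at the accessor level:

// Hypothetical workaround: keep the persisted field as a String
// and expose enum-typed accessors on the model class.
private String type;

public ContactType getType() {
	return type == null ? null : ContactType.valueOf(type);
}

public void setType(ContactType contactType) {
	this.type = contactType == null ? null : contactType.name();
}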

4. Model objects

Like I mentioned before, we have two tables in the database schema. There are also two model classes and two cache configurations, one per model class. Here's the model class implementation. One of the few interesting things here is ID generation with the AtomicLong class. Note that this is a plain JVM counter, so the generated IDs are unique only within a single node (see the sketch after the model class below for a distributed alternative). We can also see the @QuerySqlField annotation, which marks a field as available for use as a query parameter in SQL.

@QueryGroupIndex.List(
   @QueryGroupIndex(name="idx1")
)
public class Person implements Serializable {

   private static final long serialVersionUID = -1271194616130404625L;
   private static final AtomicLong ID_GEN = new AtomicLong();

   @QuerySqlField(index = true)
   private Long id;
   @QuerySqlField(index = true)
   @QuerySqlField.Group(name = "idx1", order = 0)
   private String firstName;
   @QuerySqlField(index = true)
   @QuerySqlField.Group(name = "idx1", order = 1)
   private String lastName;
   private Gender gender;
   private Date birthDate;
   private String country;
   private String city;
   private String address;
   private List<Contact> contacts = new ArrayList<>();

   public void init() {
	  this.id = ID_GEN.incrementAndGet();
   }

   public Long getId() {
	  return id;
   }

   public void setId(Long id) {
	  this.id = id;
   }

   public String getFirstName() {
	  return firstName;
   }

   public void setFirstName(String firstName) {
	  this.firstName = firstName;
   }

   public String getLastName() {
	  return lastName;
   }

   public void setLastName(String lastName) {
	  this.lastName = lastName;
   }

   public Gender getGender() {
	  return gender;
   }

   public void setGender(Gender gender) {
	  this.gender = gender;
   }

   public Date getBirthDate() {
	  return birthDate;
   }

   public void setBirthDate(Date birthDate) {
	  this.birthDate = birthDate;
   }

   public String getCountry() {
	  return country;
   }

   public void setCountry(String country) {
	  this.country = country;
   }

   public String getCity() {
	  return city;
   }

   public void setCity(String city) {
	  this.city = city;
   }

   public String getAddress() {
	  return address;
   }

   public void setAddress(String address) {
	  this.address = address;
   }

   public List<Contact> getContacts() {
	  return contacts;
   }

   public void setContacts(List<Contact> contacts) {
	  this.contacts = contacts;
   }

}
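
As mentioned above, the static AtomicLong generates IDs that are unique only within a single JVM. For cluster-wide identifiers, Ignite provides a distributed sequence; here's a minimal sketch (the sequence name is an example of mine):

// Sketch: distributed ID generation with IgniteAtomicSequence.
// The sequence is created on first access and shared by all cluster nodes.
IgniteAtomicSequence seq = ignite.atomicSequence("personIdSeq", 0, true);
long nextId = seq.incrementAndGet();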

5. Ignite repositories

I assume that you are familiar with the Spring Data JPA concept of creating repositories. Repository handling should be enabled on the main class or a @Configuration class.

@SpringBootApplication
@EnableIgniteRepositories
public class IgniteRestApplication {

   @Autowired
   DataSource datasource;

   public static void main(String[] args) {
	SpringApplication.run(IgniteRestApplication.class, args);
   }

   // ...
}

Then our repository interface has to extend IgniteRepository, which is based on Spring Data's CrudRepository; the inherited methods support only operations by id. In the PersonRepository fragment visible below I defined some find methods, using both the Spring Data naming convention and Ignite queries. In those samples you can see that we can return a full object or selected fields as a query result – according to the needs.

@RepositoryConfig(cacheName = "PersonCache")
public interface PersonRepository extends IgniteRepository<Person, Long> {

	List<Person> findByFirstNameAndLastName(String firstName, String lastName);

	@Query("SELECT c.* FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List<Contact> selectContacts(String firstName, String lastName);

	@Query("SELECT p.id, p.firstName, p.lastName, c.id, c.type, c.location FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List<List<?>> selectContacts2(String firstName, String lastName);
}

6. API and testing

Finally, we can inject the repository beans into the REST controller classes. The API exposes methods for adding a new object to the cache, updating or removing existing objects, and some for searching using the primary key or other more complex indices.

@RestController
@RequestMapping("/person")
public class PersonController {

	private static final Logger LOGGER = LoggerFactory.getLogger(PersonController.class);

	@Autowired
	PersonRepository repository;

	@PostMapping
	public Person add(@RequestBody Person person) {
		person.init();
		return repository.save(person.getId(), person);
	}

	@PutMapping
	public Person update(@RequestBody Person person) {
		return repository.save(person.getId(), person);
	}

	@DeleteMapping("/{id}")
	public void delete(Long id) {
		repository.delete(id);
	}

	@GetMapping("/{id}")
	public Person findById(@PathVariable("id") Long id) {
		return repository.findOne(id);
	}

	@GetMapping("/{firstName}/{lastName}")
	public List<Person> findByName(@PathVariable("firstName") String firstName, @PathVariable("lastName") String lastName) {
		return repository.findByFirstNameAndLastName(firstName, lastName);
	}

	@GetMapping("/contacts/{firstName}/{lastName}")
	public List<Person> findByNameWithContacts(@PathVariable("firstName") String firstName, @PathVariable("lastName") String lastName) {
		List<Person> persons = repository.findByFirstNameAndLastName(firstName, lastName);
		List<Contact> contacts = repository.selectContacts(firstName, lastName);
		persons.stream().forEach(it -> it.setContacts(contacts.stream().filter(c -> c.getPersonId().equals(it.getId())).collect(Collectors.toList())));
		LOGGER.info("PersonController.findByIdWithContacts: {}", contacts);
		return persons;
	}

	@GetMapping("/contacts2/{firstName}/{lastName}")
	public List<Person> findByNameWithContacts2(@PathVariable("firstName") String firstName, @PathVariable("lastName") String lastName) {
		List<List<?>> result = repository.selectContacts2(firstName, lastName);
		List<Person> persons = new ArrayList<>();
		for (List<?> l : result) {
			persons.add(mapPerson(l));
		}
		LOGGER.info("PersonController.findByIdWithContacts: {}", result);
		return persons;
	}

	private Person mapPerson(List<?> l) {
		Person p = new Person();
		Contact c = new Contact();
		p.setId((Long) l.get(0));
		p.setFirstName((String) l.get(1));
		p.setLastName((String) l.get(2));
		c.setId((Long) l.get(3));
		c.setType((ContactType) l.get(4));
		c.setLocation((String) l.get(5)); // location is the sixth selected column
		p.addContact(c);
		return p;
	}

}

It is certainly important to test the performance of the implemented solution, especially when it relates to an in-memory data grid and databases. For that purpose I created some JUnit tests which put a large number of objects into the cache and then invoke some find methods using random input data to test query performance. Here's the method which generates many Person and Contact objects and puts them into the cache using the API endpoints.

@Test
public void testAddPerson() throws InterruptedException {
	ExecutorService es = Executors.newCachedThreadPool();
	for (int j = 0; j < 10; j++) {
		es.execute(() -> {
			TestRestTemplate restTemplateLocal = new TestRestTemplate();
			Random r = new Random();
			for (int i = 0; i < 1000000; i++) {
				Person p = restTemplateLocal.postForObject("http://localhost:8090/person", createTestPerson(), Person.class);
				int x = r.nextInt(6);
				for (int k = 0; k < x; k++) {
					restTemplateLocal.postForObject("http://localhost:8090/contact", createTestContact(p.getId()), Contact.class);
				}
			}
		});
	}
	es.shutdown();
	es.awaitTermination(60, TimeUnit.MINUTES);
}

Spring Boot provides methods for capturing basic metrics of API response times. To enable that feature we have to include Spring Boot Actuator in the dependencies. The metrics endpoint is available under the http://localhost:8090/metrics address. In addition to each API method's processing time, it also prints statistics such as the number of running threads or the amount of free memory.

7. Running application

Let’s run our sample application with embedded Apache Ignite’s node. Following some performance suggestions available in the Ignite’s docs I defined JVM configuration visible below.

java -jar -Xms512m -Xmx1024m -XX:MaxDirectMemorySize=256m -XX:+DisableExplicitGC -XX:+UseG1GC target/ignite-rest-service-1.0-SNAPSHOT.jar

Now, we can run the JUnit test class IgniteRestControllerTest. It puts some data into the cache and then calls the find methods. The metrics for the tests, with 1M Person objects and 2.5M Contact objects in the cache, are visible below. All find methods took about 1 ms on average.

{
"mem": 624886,
"mem.free": 389701,
"processors": 4,
"instance.uptime": 2446038,
"uptime": 2466661,
"systemload.average": -1,
"heap.committed": 524288,
"heap.init": 524288,
"heap.used": 133756,
"heap": 1048576,
"threads.peak": 107,
"threads.daemon": 25,
"threads.totalStarted": 565,
"threads": 80,
...
"gauge.response.person.contacts.firstName.lastName": 1,
"gauge.response.contact": 1,
"gauge.response.person.firstName.lastName": 1,
"gauge.response.contact.location.location": 1,
"gauge.response.person.id": 1,
"gauge.response.person": 0,
"counter.status.200.person.id": 1000,
"counter.status.200.person.contacts.firstName.lastName": 1000,
"counter.status.200.person.firstName.lastName": 1000,
"counter.status.200.contact": 2500806,
"counter.status.200.person": 1000000,
"counter.status.200.contact.location.location": 1000
}

JPA caching with Hazelcast, Hibernate and Spring Boot

Preface

An in-memory data grid is an in-memory distributed key-value store that enables caching data using distributed clusters. Do not confuse this solution with an in-memory or NoSQL database. In most cases it is used for performance reasons – all data is stored in RAM, not on disk like in traditional databases. I first came across in-memory data grids when we were considering a move to Oracle Coherence in one of the organizations I had worked for. The solution really made me curious. Oracle Coherence is obviously a paid solution, but there are also some open source alternatives, among which the most interesting seem to be Apache Ignite and Hazelcast. Today I'm going to show you how to use Hazelcast for caching data stored in a MySQL database accessed by Spring Data DAO objects. Here's a figure illustrating the architecture of the presented solution.

hazelcast-1

Implementation

  • Starting Docker containers

We use three Docker containers: the first with a MySQL database, the second with a Hazelcast instance, and the third with the Hazelcast Management Center – a UI dashboard for monitoring Hazelcast cluster instances.

docker run -d --name mysql -p 33306:3306 mysql
docker run -d --name hazelcast -p 5701:5701 hazelcast/hazelcast
docker run -d --name hazelcast-mgmt -p 38080:8080 hazelcast/management-center:latest

If we would like to connect to the Hazelcast Management Center from the Hazelcast instance, we need to place a custom hazelcast.xml in the /opt/hazelcast directory inside the Docker container. This can be done in two ways: by extending the hazelcast base image, or just by copying the file into the existing hazelcast container and restarting it.

docker run -d --name hazelcast -p 5701:5701 hazelcast/hazelcast
docker cp hazelcast.xml hazelcast:/opt/hazelcast/hazelcast.xml
docker stop hazelcast
docker start hazelcast

Here's the most important fragment of the Hazelcast configuration file.

<hazelcast xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.8.xsd">
     <group>
          <name>dev</name>
          <password>dev-pass</password>
     </group>
     <management-center enabled="true" update-interval="3">http://192.168.99.100:38080/mancenter</management-center>
...
</hazelcast>

The Hazelcast dashboard is available under the http://192.168.99.100:38080/mancenter address. There we can monitor all running cluster members, maps and some other parameters.

hazelcast-mgmt-1

  • Maven configuration

The project is based on Spring Boot 1.5.3.RELEASE. We also need to add Spring Web and the MySQL Java connector dependencies. Here's the root project pom.xml.


	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.5.3.RELEASE</version>
	</parent>
	...
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
			<scope>runtime</scope>
		</dependency>
	...
	</dependencies>

Inside the person-service module we declared some other dependencies on Hazelcast artifacts and Spring Data JPA. I had to override the hibernate-core version managed by Spring Boot 1.5.3.RELEASE, because Hazelcast didn't work properly with 5.0.12.Final. Hazelcast needs hibernate-core in version 5.0.9.Final; otherwise, an exception occurs when starting the application.

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>com.hazelcast</groupId>
			<artifactId>hazelcast</artifactId>
		</dependency>
		<dependency>
			<groupId>com.hazelcast</groupId>
			<artifactId>hazelcast-client</artifactId>
		</dependency>
		<dependency>
			<groupId>com.hazelcast</groupId>
			<artifactId>hazelcast-hibernate5</artifactId>
		</dependency>
		<dependency>
			<groupId>org.hibernate</groupId>
			<artifactId>hibernate-core</artifactId>
			<version>5.0.9.Final</version>
		</dependency>
	</dependencies>

  • Hibernate Cache configuration

You can probably configure it in several different ways, but for me the most suitable solution was inside application.yml. Here's a fragment of the YAML configuration file. I enabled the L2 Hibernate cache, set the Hazelcast native client address and credentials, and set the cache factory class HazelcastCacheRegionFactory. We can also set HazelcastLocalCacheRegionFactory. The difference between them is performance – the local factory is faster, since its operations are handled as local calls rather than distributed ones. On the other hand, if you use HazelcastCacheRegionFactory, you can see your maps in the Management Center.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/datagrid?useSSL=false
    username: datagrid
    password: datagrid
  jpa:
    properties:
      hibernate:
        show_sql: true
        cache:
          use_query_cache: true
          use_second_level_cache: true
          hazelcast:
            use_native_client: true
            native_client_address: 192.168.99.100:5701
            native_client_group: dev
            native_client_password: dev-pass
          region:
            factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory

  • Application code

First, we need to enable caching for the Person @Entity.

@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
@Entity
public class Person implements Serializable {

	private static final long serialVersionUID = 3214253910554454648L;

	@Id
	@GeneratedValue
	private Integer id;
	private String firstName;
	private String lastName;
	private String pesel;
	private int age;

	public Integer getId() {
		return id;
	}

	public void setId(Integer id) {
		this.id = id;
	}

	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	public String getPesel() {
		return pesel;
	}

	public void setPesel(String pesel) {
		this.pesel = pesel;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public String toString() {
		return "Person [id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + ", pesel=" + pesel + "]";
	}

}

The DAO is implemented using Spring Data's CrudRepository. The sample application source code is available on GitHub.

public interface PersonRepository extends CrudRepository<Person, Integer> {
	public List<Person> findByPesel(String pesel);
}

Testing

Let's insert a little more data into the table. You can use my AddPersonRepositoryTest for that; it will insert 1M rows into the person table. Finally, we can call the endpoint http://localhost:2222/persons/{id} twice with the same id. For me, it looks like below: 22 ms for the first call, 3 ms for the next call, which is read from the L2 cache. An entity can be cached only by its primary key. If you call http://localhost:2222/persons/pesel/{pesel}, the entity will always be fetched from the database, bypassing the L2 cache.

2017-05-05 17:07:27.360 DEBUG 9164 --- [nio-2222-exec-9] org.hibernate.SQL                        : select person0_.id as id1_0_0_, person0_.age as age2_0_0_, person0_.first_name as first_na3_0_0_, person0_.last_name as last_nam4_0_0_, person0_.pesel as pesel5_0_0_ from person person0_ where person0_.id=?
Hibernate: select person0_.id as id1_0_0_, person0_.age as age2_0_0_, person0_.first_name as first_na3_0_0_, person0_.last_name as last_nam4_0_0_, person0_.pesel as pesel5_0_0_ from person person0_ where person0_.id=?
2017-05-05 17:07:27.362 DEBUG 9164 --- [nio-2222-exec-9] o.h.l.p.e.p.i.ResultSetProcessorImpl     : Starting ResultSet row #0
2017-05-05 17:07:27.362 DEBUG 9164 --- [nio-2222-exec-9] l.p.e.p.i.EntityReferenceInitializerImpl : On call to EntityIdentifierReaderImpl#resolve, EntityKey was already known; should only happen on root returns with an optional identifier specified
2017-05-05 17:07:27.363 DEBUG 9164 --- [nio-2222-exec-9] o.h.engine.internal.TwoPhaseLoad         : Resolving associations for [pl.piomin.services.datagrid.person.model.Person#444]
2017-05-05 17:07:27.364 DEBUG 9164 --- [nio-2222-exec-9] o.h.engine.internal.TwoPhaseLoad         : Adding entity to second-level cache: [pl.piomin.services.datagrid.person.model.Person#444]
2017-05-05 17:07:27.373 DEBUG 9164 --- [nio-2222-exec-9] o.h.engine.internal.TwoPhaseLoad         : Done materializing entity [pl.piomin.services.datagrid.person.model.Person#444]
2017-05-05 17:07:27.373 DEBUG 9164 --- [nio-2222-exec-9] o.h.r.j.i.ResourceRegistryStandardImpl   : HHH000387: ResultSet's statement was not registered
2017-05-05 17:07:27.374 DEBUG 9164 --- [nio-2222-exec-9] .l.e.p.AbstractLoadPlanBasedEntityLoader : Done entity load : pl.piomin.services.datagrid.person.model.Person#444
2017-05-05 17:07:27.374 DEBUG 9164 --- [nio-2222-exec-9] o.h.e.t.internal.TransactionImpl         : committing
2017-05-05 17:07:30.168 DEBUG 9164 --- [nio-2222-exec-6] o.h.e.t.internal.TransactionImpl         : begin
2017-05-05 17:07:30.171 DEBUG 9164 --- [nio-2222-exec-6] o.h.e.t.internal.TransactionImpl         : committing

Query Cache

We can enable JPA query caching by marking a repository method with the @Cacheable annotation and adding @EnableCaching to the main class definition.

public interface PersonRepository extends CrudRepository<Person, Integer> {

	@Cacheable("findByPesel")
	public List<Person> findByPesel(String pesel);

}

In addition to the @EnableCaching annotation we should declare HazelcastInstance and CacheManager beans. As the cache manager, HazelcastCacheManager from the hazelcast-spring library is used.

@SpringBootApplication
@EnableCaching
public class PersonApplication {

	public static void main(String[] args) {
		SpringApplication.run(PersonApplication.class, args);
	}

	@Bean
	HazelcastInstance hazelcastInstance() {
		ClientConfig config = new ClientConfig();
		config.getGroupConfig().setName("dev").setPassword("dev-pass");
		config.getNetworkConfig().addAddress("192.168.99.100");
		config.setInstanceName("cache-1");
		HazelcastInstance instance = HazelcastClient.newHazelcastClient(config);
		return instance;
	}

	@Bean
	CacheManager cacheManager() {
		return new HazelcastCacheManager(hazelcastInstance());
	}

}

Now, we should try to find a person by PESEL number by calling the endpoint http://localhost:2222/persons/pesel/{pesel}. The cached query is stored as a map, as you can see in the picture below.

hazelcast-3
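
Since HazelcastCacheManager backs each Spring cache with an IMap of the same name, you can also inspect that map programmatically through the HazelcastInstance bean declared earlier; here's a hypothetical check:

// Sketch: look up the IMap backing the "findByPesel" Spring cache
// and print the number of cached query results.
IMap<Object, Object> queryCache = hazelcastInstance.getMap("findByPesel");
System.out.println("Cached query results: " + queryCache.size());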

Clustering

Before the final words, let me say a little about clustering, which is the key functionality of the Hazelcast in-memory data grid. In the previous sections we relied on a single Hazelcast instance. Let's begin by running a second container with Hazelcast exposed on a different port.

docker run -d --name hazelcast2 -p 5702:5701 hazelcast/hazelcast

Now we should perform one change in the hazelcast.xml configuration file. Because the data grid runs inside a Docker container, the public address has to be set. For the first container it is 192.168.99.100:5701, and for the second 192.168.99.100:5702, because it is exposed on port 5702.

     <network>
        ...
	<public-address>192.168.99.100:5701</public-address>
        ...
     </network>

When starting the person-service application, you should see logs similar to those visible below – a connection with two cluster members.

Members [2] {
Member [192.168.99.100]:5702 - 04f790bc-6c2d-4c21-ba8f-7761a4a7422c
Member [192.168.99.100]:5701 - 2ca6e30d-a8a7-46f7-b1fa-37921aaa0e6b
}

All Hazelcast running instances are visible in Management Center.

hazelcast-2

Conclusion

Caching and clustering with Hazelcast are simple and fast. We can cache JPA entities and queries. Monitoring is realized via the Hazelcast Management Center dashboard. One problem for me is that I'm able to cache entities only by primary key. If I would like to find an entity by another index, like the PESEL number, I have to cache the findByPesel query. Even if the entity was cached before by id, the query will not find it in the cache but will perform SQL on the database; only the next query call hits the cache. I'll show you a smart solution to that problem in my next article on this subject, In memory data grid with Hazelcast.