While reading Understanding the JVM – Advanced Features and Best Practices, Second Edition (in Chinese) recently, I came across a guide in chapter one for building the JVM from source. It is based on OpenJDK 7, which builds only with a Java 6 or Java 7 VM as the build bootstrap; a Java 8 bootstrap applies stricter code checks and will eventually fail the build. So I switched to more recent OpenJDK 8 code. The file name is openjdk-8u40-src-b25-10_feb_2015.zip.

The code provides a better build experience and compiles on my Linux box almost out of the box. But remember: do not use gcc 6 or later, which defaults to C++14 and breaks the build. On macOS, the build scripts seem to support only gcc, but a clang compiler is actually required to build the Objective-C code.

1. The first step, after downloading and unzipping the code, is to modify the configure script:

Comment out the following lines (2 occurrences):

2. Now install freetype and run configure:
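
Something along these lines (a sketch, assuming Homebrew's freetype on macOS; the paths may differ on your machine):

    brew install freetype
    bash ./configure --with-debug-level=slowdebug \
        --with-freetype-include=/usr/local/include/freetype2 \
        --with-freetype-lib=/usr/local/lib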

A slowdebug build disables optimization and helps a lot when debugging. The summary output looks like:

3. Apply the following patch to fix build errors; it is partially picked from an official OpenJDK 10 changeset:

4. Start the build:
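
With the new build system this is just (a sketch; the images target builds the full JDK image, and JOBS=N parallelizes the build):

    make images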

Lots of warnings, but the build should finish successfully:

5. Debugging with lldb:

The output binaries lie in openjdk/build/macosx-x86_64-normal-server-slowdebug/jdk. The output of java -version:
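
To run it (the exact version banner depends on your build; a slowdebug build identifies itself as a debug build):

    cd openjdk/build/macosx-x86_64-normal-server-slowdebug/jdk
    ./bin/java -version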

I had never used lldb before; it seems to be mostly compatible with gdb:
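
For example (a minimal session sketch; gdb-style shortcuts such as b, r and bt also work):

    $ lldb -- ./bin/java -version
    (lldb) breakpoint set --name main
    (lldb) run
    (lldb) bt
    (lldb) continue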

This article was inspired by the posts here and here.

Our team has a RESTful service as the infrastructure for data access. It is based on Jersey/JAX-RS and runs fast. However, it consumes a lot of memory when constructing a large data set as a response, since it builds the entire response in memory before sending it.

As suggested in the posts above, streaming is the solution. They integrate Hibernate or Spring Data for easy adoption, but I need a general-purpose RESTful service; that is, I do not know the schema of a table in advance. So I decided to implement it myself using the raw JDBC interface.

My class is called MysqlStreamTemplate:

  • It does not extend JdbcTemplate, since there is only a single streaming method to expose, not a whole series; I'm not writing a general-purpose library.
  • It is MySQL-only; I have not had time to verify it against other relational databases.
  • It accepts a DataSource as its constructor parameter.
  • Stuff like the Hibernate Session is not involved, since the class maintains its own Statement and Connection.
  • Stuff like @Transactional is not involved, since we do not care about transactions. Actually, MySQL returns HOLD_CURSORS_OVER_COMMIT from StatementImpl#getResultSetHoldability() in its JDBC driver, meaning our ResultSet survives a commit.

So, here is my class. NOTE: closing our Statement and Connection requires an explicit call to Stream#close():
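
A simplified sketch of the idea follows (not the exact class; method and parameter names are illustrative). The MySQL-specific trick is setFetchSize(Integer.MIN_VALUE), which tells Connector/J to stream rows one by one instead of buffering the whole result set:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Iterator;
    import java.util.NoSuchElementException;
    import java.util.Spliterator;
    import java.util.Spliterators;
    import java.util.function.Function;
    import java.util.stream.Stream;
    import java.util.stream.StreamSupport;

    import javax.sql.DataSource;

    public class MysqlStreamTemplate {

        private final DataSource dataSource;

        public MysqlStreamTemplate(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Maps each row to T. The caller MUST call Stream#close() (or use
        // try-with-resources) to release the ResultSet, Statement and Connection.
        public <T> Stream<T> stream(String sql, Function<ResultSet, T> rowMapper) {
            try {
                Connection connection = dataSource.getConnection();
                Statement statement = connection.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                // MySQL-specific: stream rows one by one instead of
                // materializing the whole result set in memory
                statement.setFetchSize(Integer.MIN_VALUE);
                ResultSet rs = statement.executeQuery(sql);

                Iterator<T> it = new Iterator<T>() {
                    private boolean advanced, hasRow;

                    @Override
                    public boolean hasNext() {
                        if (!advanced) {
                            try {
                                hasRow = rs.next();
                            } catch (SQLException e) {
                                throw new RuntimeException(e);
                            }
                            advanced = true;
                        }
                        return hasRow;
                    }

                    @Override
                    public T next() {
                        if (!hasNext()) {
                            throw new NoSuchElementException();
                        }
                        advanced = false;
                        return rowMapper.apply(rs);
                    }
                };

                return StreamSupport.stream(
                        Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false)
                        .onClose(() -> close(rs, statement, connection));
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }

        private static void close(AutoCloseable... resources) {
            for (AutoCloseable resource : resources) {
                try {
                    resource.close();
                } catch (Exception ignored) {
                    // best-effort cleanup
                }
            }
        }
    }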

Read inline comments for additional details. Now the response entry and controller mapping:
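
A JAX-RS version could look roughly like this (a sketch: RowResource, the /rows path and the table name are made up, and Jackson is assumed for JSON serialization; the point is that rows are written out as they are pulled from the stream):

    import java.io.OutputStream;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.stream.Stream;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.core.StreamingOutput;

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.ObjectMapper;

    @Path("/rows")
    public class RowResource {

        private final MysqlStreamTemplate template;
        private final ObjectMapper mapper = new ObjectMapper();

        public RowResource(MysqlStreamTemplate template) {
            this.template = template;
        }

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response rows() {
            StreamingOutput body = (OutputStream out) -> {
                // try-with-resources guarantees Stream#close(), which in turn
                // closes the underlying ResultSet, Statement and Connection
                try (Stream<Map<String, Object>> rows =
                         template.stream("SELECT * FROM my_table", RowResource::toMap)) {
                    JsonGenerator gen = mapper.getFactory().createGenerator(out);
                    gen.writeStartArray();
                    rows.forEach(row -> {
                        try {
                            gen.writeObject(row); // serialized row by row
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    });
                    gen.writeEndArray();
                    gen.flush();
                }
            };
            return Response.ok(body).build();
        }

        // Generic row mapper: no schema knowledge required
        private static Map<String, Object> toMap(ResultSet rs) {
            try {
                ResultSetMetaData md = rs.getMetaData();
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.put(md.getColumnLabel(i), rs.getObject(i));
                }
                return row;
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }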

The complete code can be found in my GitHub repository.

My simple benchmark script looks like:
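
Essentially timed curl calls in a loop (a sketch; the URL and iteration count are placeholders):

    for i in $(seq 1 10); do
        time curl -s 'http://localhost:8080/rows' -o /dev/null
    done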

Dramatic improvements in memory usage as shown in jconsole, especially Old Gen:
(jconsole screenshots: overall memory usage and Old Gen usage)

Some raw data from jmap:

  • Jersey
  • Spring Boot
  • Spring Boot with Streams
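
For reference, such numbers can be gathered with jmap (a sketch; <pid> is the server's process id):

    jmap -histo:live <pid>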

Well, I'm new to the big data world.

Following Appendix A in the book Hadoop: The Definitive Guide, 4th Ed, I just got it to work. I'm running Ubuntu 14.04.

1. Download and unpack the hadoop package, and set environment variables in your ~/.bashrc.
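
For example, in ~/.bashrc (a sketch; adjust versions and paths to your installation):

    export HADOOP_HOME=$HOME/hadoop-2.5.2
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64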

Verify with:
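
    hadoop version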

The 2.5.2 distribution package is built with 64-bit *.so files; use the 2.4.1 package if you want 32-bit ones.

2. Edit config files in $HADOOP_HOME/etc/hadoop:
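
For a pseudo-distributed setup, Appendix A uses configurations along these lines (core-site.xml and hdfs-site.xml; the other files can stay at their defaults):

    <!-- core-site.xml -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost/</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>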

3. Configure SSH:
Hadoop needs to start daemons on the hosts of a cluster via SSH. A key pair is generated so that no password input is needed.
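
The usual recipe (an empty passphrase keeps the daemons non-interactive):

    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys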

Verify with:
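
    ssh localhost   # should log in without prompting for a password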

4. Format HDFS filesystem:
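
    hdfs namenode -format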

5. Start HDFS:
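
    start-dfs.sh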

Verify that the daemons are running with the jps command:
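
    jps
    # expect NameNode, DataNode and SecondaryNameNode in the output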

6. Some tests:
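
For example, copying a file into HDFS and reading it back (a sketch):

    hadoop fs -mkdir -p /user/$USER
    hadoop fs -put $HADOOP_HOME/etc/hadoop/core-site.xml .
    hadoop fs -ls .
    hadoop fs -cat core-site.xml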

7. Stop HDFS:
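
    stop-dfs.sh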

8. If there is an error like:
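
(typically along these lines)

    localhost: Error: JAVA_HOME is not set and could not be found.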

Just edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and export JAVA_HOME explicitly there as well. This does happen under Debian; I do not know why the environment variable is not passed over SSH.
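
For example (the path is an assumption; point it at your own JDK):

    # in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64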

9. You can also set HADOOP_CONF_DIR to use a separate config directory for convenience. But make sure you copy the whole directory from the Hadoop package; otherwise, nasty errors may occur.
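
For example (the directory name is arbitrary):

    cp -r $HADOOP_HOME/etc/hadoop ~/hadoop-conf
    export HADOOP_CONF_DIR=~/hadoop-conf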

Starting with 1.56, Boost.Asio provides asio::spawn() to work with coroutines. I will just paste the sample code here, with minor modifications:
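
Roughly the following, based on the official spawn echo-server example (a sketch; the port number is arbitrary and error handling is minimal):

    // Stackful-coroutine echo server; link with -lboost_system
    // -lboost_coroutine -lboost_context -lpthread.
    #include <boost/asio.hpp>
    #include <boost/asio/spawn.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    int main() {
        boost::asio::io_service io_service;

        boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
            tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 12345));
            for (;;) {
                boost::system::error_code ec;
                auto socket = std::make_shared<tcp::socket>(io_service);
                acceptor.async_accept(*socket, yield[ec]); // suspends this coroutine
                if (ec) continue;

                // one coroutine per connection, still one thread overall
                boost::asio::spawn(io_service,
                        [socket](boost::asio::yield_context yield) {
                    char data[128];
                    boost::system::error_code ec;
                    for (;;) {
                        std::size_t n = socket->async_read_some(
                                boost::asio::buffer(data), yield[ec]);
                        if (ec) break; // peer closed or error
                        boost::asio::async_write(*socket,
                                boost::asio::buffer(data, n), yield[ec]);
                        if (ec) break;
                    }
                });
            }
        });

        io_service.run(); // everything runs inside this single thread
    }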

The Python client from my previous article can be used to work with the code above. I also tried to write a TCP server using only the boost::coroutines classes. select() is used, since I want the code to be platform-independent. NOTE: with coroutines, we have only _one_ thread.