SRE Case Study: Triaging a Non-Heap JVM Out of Memory Issue

Original article: https://www.ebayinc.com/stories/blogs/tech/sre-case-study-triage-a-non-heap-jvm-out-of-memory-issue/

Most Java virtual machine out of memory issues happen on the heap, but this time proved to be a little different.
The Java virtual machine (JVM) manages memory automatically, so Java developers don't need to reclaim objects themselves. But they should still be concerned about memory, because it isn't unlimited, and we do see out-of-memory errors sometimes. For out-of-memory issues, there are generally two possible causes: 1) the memory settings for the JVM are too small, or 2) the application has a memory leak. The first type is easy to fix with more memory: just change some JVM memory setting parameters. For the second type, we need to figure out where the leak is and fix it in the code. Today I am going to share a JVM memory leak case that is a little different.

Symptoms
At the beginning, we received "GC overhead exceeded" and CPU usage alerts for some hosts. Garbage collection (GC) overhead was around 60%~70%, and the CPU was busy with GC. It appeared to be a memory issue.
Figure 1. GC overhead alert

Action
Not all the servers for that application had this issue, just some of them, which suggested it took time to fill up the memory, anywhere from one or two hours to a few days. To mitigate the issue in production, we first took a heap dump and then restarted the affected JVMs for temporary recovery.

Analysis
For GC overhead issues, our routine is to analyze the verbose GC log, the heap dump, and the source code.

  1. Analyze the verbose GC log
    

The app enables the verbose GC log, which is very useful for analyzing memory issues. From the following GCViewer screenshot, we can see that there is plenty of free memory in both the young and old generations, yet the JVM keeps doing more and more GC.

(GCViewer screenshot of the verbose GC log)

This is a little strange. Most of the time, we see that both the young and old generations are used up and the JVM doesn't have enough heap to allocate a new object. This issue is not caused by a shortage of memory in the young/old generation, so where is the issue?

We know that a full permanent generation or an explicit System.gc() call can also trigger a full GC, so next we check these two possibilities:

  1. If the full GC is triggered by an explicit System.gc() call, we would usually see the "System" keyword in the GC log, but we don't see it this time.

  2. If it is triggered by a full permanent generation, we can easily identify that in the raw GC log. From the following snippet, we can see that the permanent generation (CMS Perm) has plenty of free memory.

Verbose GC log snippet:

2018-09-13T20:23:29.058-0700: 2518960.051: [GC2018-09-13T20:23:29.059-0700: 2518960.051: [ParNew Desired survivor size 41943040 bytes, new threshold 6 (max 6) - age 1: 3787848 bytes, 3787848 total - age 2: 2359600 bytes,6147448 total : 662280K->7096K(737280K), 0.0319710 secs] 1224670K->569486K(2170880K), 0.0324480 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]

2018-09-13T20:23:44.824-0700: 2518975.816: [Full GC2018-09-13T20:23:44.824-0700: 2518975.817: [CMS: 562390K->563346K(1433600K), 2.9864680 secs] 795326K->563346K(2170880K), [CMS Perm : 271273K->271054K(524288K)], 2.9869590 secs] [Times: user=2.97 sys=0.00, real=2.99 secs]

2018-09-13T20:23:58.130-0700: 2518989.123: [Full GC2018-09-13T20:23:58.131-0700: 2518989.123: [CMS: 563346K->561519K(1433600K), 2.8341560 secs] 867721K->561519K(2170880K), [CMS Perm : 271080K->271054K(524288K)], 2.8345980 secs] [Times: user=2.84 sys=0.00, real=2.83 secs]

2018-09-13T20:24:01.902-0700: 2518992.894: [Full GC2018-09-13T20:24:01.902-0700: 2518992.895: [CMS: 561519K->560375K(1433600K), 2.6886910 secs] 589208K->560375K(2170880K), [CMS Perm : 271055K->271055K(524288K)], 2.6891280 secs] [Times: user=2.69 sys=0.00, real=2.69 secs]

Therefore, these two possibilities have been ruled out.

In the past, we encountered a complicated case with similar symptoms: the young and old generations each had about 700 MB of free space after full GC, and there was no issue with the permanent generation or explicit System.gc() calls, yet the JVM kept doing full GC. The cause was a java.util.Vector on the heap that already held about 400 MB and was trying to grow. As written in the JDK code, each time it grows it doubles its size, so it needed an extra ~800 MB contiguous block to expand. The JVM couldn't find such a large free space, so it fell into continuous full GC.
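The doubling behavior is easy to observe directly. Below is a minimal sketch (not the code from that incident; class name and loop size are made up) that prints a Vector's capacity whenever it changes, showing that with the default settings the backing array always doubles:

import java.util.Vector;

// Minimal sketch: with the default capacityIncrement of 0, java.util.Vector
// doubles its backing array when it runs out of room, so growing a ~400 MB
// Vector transiently needs an extra ~800 MB array while the old contents
// are copied over.
public class VectorDoublingDemo {
    public static void main(String[] args) {
        Vector<Object> v = new Vector<>();   // default capacity 10, increment 0
        int lastCapacity = v.capacity();
        for (int i = 0; i < 100_000; i++) {
            v.add(new Object());
            if (v.capacity() != lastCapacity) {
                // Capacity jumps 10 -> 20 -> 40 -> ... : always a doubling.
                System.out.println("size=" + v.size() + " capacity=" + v.capacity());
                lastCapacity = v.capacity();
            }
        }
    }
}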

This time, however, we didn't see this kind of large collection instance in the heap.

  2. Check the application log and find the issue
    

We started to analyze the heap dump, but in the meantime we found a very useful error message in the application log: java.lang.OutOfMemoryError: Direct buffer memory. This error points directly to where the issue is.

OOM error in the log:

INFO | jvm 1| 2018/09/15 03:43:13 | Caused by: java.lang.OutOfMemoryError: Direct buffer memory

INFO | jvm 1| 2018/09/15 03:43:13 | at java.nio.Bits.reserveMemory(Bits.java:658)

INFO | jvm 1| 2018/09/15 03:43:13 | at java.nio.DirectByteBuffer.&lt;init&gt;(DirectByteBuffer.java:123)

INFO | jvm 1| 2018/09/15 03:43:13 | at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)

Direct buffer memory is the OS's native memory used by the JVM process, outside the JVM heap. It is used by Java NIO to write data to the network or disk quickly, without copying between the JVM heap and native memory. A Java application can set the JVM parameter -XX:MaxDirectMemorySize to limit the direct buffer memory size. If this parameter is not set, the JVM can use all of the available native memory. In our case, we checked the JVM's parameters: it was started with -XX:MaxDirectMemorySize=1024M, which means this application limits direct buffer memory to 1 GB. Based on the above log, this 1 GB of native memory was used up, and then the OOM error was thrown.
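To make the mechanism concrete, here is a minimal sketch (not the application's code; the 64 MB limit, the 16 MB allocation size, and the class name are made up for the demo). Run it with -XX:MaxDirectMemorySize=64M and it soon throws the same error, because every buffer is kept reachable, so neither the heap wrappers nor the native memory can be freed:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with: java -XX:MaxDirectMemorySize=64M DirectBufferOomDemo
// Expected outcome: java.lang.OutOfMemoryError: Direct buffer memory
public class DirectBufferOomDemo {
    public static void main(String[] args) {
        List<ByteBuffer> keepAlive = new ArrayList<>();
        while (true) {
            // Each call reserves 16 MB of native (off-heap) memory; only a
            // small DirectByteBuffer wrapper object lives on the Java heap.
            keepAlive.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            System.out.println("reserved " + keepAlive.size() * 16 + " MB of direct memory");
        }
    }
}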

  3. Find the direct memory issue in the heap dump
    

Although the direct buffer memory is outside the heap, the JVM still takes care of it. Each time the JVM requests direct buffer memory, a java.nio.DirectByteBuffer instance is created on the heap to represent it. This instance holds the native memory address, the size of the memory block, and so on. Because the DirectByteBuffer instance's life cycle is managed by the JVM, it can be collected by the GC thread when there is no reference to it, and the associated native memory is released when the instance is collected.
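Here is a minimal sketch of that healthy life cycle, as a counterpoint to the OOM sketch above (same made-up 64 MB limit and class name). Because no reference to the previous buffer is kept, the DirectByteBuffer wrappers become unreachable, the GC collects them, and their native memory is released, so the loop never hits the limit:

import java.nio.ByteBuffer;

// Run with: java -XX:MaxDirectMemorySize=64M DirectBufferReleaseDemo
// Expected outcome: finishes normally, no "Direct buffer memory" error.
public class DirectBufferReleaseDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024); // 16 MB each
            buf.putInt(0, i); // use the buffer briefly, then drop the reference
        }
        System.out.println("finished without OutOfMemoryError");
    }
}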

Why does this app need more than 1 GB of direct buffer memory? And why doesn't it release the memory during full GC? Now that we have the heap dump, can we find any clue in it? As mentioned above, the DirectBuffer objects in the heap carry information about the direct buffer memory.

The application error log shows the JVM was trying to create a new DirectByteBuffer instance, so let's check DirectByteBuffer first. With OQL, we see there are lots of DirectByteBuffer instances in the heap, and we don't see other DirectBuffer types, such as DirectCharBuffer.

We can confirm how much native memory these DirectByteBuffers are using with this OQL query:

SELECT x, x.capacity FROM java.nio.DirectByteBuffer x WHERE ((x.capacity > 1024 * 1024) and (x.cleaner != null)) // here we only care about objects whose capacity is bigger than 1 MB

The capacity field of DirectByteBuffer tells how much memory was requested for that instance. We also filter with x.cleaner != null, which skips sliced DirectByteBuffer instances that are just views of other DirectByteBuffer instances. In this dump, there are also many DirectByteBuffer objects whose capacity is less than 1 MB; we simply skip them. This is the result:
(Screenshot: OQL result listing the largest DirectByteBuffer instances)

In this result, there are 25 instances holding more than 1 MB of native memory each. The biggest one holds 179 MB (188124977/1024/1024), and the second one 124 MB (130804508/1024/1024). The sum of these top 25 instances is almost 1 GB. That's why the 1 GB of direct buffer memory is used up.

  4. Why are these DirectByteBuffers not collected by GC?
    

If these DirectByteBuffer instances were collected by GC, their direct buffer native memory would also be released. So why can't these DirectByteBuffer instances be collected by the GC thread?

We further checked the reference chain. From it, we can clearly see that some thread-local BufferCaches are holding references to the DirectByteBuffers, and these thread-local objects belong to daemon threads, such as the Tomcat daemon threads. That's why they can't be collected, as shown in the following reference chain screenshot:
(Screenshot: reference chain from the DirectByteBuffers to the thread-local BufferCaches)

Who put these DirectByteBuffers into these thread-local BufferCaches? And why are they never removed?

Following the reference chain, we looked into the source code of the sun.nio.ch.Util class. In this class, you can see the thread-local BufferCache and the method getTemporaryDirectBuffer(int), which puts DirectByteBuffer objects into the BufferCache. getTemporaryDirectBuffer is called by several methods in the JDK's NIO classes. The BufferCache also reuses a cached DirectByteBuffer as long as the thread does not request a bigger one. In other words, the JDK NIO classes use these thread-local DirectByteBuffer instances but never release them while the thread is alive.
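To illustrate the pattern, here is a much-simplified sketch of that caching logic. It is not the real sun.nio.ch.Util source, just the shape of it: each thread keeps its own small array of direct buffers, hands out a cached one when it is big enough, and otherwise allocates a new one, so the cached buffers live exactly as long as the thread does.

import java.nio.ByteBuffer;

// Simplified, illustrative only: the real JDK class is sun.nio.ch.Util.
class TemporaryDirectBufferCache {
    private static final int CACHE_SIZE = 8; // assumption for this sketch

    private static final ThreadLocal<ByteBuffer[]> CACHE =
            ThreadLocal.withInitial(() -> new ByteBuffer[CACHE_SIZE]);

    static ByteBuffer getTemporaryDirectBuffer(int size) {
        ByteBuffer[] cache = CACHE.get();
        for (int i = 0; i < cache.length; i++) {
            ByteBuffer buf = cache[i];
            if (buf != null && buf.capacity() >= size) {
                cache[i] = null;          // reuse the cached buffer
                buf.clear().limit(size);
                return buf;
            }
        }
        // Nothing big enough is cached: reserve new native memory.
        return ByteBuffer.allocateDirect(size);
    }

    static void releaseTemporaryDirectBuffer(ByteBuffer buf) {
        ByteBuffer[] cache = CACHE.get();
        for (int i = 0; i < cache.length; i++) {
            if (cache[i] == null) {       // park it back in the thread-local cache
                cache[i] = buf;
                return;                   // native memory stays reserved
            }
        }
        // Cache full: only then would the buffer's native memory be freed.
    }
}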

From the above analysis, the issue lies in the JDK's code, and it has been identified as a known JDK issue. In the JDK 8u102 Update Release Notes, a new system property, jdk.nio.maxCachedBufferSize, was introduced to address it. However, the release note also says this property only fixes part of the issue, not all cases.

The fix
Most of the time, your application won't hit this issue, either because your threads are short-lived, so their BufferCaches and DirectByteBuffers are collected by the GC and the direct buffer native memory is released back to the OS, or because each operation needs only a small amount of direct buffer memory, which the JVM simply reuses. You only see this issue when long-lived threads keep requesting bigger and bigger direct buffers, until the max direct buffer limit is reached or all the native memory is used up.

In our case, the app allocates direct buffer native memory for uploaded files, and Tomcat's daemon threads handle these requests. Some uploaded files are very big, more than 100 MB, and the app runs 40 Tomcat daemon threads, so eventually it hits the 1 GB direct buffer upper limit.

To fix it, the app should split the bytes into small chunks before handing them to the NIO utilities. This can be done in the application logic.
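Here is a minimal sketch of that mitigation (class name, chunk size, and target path are made up). Writing a heap buffer through an NIO channel makes the JDK copy it into a temporary, thread-cached direct buffer of the same size, so instead of passing the whole 100+ MB array we write it in small fixed-size chunks:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedUploadWriter {
    private static final int CHUNK_SIZE = 64 * 1024; // 64 KB per write

    public static void write(byte[] upload, Path target) throws IOException {
        try (FileChannel channel = FileChannel.open(
                target, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            for (int offset = 0; offset < upload.length; offset += CHUNK_SIZE) {
                int length = Math.min(CHUNK_SIZE, upload.length - offset);
                ByteBuffer chunk = ByteBuffer.wrap(upload, offset, length);
                while (chunk.hasRemaining()) {
                    // Only a 64 KB temporary direct buffer is needed per write.
                    channel.write(chunk);
                }
            }
        }
    }
}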

Summary
Most of the time we see out-of-memory issues on the heap, but they can also happen in the direct buffer memory. Even though the direct buffer native memory is off-heap, a heap dump can still help us analyze the root cause when it is exhausted.

Which ports is a process listening on?

To let Node.js make full use of multi-core CPUs, a single process usually starts multiple workers, each of which is a Node.js event loop. How do we check which ports such a process has open?

eric@eric1:~$ sudo netstat --all --program | grep '8481'
tcp        0      0 localhost:6666          *:*                     LISTEN      8481/pm2: Daemon
tcp        0      0 localhost:ircd          *:*                     LISTEN      8481/pm2: Daemon
tcp6       0      0 [::]:http-alt           [::]:*                  LISTEN      8481/pm2: Daemon
tcp6       0      0 [::]:8082               [::]:*                  LISTEN      8481/pm2: Daemon
tcp6       0      0 [::]:10100              [::]:*                  LISTEN      8481/pm2: Daemon
tcp6       0      0 [::]:10101              [::]:*                  LISTEN      8481/pm2: Daemon
tcp6       0      0 [::]:10102              [::]:*                  LISTEN      8481/pm2: Daemon
unix  3      [ ]         STREAM     CONNECTED     573099484 8481/pm2: Daemon
unix  3      [ ]         STREAM     CONNECTED     573099481 8481/pm2: Daemon
unix  3      [ ]         STREAM     CONNECTED     573099491 8481/pm2: Daemon

eric@eric1:~$ sudo lsof -i -P |grep 8481
pm2:       8481 rebot    3u  IPv6 573099604      0t0  TCP *:10100 (LISTEN)
pm2:       8481 rebot   12u  IPv4 573098742      0t0  TCP localhost:6666 (LISTEN)
pm2:       8481 rebot   13u  IPv4 573098743      0t0  TCP localhost:6667 (LISTEN)
pm2:       8481 rebot   17u  IPv6 573099599      0t0  TCP *:8082 (LISTEN)
pm2:       8481 rebot   18u  IPv6 573099600      0t0  TCP *:8080 (LISTEN)
pm2:       8481 rebot   20u  IPv6 573099610      0t0  TCP *:10101 (LISTEN)
pm2:       8481 rebot   22u  IPv6 573099619      0t0  TCP *:10102 (LISTEN)

To inspect the relationships among a process's threads:

pstree -a -p -H 8481     # full process tree with PIDs, highlighting PID 8481
pstree -a -l -p -s 8481  # PID 8481 together with its parent processes, long lines not truncated
top -H -p 8481           # live per-thread view of PID 8481
ps -Lf -p 8481           # list the threads (LWPs) of PID 8481
ps -eLf                  # list the threads of every process
htop -p 8481             # interactive view filtered to PID 8481

How to debug System.gc() calls

Sometimes the GC log shows that the young generation, the old generation, the permanent generation, and native memory (including the Java 8 metaspace and the direct buffers) all have plenty of free space, and yet full GCs keep happening. In that case, check whether System.gc() or Runtime.gc() calls are the culprit.

  1. First run with -XX:+DisableExplicitGC and see whether the full GCs disappear; if they do, they are triggered by one of these two gc() calls (see the sketch after this list).
  2. Then pull the code, debug it locally, and set breakpoints on those two methods.
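A tiny sketch for step 1 (class name made up): run it once with -verbose:gc and once with -verbose:gc -XX:+DisableExplicitGC. If the full GCs disappear in the second run, an explicit System.gc()/Runtime.gc() call is the likely trigger, and the same check applies to your application.

// java -verbose:gc ExplicitGcDemo                         -> full GCs appear in the log
// java -verbose:gc -XX:+DisableExplicitGC ExplicitGcDemo  -> they disappear
public class ExplicitGcDemo {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.gc();          // explicit full GC request
            Thread.sleep(1_000);
        }
    }
}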

Also, some books and articles claim that a System.gc()-triggered full GC always carries a "System" marker in the verbose GC log. That is not entirely true; in the case I ran into recently there was no "System" marker at all. The first log line below (from a ParallelGC JVM) does carry the marker, while the CMS full GC lines after it, from my recent case, do not:

164638.058: [Full GC (System) [PSYoungGen: 22789K->0K(992448K)] [PSOldGen: 1645508K->1666990K(2097152K)] 1668298K->1666990K(3089600K) [PSPermGen: 164914K->164914K(166720K)], 5.7499132 secs] [Times: user=5.69 sys=0.06, real=5.75 secs]

2019-02-14T00:33:36.136-0700: 3014642.000: [Full GC2019-02-14T00:33:36.136-0700: 3014642.000: [CMS: 766173K->766173K(1433600K), 3.0342400 secs] 775885K->766173K(2170880K), [CMS Perm : 168960K->168960K(524288K)], 3.0345150 secs] [Times: user=3.03 sys=0.00, real=3.03 secs]
2019-02-14T00:33:39.272-0700: 3014645.136: [Full GC2019-02-14T00:33:39.272-0700: 3014645.136: [CMS: 766173K->766173K(1433600K), 2.9704160 secs] 776581K->766173K(2170880K), [CMS Perm : 168960K->168960K(524288K)], 2.9706910 secs] [Times: user=2.98 sys=0.00, real=2.97 secs]

Eclipse Tomcat: preparing launch delegate

After I restarted Eclipse, Tomcat would just hang every time I launched it; the details only said "preparing launch delegate".

A Google search turns up various solutions. In my case, I found the process occupying port 8080, which my Tomcat needed, and killed it:

_$ lsof -i:8080
java 26417 tian 45u IPv6 0x92e167181899ff9 0t0 TCP *:http-alt (LISTEN)

_$ kill -9 26417

Before, Eclipse would simply report that the port was already in use, which is obvious; this time it just hung there.

Later I found that this alone wasn't the explanation. "preparing launch delegate" staying there forever really means "I am blocked by something and cannot continue." My actual problem turned out to be this:
I was debugging code in a jar that calls System.gc(), so I had set a breakpoint on that gc() method. When I started Tomcat in debug mode, it apparently called System.gc() very early, got paused at my breakpoint before printing any log, and just sat there. Starting Tomcat normally instead of in debug mode worked fine, and removing the breakpoint also let it start normally.

So something was indeed blocking Tomcat.

Eclipse is definitely where most of my time gets wasted.

Java Mission Control (JMC) and Java Flight Recorder (JFR)

JMC is a graphical tool for monitoring JVM and operating system metrics; it can connect to a live JVM to collect data, or read archived JFR files.
JFR is a diagnostics and profiling tool for Java applications. It is built into the HotSpot JVM and has very low performance overhead. It collects and records data as a set of events; recordings can be controlled from the JMC GUI or with the jcmd command line.

  1. The JFR events enabled by default have less than 1% performance impact.

  2. Event categories: memory, threads, I/O, code (compilation, hot packages, hot classes), and system.
    -- Duration events record how long something took; you can set a threshold so that only events longer than it are recorded.
    -- Instant events.
    -- Sampling events, for which you can configure the sampling frequency.

  3. Each event consists of an event name, a timestamp, and a payload.

  4. By combining events across these dimensions, you can reconstruct the runtime state of the system.

  5. Data flow: JFR collects data from the JVM (through internal APIs) and from the Java application (through the JFR APIs). This data is stored in small thread-local buffers that are flushed to a global in-memory buffer. Data in the global in-memory buffer is then written to disk.

  6. JFR architecture:
    -- the JFR runtime engine, which produces events into buffers and can optionally persist them to disk;
    -- the JFR plugin inside JMC, which analyzes JFR events.

  7. JFR is disabled by default and is a commercial feature in the Oracle HotSpot JVM, so it has to be enabled at startup with two flags:
    -- java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder MyApp
    -- java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=defaultrecording=true MyApp
    -- for OpenJDK, use JDK 11.

  8. At startup you can pass options with -XX:FlightRecorderOptions=string, separating multiple options with commas. The available options are:

    name=name
    The name used to identify the recording.
    defaultrecording=<true|false>
    Whether to start the recording initially. The default value is false; for reactive analysis, this should be set to true.
    settings=path
    Name of the file containing the JFR settings (see next section).
    delay=time
    The amount of time (e.g., 30s, 1h) before the recording should start.
    duration=time
    The amount of time to make the recording.
    filename=path
    Name of the file to write the recording to.
    compress=<true|false>
    Whether to compress (with gzip) the recording; the default is false.
    maxage=time
    Maximum time to keep recorded data in the circular buffer.
    maxsize=size
    Maximum size (e.g., 1024K, 1M) of the recording’s circular buffer.

  9. The options above can be passed at startup, but a more flexible way is to pass them at runtime via jcmd, for example:
    jcmd process_id JFR.start [options_list]  // a settings= template can be chosen when starting
    jcmd process_id JFR.dump [options_list]   // dump a continuous recording
    jcmd process_id JFR.check [verbose]       // check the currently running recordings
    jcmd process_id JFR.stop [options_list]

  10. Selecting JFR events
    JFR events are template based; HotSpot ships with two templates by default: the default template and the profile template.
    These event templates are XML files; the two built-in templates live in the $JAVA_HOME/jre/lib/jfr directory, and user-defined templates live under $USER_HOME/.jmc/.