Author: Liu Yaoming, JD Retail
Monitor Concept
Memory layout of Java objects
Besides the fields we define, an object carries other data as well. In memory this data is divided into three regions: the object header, the instance data, and the alignment padding; only when these three regions are combined do we get a complete object.
Object header: The JVM stores a huge number of objects, and to support extra functionality during storage it attaches marker fields to each object; these marker fields make up the object header.

Instance data: Stores class attribute data information, including the attribute information of the superclass.
Alignment padding: Since the virtual machine requires the actual address of the object to be a multiple of 8 bytes, there must be padding areas to meet the requirement of 8-byte multiples. The padding data is not necessarily present and is only used for byte alignment.
Figure 1
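The layout described above can be inspected directly with the JOL (Java Object Layout) tool that is also used later in this article. Below is a minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath; the class names are made up for illustration:

```java
import org.openjdk.jol.info.ClassLayout;

// Minimal sketch: prints the object header, instance data and alignment padding of a small object.
public class LayoutDemo {
    static class Demo {
        int a;      // instance data
        boolean b;  // instance data
    }

    public static void main(String[] args) {
        // toPrintable() lists OFFSET/SIZE/TYPE/DESCRIPTION rows, including
        // the mark word, the klass pointer and any alignment padding.
        System.out.println(ClassLayout.parseInstance(new Demo()).toPrintable());
    }
}
```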
Java Object Header
There are two ways to represent object headers in JVM (taking a 32-bit virtual machine as an example):
Normal object

| Object Header (64 bits) | |
| --- | --- |
| Mark Word (32 bits) | Klass Word (32 bits) |
Array object

| Object Header (96 bits) | | |
| --- | --- | --- |
| Mark Word (32 bits) | Klass Word (32 bits) | array length (32 bits) |
Mark Word
This part mainly stores the object's own runtime data, such as the hashcode and the GC generational age. The Mark Word is one machine word long: 32 bits on a 32-bit JVM and 64 bits on a 64-bit JVM. To pack more information into a single word, the JVM uses the lowest two bits of the word as tag bits. The Mark Word layouts under the different tag values are as follows:
| Mark Word (32 bits) | State |
| --- | --- |
| identity_hashcode:25 \| age:4 \| biased_lock:1 \| lock:2 | Normal |
| thread:23 \| epoch:2 \| age:4 \| biased_lock:1 \| lock:2 | Biased |
| ptr_to_lock_record:30 \| lock:2 | Lightweight Locked |
| ptr_to_heavyweight_monitor:30 \| lock:2 | Heavyweight Locked |
| (unused) \| lock:2 | Marked for GC |
The meanings of each part are as follows:
**lock:** A 2-bit lock status marker bit. The meaning of the entire Mark Word varies depending on the value of this marker.
| biased_lock | lock | status |
| --- | --- | --- |
| 0 | 01 | Unlocked |
| 1 | 01 | Biased lock |
| 0 | 00 | Lightweight lock |
| 0 | 10 | Heavyweight lock |
| 0 | 11 | GC mark |
**biased_lock:** Whether the object enables the biased lock flag, which occupies only 1 binary bit. When it is 1, it indicates that the object enables the biased lock, and when it is 0, it indicates that the object does not have a biased lock.
**age:** The 4-bit GC age of the object. Each time the object survives a GC and is copied to a Survivor area, its age increases by 1; when the age reaches the configured threshold, the object is promoted to the old generation. By default the threshold is 15 for the parallel collectors and 6 for the concurrent collector. Because age has only 4 bits, its maximum value is 15, which is why the maximum value of the -XX:MaxTenuringThreshold option is 15.
**identity_hashcode:** The 25-bit identity hash code of the object, computed lazily: it is calculated the first time System.identityHashCode() is called and the result is written into the object header. When the object is locked, this value is moved into the Monitor object.
**thread:** The thread ID holding the biased lock.
**epoch:** The biased timestamp.
**ptr_to_lock_record:** Pointer to the lock record on the stack.
**ptr_to_heavyweight_monitor:** Pointer to the Monitor object.
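To connect these field definitions with the binary dumps shown later in this article, here is a minimal sketch that decodes the low-order bits of a Mark Word value according to the table above (the class and method names are made up; it assumes the lock, biased_lock and age fields sit in the lowest bits, as in the layouts listed here):

```java
// Minimal sketch: extracts the lock, biased_lock and age fields from the
// low-order bits of a Mark Word value, following the layout in the table above.
public class MarkWordBits {
    static void decode(long markWord) {
        long lock       = markWord & 0b11;           // lowest 2 bits: lock state
        long biasedLock = (markWord >>> 2) & 0b1;    // 3rd bit: biased-lock flag
        long age        = (markWord >>> 3) & 0b1111; // next 4 bits: GC age
        String lockBits = String.format("%2s", Long.toBinaryString(lock)).replace(' ', '0');
        System.out.println("lock=" + lockBits + " biased_lock=" + biasedLock + " age=" + age);
    }

    public static void main(String[] args) {
        decode(0x05L); // biasable object right after creation: ...101
        decode(0x01L); // normal unlocked object:               ...001
    }
}
```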
Klass Word
This part stores the object's type pointer, which points to its class metadata; the JVM uses this pointer to determine which class the object is an instance of. The pointer is one machine word long: 32 bits on a 32-bit JVM and 64 bits on a 64-bit JVM.
array length
If the object is an array, there is additional space in the object header for storing the array length, and the length of this data varies with the JVM architecture: 32-bit JVM is 32 bits, and 64-bit JVM is 64 bits.
Monitor Principle
Monitor is usually translated as "monitor" or "monitor construct".
Each Java object can be associated with a Monitor object. If a lock is applied to the object using synchronized (a heavy-weight lock), a pointer to the Monitor object is set in the Mark Word of the object header.
The structure of Monitor is as follows:
Figure 2
• Initially, the Owner of the Monitor is null
• When Thread-2 executes synchronized(obj), it sets the Owner of the Monitor to Thread-2; a Monitor can have only one Owner at a time
• During the locking process of Thread-2, if Thread-3, Thread-4, and Thread-5 also execute synchronized(obj), they will enter the BLOCKED state in the EntryList
• After Thread-2 executes the content of the synchronized block, it wakes up the threads waiting in the EntryList to compete for the lock, and the competition is not fair, that is, the first to enter is not necessarily the first to acquire the lock
• In Figure 2, Thread-0 and Thread-1 in the WaitSet are threads that previously acquired the lock but entered the WAITING state because some condition was not satisfied; this will be analyzed when discussing wait-notify
Note:
• synchronized provides the mutual exclusion described above only when the threads enter the Monitor of the same object
• An object that is not locked with synchronized is not associated with a Monitor, and the rules above do not apply (see the sketch below)
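To illustrate the first note, here is a minimal sketch (class, field and loop-count names are made up): because the two threads synchronize on different objects, they enter different Monitors and do not exclude each other, so updates to the shared counter can be lost.

```java
// Minimal sketch: two threads synchronize on DIFFERENT objects, so they enter
// different Monitors and do not exclude each other; counter updates can be lost.
public class DifferentMonitorsDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lockA) { counter++; }   // Monitor of lockA
            }
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lockB) { counter++; }   // Monitor of lockB: no mutual exclusion with t1
            }
        });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Often prints less than 200000 because the two critical sections overlap.
        System.out.println(counter);
    }
}
```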
Synchronized Principle
static final Object lock = new Object();
static int counter = 0;
public static void main(String[] args) {
synchronized (lock) {
counter++;
}
}
The corresponding bytecode is:
public static main([Ljava/lang/String;)V
TRYCATCHBLOCK L0 L1 L2 null
TRYCATCHBLOCK L2 L3 L2 null
L4
LINENUMBER 6 L4
GETSTATIC MyClass03.lock : Ljava/lang/Object;
DUP
ASTORE 1
MONITORENTER //Comment 1
L0
LINENUMBER 7 L0
GETSTATIC MyClass03.counter : I
ICONST_1
IADD
PUTSTATIC MyClass03.counter : I
L5
LINENUMBER 8 L5
ALOAD 1
MONITOREXIT //Comment 2
L1
GOTO L6
L2
FRAME FULL [[Ljava/lang/String; java/lang/Object] [java/lang/Throwable]
ASTORE 2
ALOAD 1
MONITOREXIT //Comment 3
L3
ALOAD 2
ATHROW
L6
LINENUMBER 9 L6
FRAME CHOP 1
RETURN
L7
LOCALVARIABLE args [Ljava/lang/String; L4 L7 0
MAXSTACK = 2
MAXLOCALS = 3
Comment 1
The meaning of MONITORENTER is: each object has a monitor lock (Monitor), and the Monitor is locked while it is occupied. When a thread executes the MONITORENTER instruction, it tries to acquire ownership of the Monitor as follows:
• If the entry count of the Monitor is 0, the thread enters the Monitor and sets the entry count to 1, becoming the Owner of the Monitor.
• If the thread already owns the Monitor and is simply re-entering it, the entry count of the Monitor is increased by 1 (see the example after this list).
• If another thread already occupies the Monitor, this thread blocks until the entry count of the Monitor drops to 0, and then tries again to acquire ownership.
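A small example of the reentry case in the second bullet (class and method names are made up): the same thread acquires the same Monitor twice, so the entry count goes to 2 and back to 0 without blocking.

```java
// Minimal sketch: outer() and inner() synchronize on the same lock object.
// The owning thread re-enters the Monitor, so the entry count becomes 2,
// then drops back to 0 as each block exits; no deadlock occurs.
public class ReentrancyDemo {
    static final Object lock = new Object();

    static void outer() {
        synchronized (lock) {      // monitorenter: count 0 -> 1, thread becomes Owner
            inner();
        }                          // monitorexit:  count 1 -> 0
    }

    static void inner() {
        synchronized (lock) {      // same Owner re-enters: count 1 -> 2
            System.out.println("re-entered the same Monitor");
        }                          // monitorexit:  count 2 -> 1
    }

    public static void main(String[] args) {
        outer();
    }
}
```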
Comment 2
The meaning of MONITOREXIT is: when the instruction is executed, the entry count of the Monitor is decreased by 1. If the count drops to 0, the thread exits the Monitor and is no longer its owner; other threads blocked on this Monitor can then try to acquire ownership again.
Summary
Comments 1 and 2 reveal the implementation principle of synchronized: it is ultimately carried out by the underlying Monitor object. The wait and notify methods also rely on the Monitor, which is why they must be called inside a synchronized method or block; otherwise java.lang.IllegalMonitorStateException is thrown.
If the program runs normally, execution proceeds as described above. If an exception occurs inside the synchronized block, the code goes through Comment 3, where another MONITOREXIT instruction appears; in other words, synchronized also handles the exit in the exceptional case.
Note: method-level synchronized does not appear as bytecode instructions. Instead, the ACC_SYNCHRONIZED flag is set in the method's access flags in the class file. When the method is invoked, the JVM checks whether ACC_SYNCHRONIZED is set; if so, the thread first acquires ownership of the Monitor, then executes the method, and releases the Monitor when the method finishes. The underlying mechanism is essentially the same.
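As a sketch of this note (the class name is made up): a synchronized instance method behaves like wrapping its body in synchronized(this), and a synchronized static method like synchronized(TheClass.class); the difference is only that the method carries the ACC_SYNCHRONIZED flag instead of explicit monitorenter/monitorexit instructions.

```java
// Minimal sketch: the two pairs of methods below are functionally equivalent;
// the synchronized methods are marked ACC_SYNCHRONIZED in the class file,
// while the explicit blocks compile to monitorenter/monitorexit.
public class SyncMethodDemo {
    private int counter = 0;

    public synchronized void increment() {          // ACC_SYNCHRONIZED, locks `this`
        counter++;
    }

    public void incrementExplicit() {
        synchronized (this) {                        // monitorenter/monitorexit on `this`
            counter++;
        }
    }

    public static synchronized void staticWork() {  // ACC_SYNCHRONIZED, locks SyncMethodDemo.class
    }

    public static void staticWorkExplicit() {
        synchronized (SyncMethodDemo.class) {        // same Monitor as staticWork()
        }
    }
}
```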
Advanced principles of synchronized
Lightweight lock
Use case of lightweight locks: an object is locked by multiple threads, but the locking periods are staggered (i.e., there is no contention); in that case lightweight locking can be used as an optimization
The lightweight lock is transparent to the user, that is, the syntax is still synchronized
Suppose there are two methods whose synchronized blocks lock the same object:
static final Object obj = new Object();
public static void method1() {
synchronized (obj) { // Synchronization block A
method2();
}
}
public static void method2() {
synchronized (obj) { // Synchronization block B
}
}
When entering the synchronized block, a Lock Record object is created. Each thread's stack frame contains a lock record structure, which can hold the Mark Word of the locked object
Figure 3
The Object reference in the lock record is pointed at the lock object, and a CAS is attempted to swap the object's Mark Word, storing the original Mark Word value into the lock record
Figure 4
If the CAS replacement succeeds, the object header now holds the lock record address with status bits 00, which means this thread has locked the object, as shown in the following figure
Figure 5
There are two cases if CAS fails
• If another thread already holds the lightweight lock of the object, this indicates competition and enters the lock inflation process
• If it is the current thread re-entering its own synchronized lock, another Lock Record is added to count the reentry
Figure 6
When exiting the synchronized block (unlocking), if there is a lock record whose value is null, this indicates a reentry; that lock record is simply removed, which decrements the reentry count by one
Figure 7
When exiting the synchronized block (unlocking), if the value of the lock record is not null, a CAS is used to restore the Mark Word value to the object header (a conceptual sketch of this CAS follows the two cases below)
• Success, then unlock successfully
• If it fails, the lock has undergone lock inflation and been upgraded to a heavyweight lock, so the heavyweight unlock process is entered
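The steps above happen inside the JVM and cannot be reproduced directly in Java code, but the following conceptual sketch (all names are made up; this is not the JVM's actual implementation) models the key idea with an AtomicReference standing in for the Mark Word: locking is a CAS that swaps the header for a pointer to a lock record, and unlocking is a CAS that restores the displaced header.

```java
import java.util.concurrent.atomic.AtomicReference;

// Conceptual sketch only: models the CAS used by lightweight locking.
// The AtomicReference plays the role of the object's Mark Word; a LockRecord
// lives in the locking thread's stack frame and keeps the displaced Mark Word.
public class LightweightLockSketch {
    static final class LockRecord {
        Object displacedMarkWord;              // copy of the original Mark Word
    }

    private final AtomicReference<Object> markWord =
            new AtomicReference<>("unlocked-mark-word");

    boolean tryLock(LockRecord record) {
        Object current = markWord.get();
        if (current instanceof LockRecord) {
            return false;                      // already locked: reentry or contention in the real JVM
        }
        record.displacedMarkWord = current;    // step 1: store the old Mark Word in the lock record
        // step 2: CAS the Mark Word to point at the lock record (status bits 00 in the real header)
        return markWord.compareAndSet(current, record);
    }

    boolean unlock(LockRecord record) {
        // CAS the displaced Mark Word back; failure would mean the lock was inflated meanwhile
        return markWord.compareAndSet(record, record.displacedMarkWord);
    }

    public static void main(String[] args) {
        LightweightLockSketch obj = new LightweightLockSketch();
        LockRecord r = new LockRecord();
        System.out.println("lock:   " + obj.tryLock(r));   // true
        System.out.println("unlock: " + obj.unlock(r));    // true
    }
}
```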
Lock inflation
If the CAS operation fails while attempting to add a lightweight lock, one possibility is that another thread has already added a lightweight lock to the object (there is contention). In that case lock inflation is needed, converting the lightweight lock into a heavyweight lock.
When Thread-1 attempts to add a lightweight lock, Thread-0 has already added a lightweight lock to the object
Figure 8
At this point Thread-1 fails to add the lightweight lock and enters the lock inflation process
• A Monitor is requested for the object, and the object's Mark Word is made to point to the heavyweight lock (Monitor) address
• Thread-1 then enters the Monitor's EntryList and becomes BLOCKED
Figure 9
When Thread-0 exits the synchronized block and unlocks, it tries to use CAS to restore the Mark Word value to the object header. This fails, so the heavyweight unlock process is entered: find the Monitor object via the Monitor address in the header, set Owner to null, and wake up the BLOCKED threads in the EntryList.
Spin optimization
When heavyweight locks are contended, spinning can be used as an optimization: if the current thread's spin succeeds (i.e., the lock-holding thread exits the synchronized block and releases the lock in the meantime), the current thread avoids blocking. A conceptual sketch appears after the notes below.
Successful spin retry
| Thread 1 (core 1) | Object Mark | Thread 2 (core 2) |
| --- | --- | --- |
| - | 10 (heavyweight lock) | - |
| Access synchronized block, acquire Monitor | 10 (heavyweight lock) heavyweight lock pointer | - |
| Success (locked) | 10 (heavyweight lock) heavyweight lock pointer | - |
| Execute synchronized block | 10 (heavyweight lock) heavyweight lock pointer | - |
| Execute synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Access synchronized block, acquire Monitor |
| Execute synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| Execution completed | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| Success (unlocked) | 01 (no lock) | Spin retry |
| - | 10 (heavyweight lock) heavyweight lock pointer | Success (locked) |
| - | 10 (heavyweight lock) heavyweight lock pointer | Execute synchronized block |
| - | … | … |
The case where spin retry fails.
| Thread 1 (on core 1) | Object Mark | Thread 2 (on core 2) |
| --- | --- | --- |
| - | 10 (heavyweight lock) | - |
| Accessing synchronized block, acquiring Monitor | 10 (heavyweight lock) heavyweight lock pointer | - |
| Succeeded (locked) | 10 (heavyweight lock) heavyweight lock pointer | - |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | - |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Accessing synchronized block, acquiring Monitor |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| Executing synchronized block | 10 (heavyweight lock) heavyweight lock pointer | Spin retry |
| - | … | … |
• Spinning consumes CPU time. On a single-core CPU spinning is pure waste, while on a multi-core CPU it can pay off.
• Since Java 6, spinning is adaptive: for example, if a recent spin on the same object succeeded, the JVM assumes this spin is also likely to succeed and allows more spin iterations; otherwise it spins fewer times or not at all. In short, it is quite smart.
• Since Java 7, users can no longer control whether spinning is enabled.
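The following conceptual sketch (all names are made up; the JVM's adaptive spinning is internal and not exposed to Java code) shows the spin-then-block idea: try to acquire the lock a bounded number of times before giving up and blocking.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Conceptual sketch: spin a bounded number of times hoping the owner releases
// the lock soon; only park (block) if spinning does not succeed.
public class SpinThenBlockSketch {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        for (int i = 0; i < 100; i++) {            // bounded spin phase
            if (locked.compareAndSet(false, true)) {
                return;                            // spin succeeded: no blocking needed
            }
            Thread.onSpinWait();                   // Java 9+: hint that we are busy-waiting
        }
        while (!locked.compareAndSet(false, true)) {
            LockSupport.parkNanos(1_000_000);      // crude fallback: block briefly, then retry
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```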
Biased Locking
Even when there is no contention (only one thread ever uses the lock), a lightweight lock still has to perform a CAS operation on each reentry.
Java 6 introduced biased locking as a further optimization: only the first acquisition uses CAS to set the thread ID into the object's Mark Word. Afterwards, as long as the thread finds the thread ID in the header is its own, there is no contention and no further CAS is needed; the object effectively belongs to that thread until contention appears.
Note:
Since Java 15, biased locking is deprecated and disabled by default; to use it, add the -XX:+UseBiasedLocking startup parameter.
Even with biased locking enabled, there is a delayed-activation mechanism. During startup the JVM performs a series of complex activities, such as loading configuration and initializing system classes, and this process uses the synchronized keyword heavily on objects whose locks are mostly not biased. To reduce initialization time, the JVM delays the activation of biased locking by default. The delay is roughly 4 seconds, though it varies between machines. It can be removed with the JVM parameter -XX:BiasedLockingStartupDelay=0.
For example:
static final Object obj = new Object();
public static void m1() {
synchronized (obj) { // Synchronization block A
m2();
}
}
public static void m2() {
synchronized (obj) { // Synchronization block B
m3();
}
}
public static void m3() {
synchronized (obj) {
}
}
With biased locking disabled, lightweight locks are used:
Figure 10
With biased locking enabled, biased locks are used:
Figure 11
Biased state
Recall the object header format
| Mark Word (32 bits) | State |
| --- | --- |
| identity_hashcode:25 \| age:4 \| biased_lock:1 \| lock:2 | Normal |
| thread:23 \| epoch:2 \| age:4 \| biased_lock:1 \| lock:2 | Biased |
| ptr_to_lock_record:30 \| lock:2 | Lightweight Locked |
| ptr_to_heavyweight_monitor:30 \| lock:2 | Heavyweight Locked |
| (unused) \| lock:2 | Marked for GC |
When an object is created:
• If biased locks are enabled (enabled by default), after the object is created, the Mark Word value is 0x05, which means the last 3 bits are 101, at this time its thread, epoch, and age are all 0
• If biased locks are not enabled, after the object is created, the Mark Word value is 0x01, which means the last 3 bits are 001, at this time its hashcode and age are both 0, and the value will be assigned only when hashcode is used for the first time
Let's verify this using the JOL third-party tool; the object-header output printed by the tool has been post-processed to make it easier to read.
Test code
private static Object object = new Object();

public synchronized static void main(String[] args) {
    // toSimplePrintable is a helper that formats the object-header output of JOL
    log.info("{}", toSimplePrintable(object));
}
Under the condition of enabling biased locks
The printed data is as follows (since biased locking has been deprecated since Java 15, enabling it causes a warning to be printed)
17:15:17 [main] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000101
The last three bits are 101 and all other bits are 0, which verifies the first point above.
You may ask: synchronized has not been used on this object yet, so shouldn't it be in the unlocked state? Why does it look like a biased lock?
Look closely at the layout of the biased state and compare it with the output above: the bits occupied by thread and epoch are all 0, which means the lock is not yet biased towards any thread. At this point the object is in a biasable state, ready to be biased; you can also think of it as a special kind of unlocked state.
The situation where biased locking is turned off
The printed data is as follows
17:18:32 [main] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
The last three bits are 001 and all other bits are 0, which verifies the second point above.
Next, let's verify the locking situation, the code is as follows:
private static Object object = new Object();

public synchronized static void main(String[] args) {
    new Thread(() -> {
        log.info("{}", "synchronized before");
        log.info("{}", toSimplePrintable(object));
        synchronized (object) {
            log.info("{}", "synchronized in");
            log.info("{}", toSimplePrintable(object));
        }
        log.info("{}", "synchronized after");
        log.info("{}", toSimplePrintable(object));
    }, "t1").start();
}
With biased locking enabled, the printed data is as follows
17:24:05 [t1] c.MyClass03 - before synchronized
17:24:05 [t1] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000101
17:24:05 [t1] c.MyClass03 - within synchronized
17:24:05 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 00001110 00000111 01001000 00000101
17:24:05 [t1] c.MyClass03 - after synchronized
17:24:05 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 00001110 00000111 01001000 00000101
A biased lock is used and the thread ID (the bits before the trailing 101) is recorded. Note that even after the biased-locked object is unlocked, the thread ID remains stored in the object header.
With biased locking disabled, the printed data is as follows
17:28:24 [t1] c.MyClass03 - before synchronized
17:28:24 [t1] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
17:28:24 [t1] c.MyClass03 - within synchronized
17:28:24 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 01110000 00100100 10101001 01100000
17:28:24 [t1] c.MyClass03 - synchronized after
17:28:24 [t1] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
A lightweight lock is used (the trailing bits are 000) and the address of the lock record is stored (the bits before the trailing 000); after the synchronized block ends the header is restored to its original state (the hashcode bits are still 0 because hashCode has not been called yet).
Bias Lock Revocation
Before explaining bias revocation in detail, one concept needs to be clarified: the revocation of a biased lock and the release of a biased lock are two different things.
• Revocation: broadly speaking, when multiple threads compete for the lock, the lock object is told that it can no longer be used in biased mode
• Release: as you might expect, it corresponds to exiting a synchronized method or reaching the end of a synchronized block
What is bias revocation?
Revocation means reverting from the biased state back to the original state, i.e., changing the biased_lock bit (the third-lowest bit of the Mark Word) from 1 to 0.
If only one thread ever acquires the lock, the bias mechanism works exactly as intended and there is no reason to revoke it; therefore bias revocation happens under contention (and in a few other cases, described below).
Revocation - hashcode call
Calling an object's hashCode() causes its biased lock to be revoked, because the biased Mark Word has no room left to store the hash code:
• Lightweight locks record hashcode in lock record
• Heavyweight locks record hashcode in Monitor
The test code is as follows
private static Object object = new Object();

public synchronized static void main(String[] args) {
    object.hashCode(); // calling hashCode revokes the bias
    new Thread(() -> {
        log.info("{}", "synchronized before");
        log.info("{}", toSimplePrintable(object));
        synchronized (object) {
            log.info("{}", "synchronized in");
            log.info("{}", toSimplePrintable(object));
        }
        log.info("{}", "synchronized after");
        log.info("{}", toSimplePrintable(object));
    }, "t1").start();
}
The output is as follows:
17:36:05 [t1] c.MyClass03 - before synchronized
17:36:06 [t1] c.MyClass03 - 00000000 00000000 00000000 01011111 00100001 00001000 10110101 00000001
17:36:06 [t1] c.MyClass03 - in synchronized
17:36:06 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 01101110 00010011 11101001 01100000
17:36:06 [t1] c.MyClass03 - after synchronized
17:36:06 [t1] c.MyClass03 - 00000000 00000000 00000000 01011111 00100001 00001000 10110101 00000001
Revocation - other threads use the object
When other threads use the biased lock object, the biased lock will be upgraded to a lightweight lock.
The test code is as follows
private static void test2() {
Thread t1 = new Thread(() -> {
synchronized (object) {
log.info("{}", toSimplePrintable(object));
}
synchronized (MyClass03.class) {
MyClass03.class.notify(); // notify t2 so that it starts using the lock only after t1 is done
}
}, "t1");
t1.start();
Thread t2 = new Thread(() -> {
synchronized (MyClass03.class) {
try {
MyClass03.class.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
log.info("{}", toSimplePrintable(object));
synchronized (object) {
log.info("{}", toSimplePrintable(object));
}
log.info("{}", toSimplePrintable(object));
}, "t2");
t2.start();
}
The data printed is as follows
17:51:38 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 01000111 00000000 11101000 00000101
17:51:38 [t2] c.MyClass03 - 00000000 00000000 00000000 00000001 01000111 00000000 11101000 00000101
17:51:38 [t2] c.MyClass03 - 00000000 00000000 00000000 00000001 01111000 00100000 01101001 01010000
17:51:38 [t2] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
It can be seen that thread t1 uses a biased lock, and before t2 locks, the header still shows the same biased state. But once t2 uses the lock, it is upgraded to a lightweight lock, and after the synchronized block finishes the header returns to the normal unlocked state, i.e., the bias has been revoked.
Revocation - calling wait/notify
The code is as follows
private static void test3(){
Thread t1 = new Thread(() -> {
log.info("{}", toSimplePrintable(object));
synchronized (object) {
log.info("{}", toSimplePrintable(object));
try {
object.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
log.info("{}", toSimplePrintable(object));
}
}, "t1");
t1.start();
new Thread(() -> {
try {
Thread.sleep(6000);
} catch (InterruptedException e) {
e.printStackTrace();
}
synchronized (object) {
log.debug("notify");
object.notify();
}
}, "t2").start();
}
The data printed is as follows
17:57:57 [t1] c.MyClass03 - 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000101
17:57:57 [t1] c.MyClass03 - 00000000 00000000 00000000 00000001 00001111 00001100 11010000 00000101
17:58:02 [t2] c.MyClass03 - notify
17:58:02 [t1] c.MyClass03 - 00000000 00000000 01100000 00000000 00000011 11000001 10000010 01110010
The wait and notify methods rely on the Monitor, so using them causes the lock to be upgraded from a biased lock to a heavyweight lock.
Batch Rebias
If objects are accessed by multiple threads but without contention, an object biased towards thread t1 still has the chance to be rebiased towards thread t2. Rebiasing resets the object's thread ID.
When bias revocations for objects of the same class exceed the threshold of 20, the JVM starts to wonder whether it biased towards the wrong thread, and when those objects are locked again it rebiases them to the thread that is currently locking them.
The code is as follows
public static class Dog{}
private static void test4() {
Vector<Dog> list = new Vector<>();
Thread t1 = new Thread(() -> {
for (int i = 0; i < 30; i++) {
Dog d = new Dog();
list.add(d);
synchronized (d) {
log.info("{}", i+"\t"+toSimplePrintable(d));
}
}
synchronized (list) {
list.notify();
}
}, "t1");
t1.start();
Thread t2 = new Thread(() -> {
synchronized (list) {
try {
list.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
log.debug("===============> ");
for (int i = 0; i < 30; i++) {
Dog d = list.get(i);
log.info("{}", i+"\t"+toSimplePrintable(d));
synchronized (d) {
log.info("{}", i+"\t"+toSimplePrintable(d));
}
log.info("{}", i+"\t"+toSimplePrintable(d));
}
}, "t2");
t2.start();
}
Print as follows
Figure 12
In addition, during testing I found that when the lock object is a regular class (such as Dog), the rebias threshold is 20, i.e., rebiasing starts from the 21st biased lock. But if the regular class is replaced with Object, the rebias threshold becomes 9, i.e., rebiasing starts from the 10th biased lock (as shown in Figure 13). I have not figured out why; readers who know are welcome to discuss it in the comments.
Figure 13
Batch revocation
When bias revocations exceed the threshold of 40, the JVM concludes that it was indeed wrong to bias and that the class should not have been biased at all. All objects of the entire class then become non-biasable, and newly created objects of that class are non-biasable as well.
The code is as follows
static Thread t1, t2, t3;
private static void test6() throws InterruptedException {
Vector<Dog> list = new Vector<>();
int loopNumber = 40;
t1 = new Thread(() -> {
for (int i = 0; i < loopNumber; i++) {
Dog d = new Dog();
list.add(d);
synchronized (d) {
log.info("{}", i + "\t" + toSimplePrintable(d));
}
}
LockSupport.unpark(t2);
}, "t1");
t1.start();
t2 = new Thread(() -> {
LockSupport.park();
log.debug("===============> ");
for (int i = 0; i < loopNumber; i++) {
Dog d = list.get(i);
log.info("{}", i + "\t" + toSimplePrintable(d));
synchronized (d) {
log.info("{}", i + "\t" + toSimplePrintable(d));
}
log.info("{}", i + "\t" + toSimplePrintable(d));
}
LockSupport.unpark(t3);
}, "t2");
t2.start();
t3 = new Thread(() -> {
LockSupport.park();
log.debug("===============> ");
for (int i = 0; i < loopNumber; i++) {
Dog d = list.get(i);
log.info("{}", i + "\t" + toSimplePrintable(d));
synchronized (d) {
log.info("{}", i + "\t" + toSimplePrintable(d));
}
log.info("{}", i + "\t" + toSimplePrintable(d));
}
}, "t3");
t3.start();
t3.join();
log.info("{}", toSimplePrintable(new Dog()));
}
Print as follows