In general, the cache sits in front of the database: a query checks the cache first and only falls through to the database on a miss. The following problems arise from this relationship between cache and database access.

Cache penetration

Cache penetration refers to querying data that does not exist. Because the queried object is not in the cache (the cache never hits for it), the request falls through to the database; since the database also has no matching record, nothing can be written back to the cache. As a result, every query for this non-existent data goes to the database. This is cache penetration.

Impact:

Under high concurrency, cache penetration can slow down the database, drag down the whole system, and even bring it down.

Solution:

When a lookup misses the cache and the database query also returns no target data, store an empty result in the cache. With this in place, each query first checks whether the key exists in Redis (i.e., exists(String key)); if the key exists, the cached result is returned directly, even if that result is null.
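A minimal sketch of this idea, assuming a Jedis client; the key prefix, TTL values, and the queryDatabase/userToJson/jsonToUser helpers are hypothetical names introduced for illustration only:

```java
import redis.clients.jedis.Jedis;

public class UserCache {
    private static final String NULL_PLACEHOLDER = "";   // marker meaning "no such record"
    private static final int NULL_TTL_SECONDS = 60;      // keep empty results only briefly

    private final Jedis jedis = new Jedis("localhost", 6379);

    public User selectById(String id) {
        String key = "user:" + id;
        if (jedis.exists(key)) {
            String cached = jedis.get(key);
            // An empty placeholder means we already know the record does not exist.
            return NULL_PLACEHOLDER.equals(cached) ? null : jsonToUser(cached);
        }
        User user = queryDatabase(id);                    // hypothetical DB lookup
        if (user == null) {
            // Cache the miss so repeated queries for this id stop hitting the database.
            jedis.setex(key, NULL_TTL_SECONDS, NULL_PLACEHOLDER);
        } else {
            jedis.setex(key, 30 * 60, userToJson(user));  // normal data, longer TTL
        }
        return user;
    }

    // Placeholders for this sketch; real implementations depend on the project.
    private User queryDatabase(String id) { return null; }
    private String userToJson(User u) { return ""; }
    private User jsonToUser(String s) { return null; }
}
```

Giving the empty placeholder a short TTL keeps the cache from being filled with stale "does not exist" entries once the data is eventually created.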

Cache avalanche

A cache avalanche occurs when a large number of cached entries expire within the same period: all of the queries that would have hit the cache are sent to the database during that window, putting much more pressure on it.

Solutions:

* At the database access layer, serialize access with a lock or a queue.
* Analyze how the system actually uses the cache (including user scenarios) and design expiration times that are spread out evenly (see the sketch below).
* Preheat the data: before a foreseeable concurrency peak, update cache entries evenly and in a planned way, so that a large number of entries do not expire during the peak.
* Set business hotspot data to never expire and refresh it only through cache update operations.
* With a distributed cache, spread hotspot data evenly across nodes, so that an avalanche in one cache node does not concentrate the load on a single database.
* Rate limiting (least recommended).

Methods 2, 3, and 4 (uniform expiration times, data preheating, and never-expiring hotspot data) are recommended for preventing and mitigating cache avalanches. Lock- or queue-based access control inevitably costs performance; problems that can be avoided in advance should be avoided as early as possible rather than handled only after they happen.
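As a minimal sketch of the second method, one common approach (assumed here, not prescribed above) is to add a random jitter to each entry's TTL so that keys written at the same moment do not all expire together; the Jedis client and the base/jitter values are illustrative assumptions:

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCacheWriter {
    private static final int BASE_TTL_SECONDS = 30 * 60;   // nominal 30-minute lifetime
    private static final int MAX_JITTER_SECONDS = 5 * 60;  // spread expirations over an extra 5 minutes

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Writes a value with a randomized TTL so bulk-loaded keys do not expire at the same instant. */
    public void put(String key, String value) {
        int ttl = BASE_TTL_SECONDS + ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        jedis.setex(key, ttl, value);
    }
}
```

The jitter range should be chosen from the system's actual traffic pattern: wide enough to flatten the expiration curve, but narrow enough that data does not live much longer than the business allows.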

Reasonable locking

Double-checked locking:

    public User selectById(String id) {
        User user = (User) hash.get(id);
        if (null == user) {
            synchronized (this) {
                // The second cache check here is the key point.
                user = (User) hash.get(id);
                if (null == user) {
                    user = queryDatabase(id);  // hypothetical helper that loads the user from the database
                    hash.put(id, user);        // cache under the queried id, not a hard-coded key
                }
            }
        }
        return user;
    }
Advantage: under high concurrency, when the cache has expired and the hot data has not been preheated, the first thread acquires the lock while the other threads wait. While the first thread is querying the database, a large number of threads pile up behind the lock. Once the first thread has written the query result to the cache, the waiting threads acquire the lock in turn, and then comes the crucial step: checking the cache again keeps these piled-up threads from doing unnecessary database queries.

 
